Verified, runnable AI solutions - with source code
Each solution includes an implementation you can run, plus evidence (screenshots/logs) proving it works. Subscribe to unlock full solution details and downloads.
- Clear architecture + run instructions
- Downloadable source code (zip)
- Evidence screenshots/logs for verification
- Full details and updates for all solutions
Latest solutions
Agentic Workflows in Spring Boot: Tool Calling, Idempotency, and Durable Runs
Verified: A runnable workflow engine for LLM tool-calling with durable run state, retries, idempotency keys, and human-in-the-loop checkpoints.
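The idempotency-key mechanism this solution names can be sketched in a few lines (the class and key format below are illustrative, not the solution's actual code): a completed tool call is cached under its key, so a retried workflow step returns the stored result instead of repeating the side effect.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Minimal sketch of idempotent tool execution: each tool call carries an
// idempotency key; computeIfAbsent runs the tool at most once per key, and
// a retried step gets the cached result instead of re-running the side effect.
public class IdempotentExecutor {
    private final Map<String, String> completed = new ConcurrentHashMap<>();

    public String execute(String idempotencyKey, Supplier<String> toolCall) {
        return completed.computeIfAbsent(idempotencyKey, k -> toolCall.get());
    }

    public static void main(String[] args) {
        IdempotentExecutor exec = new IdempotentExecutor();
        int[] calls = {0};
        Supplier<String> sendEmail = () -> { calls[0]++; return "sent"; };
        exec.execute("run-42:step-1", sendEmail);
        exec.execute("run-42:step-1", sendEmail); // retry: no second side effect
        System.out.println("calls=" + calls[0]); // prints calls=1
    }
}
```

A durable version would back the map with a database table so keys survive restarts, which is what "durable runs" implies.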
LLM Eval Harness for Spring Boot: Golden Sets, Regression Tests, and CI Reports
Verified: A runnable evaluation harness that tests prompts/RAG outputs against golden datasets, computes metrics, and generates CI-friendly reports and evidence packs.
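The simplest metric such a harness computes can be sketched as follows (class and method names are illustrative): exact-match accuracy of model outputs against a golden answer set. Real harnesses layer fuzzier metrics on top, but the "outputs vs. golden set, then a score for CI" shape is the same.

```java
import java.util.List;

// Sketch of a golden-set metric: normalized exact-match accuracy of model
// outputs against expected answers. A CI gate can then fail the build when
// the rate drops below a threshold (regression testing).
public class GoldenSetEval {
    public static double exactMatchRate(List<String> outputs, List<String> golden) {
        if (outputs.size() != golden.size())
            throw new IllegalArgumentException("outputs and golden set differ in size");
        int hits = 0;
        for (int i = 0; i < outputs.size(); i++) {
            if (outputs.get(i).strip().equalsIgnoreCase(golden.get(i).strip())) hits++;
        }
        return (double) hits / outputs.size();
    }
}
```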
Secure Document Ingestion for RAG: Chunking, Deduplication, and PII Redaction
Verified: A runnable ingestion pipeline that extracts text, deduplicates, redacts PII, generates embeddings, and produces evidence artifacts for compliance and quality.
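The deduplication step can be sketched as content hashing (illustrative code, not the solution's implementation): each chunk is normalized and hashed with SHA-256, and a chunk whose hash has already been seen is dropped before embedding.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.*;

// Sketch of content-hash deduplication for an ingestion pipeline: chunks are
// normalized (trimmed, lowercased), hashed, and duplicates are skipped so the
// same text is never embedded twice.
public class ChunkDeduper {
    private final Set<String> seen = new HashSet<>();

    static String hash(String text) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            byte[] digest = md.digest(
                text.strip().toLowerCase().getBytes(StandardCharsets.UTF_8));
            StringBuilder sb = new StringBuilder();
            for (byte b : digest) sb.append(String.format("%02x", b));
            return sb.toString();
        } catch (Exception e) { throw new RuntimeException(e); }
    }

    public List<String> dedupe(List<String> chunks) {
        List<String> out = new ArrayList<>();
        for (String c : chunks) {
            if (seen.add(hash(c))) out.add(c); // add() returns false for duplicates
        }
        return out;
    }
}
```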
Vector-based Recommendations in Spring Boot: pgvector Similarity + Business Rules
Verified: A runnable recommendation service combining vector similarity with deterministic business rules and explainable ranking.
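Combining similarity with deterministic rules can be sketched like this (the in-stock boost and all names are hypothetical): each candidate gets a cosine-similarity score, a business rule adds a fixed boost, and the final ordering follows the combined score.

```java
import java.util.Comparator;
import java.util.List;

// Sketch of vector similarity plus a deterministic business rule: candidates
// are scored by cosine similarity to the query embedding, in-stock items get
// a fixed boost, and items are ranked by the combined score.
public class RuleAwareRanker {
    record Item(String id, double[] embedding, boolean inStock) {}

    static double cosine(double[] a, double[] b) {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i]; na += a[i] * a[i]; nb += b[i] * b[i];
        }
        return dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    static List<String> rank(double[] query, List<Item> items, double inStockBoost) {
        return items.stream()
            .sorted(Comparator.comparingDouble(
                (Item it) -> cosine(query, it.embedding) + (it.inStock ? inStockBoost : 0))
                .reversed())
            .map(Item::id)
            .toList();
    }
}
```

In production the similarity part would be a pgvector query (e.g. the `<=>` distance operator) rather than in-memory cosine; keeping the two score components separate is what makes the ranking explainable.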
Kafka Data Enrichment with LLM: Idempotent Consumers, DLQ, and Tracing
Verified: A runnable event-driven pipeline that enriches Kafka messages using LLM calls with idempotent processing, DLQ handling, and end-to-end tracing.
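The DLQ handling can be sketched as retry-then-park (Kafka wiring omitted; the class is illustrative): an enrichment attempt is retried a bounded number of times, and on final failure the message is routed to a dead-letter destination instead of blocking the partition.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Sketch of DLQ handling for an enrichment consumer: each message gets up to
// maxAttempts tries; once retries are exhausted it is parked in a dead-letter
// list (standing in for a real dead-letter topic) so consumption can continue.
public class DlqProcessor {
    final List<String> deadLetters = new ArrayList<>();
    private final int maxAttempts;

    DlqProcessor(int maxAttempts) { this.maxAttempts = maxAttempts; }

    void process(String message, Consumer<String> enrich) {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                enrich.accept(message);
                return; // success: nothing more to do
            } catch (RuntimeException e) {
                // log and retry (a real pipeline would back off here)
            }
        }
        deadLetters.add(message); // exhausted retries: park in DLQ
    }
}
```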
SQL Assistant for Spring Boot: Guardrails, Read-only Enforcement, and Audit Logs
Verified: A runnable assistant that converts natural language questions into SQL with schema grounding, read-only enforcement, and full audit logging.
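A minimal read-only guardrail can be sketched as follows (illustrative only; a production guard should parse the SQL rather than pattern-match, and should also run against a read-only database role): only a single SELECT or WITH...SELECT statement is admitted, and write keywords or statement chaining are rejected.

```java
import java.util.Locale;
import java.util.regex.Pattern;

// Sketch of a read-only guardrail for LLM-generated SQL: reject anything that
// is not a single SELECT (or WITH ... SELECT) statement, and reject statements
// containing write/DDL keywords. Defense in depth, not a complete parser.
public class ReadOnlyGuard {
    private static final Pattern FORBIDDEN = Pattern.compile(
        "\\b(insert|update|delete|drop|alter|truncate|create|grant|copy)\\b");

    public static boolean isReadOnly(String sql) {
        String s = sql.strip().toLowerCase(Locale.ROOT);
        if (s.contains(";") && !s.endsWith(";")) return false; // no chained statements
        if (!(s.startsWith("select") || s.startsWith("with"))) return false;
        return !FORBIDDEN.matcher(s).find();
    }
}
```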
Streaming Chat with SSE + “Stop/Resume” + Conversation Memory
Verified: A runnable chat service that streams LLM tokens over SSE, supports cancellation and resume, and persists conversation state safely.
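The resume side of such a stream can be sketched like this (class and method names are hypothetical): every emitted token is appended to a per-conversation buffer, and a reconnecting client sends the index of the last token it received so the server replays only the remainder.

```java
import java.util.*;

// Sketch of resumable token streaming: tokens are buffered per conversation,
// and resumeFrom(lastSeen) replays everything after the client's last-received
// index. An SSE layer would map this onto the Last-Event-ID reconnect header.
public class ResumableStream {
    private final Map<String, List<String>> buffers = new HashMap<>();

    public void emit(String conversationId, String token) {
        buffers.computeIfAbsent(conversationId, k -> new ArrayList<>()).add(token);
    }

    // Replay everything after index `lastSeen` (0 = from the beginning).
    public List<String> resumeFrom(String conversationId, int lastSeen) {
        List<String> buf = buffers.getOrDefault(conversationId, List.of());
        return buf.subList(Math.min(lastSeen, buf.size()), buf.size());
    }
}
```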
LLM Gateway for Spring Boot: Multi-tenant API Keys, Quotas, and Cost Controls
Verified: A production-grade LLM proxy that enforces per-tenant API keys, rate limits, token budgets, caching, and audit logging.
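The token-budget control can be sketched as a per-tenant counter (a simplified in-memory sketch; names and the admission rule are illustrative): a request is admitted only if its estimated token cost still fits the tenant's budget.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Sketch of a per-tenant token budget for an LLM gateway: each tenant shares
// one fixed budget; a request is rejected (and its tentative consumption
// rolled back) if it would exceed the budget.
public class TokenBudget {
    private final Map<String, AtomicLong> used = new ConcurrentHashMap<>();
    private final long budgetPerTenant;

    public TokenBudget(long budgetPerTenant) { this.budgetPerTenant = budgetPerTenant; }

    public boolean tryConsume(String tenant, long estimatedTokens) {
        AtomicLong counter = used.computeIfAbsent(tenant, k -> new AtomicLong());
        long after = counter.addAndGet(estimatedTokens);
        if (after > budgetPerTenant) {
            counter.addAndGet(-estimatedTokens); // roll back, reject request
            return false;
        }
        return true;
    }
}
```

A real gateway would persist counters and reset them per billing window; the add-then-roll-back pattern keeps the check safe under concurrent requests.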
LLM Observability for Spring AI: OpenTelemetry Tracing to Langfuse (Self-Hosted)
Verified: A runnable Spring Boot setup that instruments Spring AI with OpenTelemetry and exports traces to a self-hosted Langfuse stack.
Hybrid Retrieval RAG in PostgreSQL: Keyword + pgvector + Rank Fusion (Spring Boot)
Verified: A runnable Spring Boot implementation of hybrid retrieval that combines two independent PostgreSQL retrieval paths (keyword full-text search and pgvector similarity) and fuses their rankings.
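Fusing two independent rankings is commonly done with reciprocal rank fusion (RRF); whether this solution uses exactly RRF is an assumption, but the technique fits the title and can be sketched in a few lines: each document scores the sum of 1/(k + rank) across the lists it appears in, with k = 60 the conventionally used constant.

```java
import java.util.*;

// Sketch of reciprocal rank fusion (RRF) over ranked result lists, e.g. a
// keyword (full-text) list and a pgvector similarity list. Documents appearing
// high in either list, or moderately high in both, rise to the top.
public class RankFusion {
    public static List<String> fuse(List<List<String>> rankings, int k) {
        Map<String, Double> score = new HashMap<>();
        for (List<String> ranking : rankings) {
            for (int r = 0; r < ranking.size(); r++) {
                // rank is 1-based in the RRF formula: 1 / (k + rank)
                score.merge(ranking.get(r), 1.0 / (k + r + 1), Double::sum);
            }
        }
        return score.entrySet().stream()
            .sorted(Map.Entry.<String, Double>comparingByValue().reversed())
            .map(Map.Entry::getKey)
            .toList();
    }
}
```

RRF needs only ranks, not scores, which is why it works across retrieval paths whose raw scores (BM25-style text rank vs. vector distance) are not comparable.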