Solutions
A runnable Spring Boot setup that instruments Spring AI with OpenTelemetry and exports traces to a self-hosted Langfuse stack.
A runnable hybrid retrieval setup that merges results from two independent PostgreSQL-based retrieval paths.
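A minimal sketch of what the exporter wiring for this setup might look like, assuming Spring Boot 3.x with Micrometer Tracing and a Langfuse instance on localhost port 3000; the property names and the OTLP endpoint path are assumptions to verify against the versions actually in use:

```properties
# Illustrative only: confirm property names against your Spring Boot version
# and the endpoint path against your Langfuse deployment.
management.tracing.sampling.probability=1.0
management.otlp.tracing.endpoint=http://localhost:3000/api/public/otel/v1/traces
```

Langfuse authenticates OTLP requests with the project's API key pair, so a real deployment also supplies an Authorization header alongside this endpoint.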
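One common way to merge two independent ranked result lists is Reciprocal Rank Fusion; the solution may use a different fusion strategy, so treat this as an illustrative sketch (class and method names are hypothetical, and `k = 60` is the constant from the original RRF paper, not something the solution specifies):

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Reciprocal Rank Fusion: each list contributes 1 / (k + rank) per document,
// so documents ranked highly by both paths rise to the top of the merged list.
public class RrfMerge {
    public static List<String> merge(List<String> a, List<String> b, int k) {
        Map<String, Double> scores = new LinkedHashMap<>();
        for (List<String> ranking : List.of(a, b)) {
            for (int rank = 0; rank < ranking.size(); rank++) {
                scores.merge(ranking.get(rank), 1.0 / (k + rank + 1), Double::sum);
            }
        }
        return scores.entrySet().stream()
                .sorted(Map.Entry.<String, Double>comparingByValue().reversed())
                .map(Map.Entry::getKey)
                .toList();
    }

    public static void main(String[] args) {
        List<String> pathOne = List.of("doc1", "doc2", "doc3");
        List<String> pathTwo = List.of("doc3", "doc1", "doc4");
        System.out.println(merge(pathOne, pathTwo, 60)); // doc1 leads: top-ranked in one path, second in the other
    }
}
```

RRF is attractive here because it needs only ranks, not comparable scores, which matters when the two PostgreSQL paths score results on different scales.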
A runnable event-driven pipeline that enriches Kafka messages using LLM calls with idempotent processing, DLQ handling, and end-to-end tracing.
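The idempotency idea can be sketched independently of Kafka: each message carries a stable idempotency key, and a processed-key store is consulted atomically before the expensive, non-idempotent LLM call runs. The class below is a hypothetical illustration; a real pipeline would back the store durably rather than with an in-memory set.

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// In-memory stand-in for idempotent message processing. The durable store a
// real pipeline would use (database table, Redis set) is replaced by a
// ConcurrentHashMap-backed set for illustration.
public class IdempotentProcessor {
    private final Set<String> processed = ConcurrentHashMap.newKeySet();

    /** Returns true if the message was enriched, false if it was a duplicate. */
    public boolean process(String idempotencyKey, String payload,
                           Function<String, String> enrich) {
        // Set.add is atomic: only the first caller for a given key wins.
        if (!processed.add(idempotencyKey)) {
            return false; // duplicate delivery; skip the LLM call
        }
        enrich.apply(payload); // stands in for the LLM enrichment call
        return true;
    }
}
```

On redelivery (a normal event under Kafka's at-least-once semantics) the second call is a no-op, so the LLM is invoked at most once per key.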
A production-grade LLM proxy that authenticates per-tenant API keys and enforces rate limits and token budgets, with response caching and audit logging.
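A per-tenant token bucket is one standard way to implement the rate-limit part of such a proxy. The sketch below is an assumption about the mechanism, not the proxy's actual code: capacity and refill rate would normally come from tenant configuration, and the buckets would live in a shared store such as Redis rather than process memory.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Token-bucket rate limiter keyed by tenant. Each tenant's bucket refills
// continuously at refillPerSecond up to capacity; a request succeeds only if
// enough tokens remain, so bursts are bounded by capacity.
public class TenantRateLimiter {
    private static final class Bucket {
        double tokens;
        long lastRefillNanos;
        Bucket(double tokens, long now) { this.tokens = tokens; this.lastRefillNanos = now; }
    }

    private final double capacity;
    private final double refillPerSecond;
    private final Map<String, Bucket> buckets = new ConcurrentHashMap<>();

    public TenantRateLimiter(double capacity, double refillPerSecond) {
        this.capacity = capacity;
        this.refillPerSecond = refillPerSecond;
    }

    public synchronized boolean tryAcquire(String tenantId, double cost) {
        long now = System.nanoTime();
        Bucket b = buckets.computeIfAbsent(tenantId, t -> new Bucket(capacity, now));
        // Refill proportionally to elapsed time, capped at capacity.
        double elapsedSec = (now - b.lastRefillNanos) / 1e9;
        b.tokens = Math.min(capacity, b.tokens + elapsedSec * refillPerSecond);
        b.lastRefillNanos = now;
        if (b.tokens < cost) return false;
        b.tokens -= cost;
        return true;
    }
}
```

Using the request's token count as `cost` lets the same bucket double as a token budget rather than a plain request counter.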
A runnable evaluation harness that tests prompts/RAG outputs against golden datasets, computes metrics, and generates CI-friendly reports and evidence packs.
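The core loop of such a harness can be shown with one of the simplest metrics, exact-match accuracy against a golden dataset; field names and the metric choice here are illustrative, and a real harness would add fuzzier metrics (semantic similarity, faithfulness) alongside it.

```java
import java.util.List;
import java.util.Map;

// Computes exact-match accuracy: the fraction of golden cases whose recorded
// model output equals the expected answer character-for-character.
public class GoldenEval {
    record Case(String input, String expected) {}

    public static double exactMatchRate(List<Case> golden, Map<String, String> outputs) {
        if (golden.isEmpty()) return 0.0;
        long hits = golden.stream()
                .filter(c -> c.expected().equals(outputs.get(c.input())))
                .count();
        return (double) hits / golden.size();
    }
}
```

In a CI setting the harness would compare this rate against a threshold and fail the build on regression, writing the per-case results out as the evidence pack.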
A runnable workflow engine for LLM tool-calling with durable run state, retries, idempotency keys, and human-in-the-loop checkpoints.
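The retry side of such an engine can be sketched as a bounded loop with exponential backoff; the limits below are illustrative, and in the real engine the attempt count and run state would be persisted so a crashed worker can resume rather than restart.

```java
import java.util.function.Supplier;

// Retries a workflow step up to maxAttempts times, doubling the delay between
// attempts (base, 2*base, 4*base, ...). The last failure is rethrown if all
// attempts are exhausted.
public class RetryingStep {
    public static <T> T runWithRetry(Supplier<T> step, int maxAttempts, long baseDelayMillis) {
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return step.get();
            } catch (RuntimeException e) {
                last = e;
                if (attempt < maxAttempts) {
                    try {
                        Thread.sleep(baseDelayMillis << (attempt - 1)); // exponential backoff
                    } catch (InterruptedException ie) {
                        Thread.currentThread().interrupt();
                        throw last;
                    }
                }
            }
        }
        throw last;
    }
}
```

Pairing a retry loop like this with the idempotency keys the solution mentions is what makes retries safe: a step that partially completed before failing is not applied twice.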
A runnable recommendation service combining vector similarity with deterministic business rules and explainable ranking.
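One way to make such a ranking explainable is to attach a human-readable reason every time a rule adjusts the vector-similarity score; the rules and weights below are hypothetical placeholders, not the solution's actual business logic.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Combines a vector-similarity score with deterministic rule adjustments,
// recording one reason string per applied rule so the final order can be
// explained to users and auditors.
public class ExplainableRanker {
    record Item(String id, double similarity, boolean inStock, boolean promoted) {}
    record Ranked(String id, double score, List<String> reasons) {}

    public static List<Ranked> rank(List<Item> items) {
        return items.stream().map(it -> {
            double score = it.similarity();
            List<String> reasons = new ArrayList<>(List.of("similarity=" + it.similarity()));
            if (!it.inStock()) { score -= 0.5; reasons.add("out of stock: -0.5"); }
            if (it.promoted()) { score += 0.1; reasons.add("promoted: +0.1"); }
            return new Ranked(it.id(), score, reasons);
        }).sorted(Comparator.comparingDouble(Ranked::score).reversed()).toList();
    }
}
```

Keeping the rule pass separate from retrieval means the deterministic part stays unit-testable even as the embedding model changes underneath it.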
A runnable chat service that streams LLM tokens over SSE, supports cancellation and resume, and persists conversation state safely.
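The resume mechanism can be sketched in isolation: each token event gets a monotonically increasing id per conversation, and on reconnect the client's `Last-Event-ID` header selects which buffered events to replay. The in-memory buffer below is an illustrative stand-in for the durable conversation state the solution persists.

```java
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;

// Per-conversation event buffer for SSE resume: append assigns sequential ids,
// and replayAfter returns everything a reconnecting client missed.
public class SseReplayBuffer {
    record Event(long id, String data) {}

    private final ConcurrentHashMap<String, CopyOnWriteArrayList<Event>> streams =
            new ConcurrentHashMap<>();

    public synchronized long append(String conversationId, String token) {
        var events = streams.computeIfAbsent(conversationId, k -> new CopyOnWriteArrayList<>());
        long id = events.size() + 1; // ids are 1-based and monotonic per conversation
        events.add(new Event(id, token));
        return id;
    }

    /** Events with id greater than lastEventId, i.e. what a resuming client missed. */
    public List<Event> replayAfter(String conversationId, long lastEventId) {
        return streams.getOrDefault(conversationId, new CopyOnWriteArrayList<>())
                .stream().filter(e -> e.id() > lastEventId).toList();
    }
}
```

Browsers send `Last-Event-ID` automatically when an `EventSource` reconnects, which is why per-event ids are the natural key for resume.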
A runnable assistant that converts natural language questions into SQL with schema grounding, read-only enforcement, and full audit logging.
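The read-only gate can be illustrated with a simple statement screen. This keyword check is deliberately naive and only shows the enforcement idea; the real solution would parse the SQL properly and additionally execute it on a read-only database connection, which is the layer that actually guarantees safety.

```java
import java.util.Locale;
import java.util.Set;

// Screens generated SQL before execution: only single SELECT (or WITH...SELECT)
// statements pass, and data-modifying keywords are rejected outright.
public class ReadOnlySqlGuard {
    private static final Set<String> FORBIDDEN = Set.of(
            "insert", "update", "delete", "drop", "alter", "truncate",
            "create", "grant", "revoke", "merge", "copy");

    public static boolean isReadOnly(String sql) {
        String normalized = sql.strip().toLowerCase(Locale.ROOT);
        if (!(normalized.startsWith("select") || normalized.startsWith("with"))) {
            return false; // only queries, never DML or DDL
        }
        if (normalized.contains(";")) {
            return false; // no multi-statement batches
        }
        for (String keyword : FORBIDDEN) {
            // Word-boundary match so column names like "updated_at" don't trip it.
            if (normalized.matches(".*\\b" + keyword + "\\b.*")) return false;
        }
        return true;
    }
}
```

Defense in depth matters here: the screen catches obvious failures early and produces a clean audit-log entry, while the read-only connection catches anything the screen misses.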
A runnable ingestion pipeline that extracts text, deduplicates, redacts PII, generates embeddings, and produces evidence artifacts for compliance and quality.
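The redaction step can be sketched as a masking pass that runs before embedding. The pattern below covers only e-mail addresses and is intentionally simple; a production pipeline would use a dedicated PII detector and cover many more entity types, so treat this as an assumption about shape, not the solution's detector.

```java
import java.util.regex.Pattern;

// Masks e-mail addresses with a stable placeholder token so redacted text is
// still usable for embedding and the redaction is visible in evidence artifacts.
public class PiiRedactor {
    private static final Pattern EMAIL =
            Pattern.compile("[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\\.[A-Za-z]{2,}");

    public static String redact(String text) {
        return EMAIL.matcher(text).replaceAll("[REDACTED_EMAIL]");
    }
}
```

Redacting before embedding, rather than after retrieval, keeps PII out of the vector store entirely, which is usually the compliance requirement.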