Solutions

Engineering-first solutions with verified execution evidence.
10 solutions found

This solution provides a runnable Spring Boot setup that instruments Spring AI with OpenTelemetry and exports traces to a self-hosted Langfuse stack.

Verified evidence included

This solution implements hybrid retrieval by combining two independent PostgreSQL-based retrieval paths.

Verified evidence included
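
A common way to fuse two independently ranked retrieval paths (for example PostgreSQL full-text search and pgvector similarity; the source does not name the two paths, so that pairing is an assumption) is Reciprocal Rank Fusion. A minimal sketch:

```python
def rrf_merge(ranked_lists, k=60):
    """Merge independently ranked doc-id lists with Reciprocal Rank Fusion.

    Each document scores the sum of 1/(k + rank) over every list it
    appears in; k dampens the dominance of top-ranked positions.
    """
    scores = {}
    for results in ranked_lists:
        for rank, doc_id in enumerate(results, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Highest fused score first; ties keep first-seen order (stable sort).
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical outputs of a full-text path and a vector path:
fused = rrf_merge([["a", "b", "c"], ["b", "a", "d"]])
```

Documents ranked highly by both paths ("a" and "b" here) rise to the top even though neither path agreed on a single winner.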

A runnable event-driven pipeline that enriches Kafka messages using LLM calls with idempotent processing, DLQ handling, and end-to-end tracing.

Verified evidence included
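
The core consumer loop of such a pipeline can be sketched with in-memory stand-ins: `processed_ids` plays the role of a durable idempotency store and `dead` the dead-letter topic (both assumptions of this sketch, not the solution's actual components):

```python
def fake_llm_enrich(payload):
    """Stand-in for an LLM call; fails on one payload to exercise the DLQ."""
    if payload == "boom":
        raise ValueError("model refused")
    return payload.upper()

def process_batch(messages, enrich, processed_ids, dlq):
    """Enrich each message exactly once; route failures to a dead-letter queue."""
    enriched = []
    for msg in messages:
        if msg["id"] in processed_ids:            # duplicate delivery: skip
            continue
        try:
            enriched.append({**msg, "enrichment": enrich(msg["payload"])})
            processed_ids.add(msg["id"])          # mark done only after success
        except Exception as exc:
            dlq.append({**msg, "error": str(exc)})  # park it, keep consuming
    return enriched

seen, dead = set(), []
batch = [{"id": 1, "payload": "hello"},
         {"id": 1, "payload": "hello"},   # redelivery of the same message
         {"id": 2, "payload": "boom"}]
out = process_batch(batch, fake_llm_enrich, seen, dead)
```

Note that the failed message's id is deliberately not marked processed, so a fixed version of it can be replayed from the DLQ later.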

A production-grade LLM proxy that enforces per-tenant API keys, rate limits, token budgets, caching, and audit logging.

Verified evidence included
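
The rate-limit and token-budget checks at the heart of such a proxy can be sketched in memory (a real proxy would back this with Redis or a database so limits survive restarts; class and method names here are illustrative, not the solution's API):

```python
import time

class TenantLimiter:
    """Per-tenant sliding-window request limit plus a token budget."""

    def __init__(self, max_requests_per_window, token_budget, window_s=60):
        self.max_requests = max_requests_per_window
        self.window_s = window_s
        self.default_budget = token_budget
        self.budget = {}       # tenant -> remaining tokens
        self.requests = {}     # tenant -> request timestamps in window

    def allow(self, tenant, tokens, now=None):
        now = time.monotonic() if now is None else now
        # Drop timestamps that fell out of the sliding window.
        stamps = [t for t in self.requests.get(tenant, []) if now - t < self.window_s]
        if len(stamps) >= self.max_requests:
            return False, "rate_limited"
        remaining = self.budget.setdefault(tenant, self.default_budget)
        if tokens > remaining:
            return False, "budget_exhausted"
        stamps.append(now)
        self.requests[tenant] = stamps
        self.budget[tenant] = remaining - tokens
        return True, "ok"

limiter = TenantLimiter(max_requests_per_window=2, token_budget=100)
```

Rejections carry a reason string so the proxy can return a distinct HTTP status (and audit-log entry) for rate limiting versus budget exhaustion.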

A runnable evaluation harness that tests prompts/RAG outputs against golden datasets, computes metrics, and generates CI-friendly reports and evidence packs.

Verified evidence included
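
The shape of such a harness can be sketched as a loop over a golden dataset that scores each prediction and aggregates into a report; exact match stands in here for the harness's real scorers, which the source does not enumerate:

```python
def evaluate(golden, predict):
    """Run `predict` over a golden dataset and compute simple metrics.

    `golden` is a list of {"input", "expected"} cases; the returned
    report is a plain dict, easy to serialize for a CI artifact.
    """
    rows, exact = [], 0
    for case in golden:
        output = predict(case["input"])
        hit = output.strip().lower() == case["expected"].strip().lower()
        exact += hit
        rows.append({"input": case["input"], "output": output, "pass": hit})
    return {"exact_match": exact / len(golden), "rows": rows}

golden = [{"input": "2+2", "expected": "4"},
          {"input": "capital of France", "expected": "Paris"}]
# A deliberately half-right stand-in for the model under test:
report = evaluate(golden, lambda q: "4" if q == "2+2" else "Lyon")
```

A CI job would fail the build when `report["exact_match"]` drops below a threshold, and attach the per-row table as the evidence pack.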

A runnable workflow engine for LLM tool-calling with durable run state, retries, idempotency keys, and human-in-the-loop checkpoints.

Verified evidence included
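
The interplay of durable state, retries, and idempotent replay can be sketched as a single step executor; the `state` dict stands in for the engine's durable run record, which a real engine would persist after every transition:

```python
def run_step(state, step_name, fn, max_retries=3):
    """Run one workflow step with bounded retries and idempotent replay."""
    if step_name in state["completed"]:          # replay: return checkpoint
        return state["completed"][step_name]
    last_err = None
    for _ in range(max_retries):
        try:
            result = fn()
            state["completed"][step_name] = result   # checkpoint on success
            return result
        except Exception as exc:
            last_err = exc
    raise RuntimeError(f"step {step_name!r} exhausted retries") from last_err

calls = {"n": 0}
def flaky_tool():
    """Fails twice, then succeeds, simulating a transient tool error."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("transient")
    return "done"

state = {"completed": {}}
first = run_step(state, "call_tool", flaky_tool)
replay = run_step(state, "call_tool", flaky_tool)   # no re-execution
```

Because the checkpoint is written only after success, a crash mid-step replays the step, while a completed step is never executed twice; a human-in-the-loop checkpoint is just a step whose `fn` blocks until approval is recorded.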

A runnable recommendation service combining vector similarity with deterministic business rules and explainable ranking.

Verified evidence included
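
One way to combine the two signals while keeping the ranking explainable is to record every score contribution per item; the rule shape and field names below are assumptions for the sketch:

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def rank(query_vec, items, rules):
    """Score items by vector similarity plus deterministic rule boosts,
    keeping a per-item explanation of every contribution."""
    ranked = []
    for item in items:
        sim = cosine(query_vec, item["vec"])
        score, reasons = sim, [f"similarity={sim:.2f}"]
        for name, predicate, boost in rules:
            if predicate(item):
                score += boost
                reasons.append(f"{name}:+{boost}")
        ranked.append({"id": item["id"], "score": score, "why": reasons})
    return sorted(ranked, key=lambda r: r["score"], reverse=True)

items = [{"id": "x", "vec": [1, 0], "in_stock": False},
         {"id": "y", "vec": [0.9, 0.1], "in_stock": True}]
rules = [("in_stock", lambda it: it["in_stock"], 0.2)]
results = rank([1, 0], items, rules)
```

Here "y" outranks the more similar "x" because the in-stock rule boosts it, and the `why` list makes that trade-off visible to the caller.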

A runnable chat service that streams LLM tokens over SSE, supports cancellation and resume, and persists conversation state safely.

Verified evidence included
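
The cancel-and-resume mechanics can be sketched independently of any web framework: give every token an offset, in the spirit of SSE's `Last-Event-ID`, so a reconnecting client can ask for everything past the last offset it saw (the function and parameter names are illustrative):

```python
def stream_tokens(tokens, start_offset=0, cancelled=lambda: False):
    """Yield (offset, token) pairs so a client can resume after a drop."""
    for offset, token in enumerate(tokens):
        if offset < start_offset:
            continue                      # already delivered before the drop
        if cancelled():
            return                        # client hit "stop generating"
        yield offset, token

tokens = ["Hel", "lo", " wor", "ld"]
full = list(stream_tokens(tokens))                      # uninterrupted stream
resumed = list(stream_tokens(tokens, start_offset=2))   # reconnect after offset 1
```

For resume to work across restarts, the token buffer (or the persisted conversation state it is rebuilt from) must outlive the HTTP connection, which is why the service persists conversation state rather than keeping it per-socket.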

A runnable assistant that converts natural language questions into SQL with schema grounding, read-only enforcement, and full audit logging.

Verified evidence included
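
A read-only gate for generated SQL can be sketched as a coarse lexical guard; this is a simplification of whatever the solution actually does, and in practice it should sit in front of a database role that is itself read-only (defence in depth):

```python
FORBIDDEN = {"insert", "update", "delete", "drop", "alter",
             "truncate", "grant", "create", "copy"}

def enforce_read_only(sql):
    """Reject anything that is not a single SELECT (or WITH ... SELECT)."""
    statements = [s.strip() for s in sql.split(";") if s.strip()]
    if len(statements) != 1:
        raise ValueError("exactly one statement allowed")
    stmt = statements[0]
    if stmt.split(None, 1)[0].lower() not in ("select", "with"):
        raise ValueError("only SELECT statements are permitted")
    # Catch write keywords smuggled into a CTE or subquery.
    if {tok.strip("(),").lower() for tok in stmt.split()} & FORBIDDEN:
        raise ValueError("write keyword detected")
    return stmt

safe = enforce_read_only("SELECT id, name FROM users WHERE active")
try:
    enforce_read_only("DROP TABLE users")
    blocked = False
except ValueError:
    blocked = True
```

Both the accepted statement and every rejection reason are exactly the kind of event the full audit log would record.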

A runnable ingestion pipeline that extracts text, deduplicates, redacts PII, generates embeddings, and produces evidence artifacts for compliance and quality.

Verified evidence included
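
The pipeline's stages can be sketched end to end with content-hash deduplication, email redaction as a stand-in for the full PII rule set, a pluggable `embed` function, and one evidence row per document (all names and the evidence schema are assumptions of this sketch):

```python
import hashlib
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def ingest(docs, embed):
    """Deduplicate by content hash, redact emails, embed, and record evidence."""
    seen, chunks, evidence = set(), [], []
    for doc in docs:
        digest = hashlib.sha256(doc.encode()).hexdigest()
        if digest in seen:
            evidence.append({"sha256": digest, "status": "duplicate"})
            continue
        seen.add(digest)
        redacted = EMAIL.sub("[REDACTED_EMAIL]", doc)
        chunks.append({"sha256": digest, "text": redacted, "vec": embed(redacted)})
        evidence.append({"sha256": digest, "status": "ingested",
                         "pii_redacted": redacted != doc})
    return chunks, evidence

docs = ["contact bob@example.com for access", "plain text", "plain text"]
# Toy embedder: a real pipeline would call an embedding model here.
chunks, evidence = ingest(docs, embed=lambda t: [float(len(t))])
```

Redacting before embedding matters: it keeps PII out of the vector store as well as out of the stored text, and the evidence rows record that the redaction happened.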