When Your Metrics Lie: Observability for Nondeterministic AI Systems
Your dashboard shows 99.9% uptime, 200ms latency, zero errors. Your users see confident nonsense. When LLMs become part of your stack, traditional observability breaks down: a successful HTTP response tells you nothing about whether the answer was actually useful.
Problem: AI systems introduce nondeterminism that breaks assumptions baked into our monitoring tools. The same request can produce wildly different outputs. An LLM confidently returning wrong information looks identical to success in your APM. Debugging requires full prompt/completion context that traditional trace storage wasn't designed for.
Solution: I'll share a production observability architecture that answers the question APM tools can't: "Was that response actually good?"
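The gap described above, a 200 OK that says nothing about answer quality, can be illustrated with a minimal sketch of the kind of record such an architecture might store alongside each trace span. All names here (`llm_span`, `quality_score`) are hypothetical illustrations, not the speaker's actual implementation:

```python
import json
import time
import uuid

def llm_span(model, prompt, completion, latency_ms, quality_score=None):
    """Build a span-like record that keeps the full context an APM span drops.

    A traditional trace stores status code and latency; debugging an LLM
    response also needs the exact prompt, the completion, and some signal
    about whether the output was actually good.
    """
    return {
        "span_id": uuid.uuid4().hex,
        "timestamp": time.time(),
        "model": model,
        "prompt": prompt,                # full input, not just a status code
        "completion": completion,        # full output, for later evaluation
        "latency_ms": latency_ms,
        "quality_score": quality_score,  # e.g. from an eval model or user feedback
    }

# Two runs of the same request can produce different records: the latency and
# status look identical, but only the stored completion reveals which was useful.
record = llm_span("example-model", "What is 2+2?", "4", latency_ms=210, quality_score=0.95)
print(json.dumps(record, indent=2))
```

The point of the sketch is that "success" becomes a property of the stored content plus an explicit quality signal, not of the transport layer.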
Doneyli De Jesus
Principal Architect @ ClickHouse
Montréal, Canada