Session
"Smarter, Cheaper AI Agents: Semantic Caching in Production"
AI agents are expensive to scale. A single agentic workflow can involve dozens of LLM calls, and popular reasoning models make every token costly. The classical solution, caching, breaks down for natural language: no two users phrase the same question identically.
Semantic caching solves this by matching on meaning (embedded as vectors) instead of characters. But getting this right in production requires the right threshold, the right eviction strategy, the right accuracy techniques, and the right query routing.
This talk walks through the full engineering stack: how semantic caches work, how to measure them rigorously, four composable techniques to improve accuracy, how to embed caching inside agentic workflows at the sub-question level, and how Walmart's waLLMartCache achieved ~90% accuracy in production across a multi-tenant, globally scaled deployment.
Rama Krishna Raju Samantapudi
Sr. Staff AI/ML Architect at ServiceNow
Austin, Texas, United States