Optimizing RAG with Semantic Caching & LLM Memory - {Redis}
In this session, learn how to improve the cost, speed, and scalability of RAG systems with semantic caching and short- and long-term memory patterns. We will review the basics of semantic caching, some early academic research, and how to improve cache hit accuracy using fine-tuned models. We will also briefly explore novel long-term memory patterns for agentic workflows.
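To make the core idea concrete: a semantic cache stores prior prompts with their responses and serves a cached response when a new prompt is sufficiently similar, rather than requiring an exact string match. The sketch below is a minimal, self-contained illustration of that hit/miss logic; it uses a toy bag-of-words embedding and a cosine-similarity threshold (a real system, such as one built on Redis, would use a proper sentence-embedding model and a vector index — the class and method names here are hypothetical, not Redis's API).

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy embedding: lowercase bag-of-words counts. Purely illustrative;
    # production systems use learned sentence embeddings.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class ToySemanticCache:
    """Serve a cached response when a new prompt is similar enough."""

    def __init__(self, threshold=0.6):
        self.threshold = threshold
        self.entries = []  # list of (embedding, prompt, response)

    def store(self, prompt, response):
        self.entries.append((embed(prompt), prompt, response))

    def check(self, prompt):
        emb = embed(prompt)
        best = max(
            ((cosine(emb, e), p, r) for e, p, r in self.entries),
            default=None,
        )
        if best and best[0] >= self.threshold:
            return best[2]  # cache hit: reuse the stored response
        return None         # cache miss: caller falls through to the LLM

cache = ToySemanticCache(threshold=0.6)
cache.store("what is semantic caching", "It reuses answers for similar queries.")

hit = cache.check("What is semantic caching?")   # near-duplicate phrasing
miss = cache.check("how do I bake bread")        # unrelated query
```

Tightening or loosening `threshold` is the knob the session alludes to: a higher threshold reduces false hits (wrong answers served from cache) at the cost of a lower hit rate, and fine-tuned embedding models shift that trade-off by making similar intents cluster more tightly.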

Tyler Hutcherson
Redis - Lead Applied AI Engineer
Richmond, Virginia, United States