
Save AI Agent cost with Semantic Prompt Caching

AI Agents are token-hungry, and scaling them can be expensive, very expensive.

In this session, we will talk about how AI Agents can be made less token-intensive with Prompt Caching.

We will then go one level up and introduce Semantic Prompt Caching:
- How does it work in code?
- How to evaluate it?
- What are its challenges?
- A real-world example
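To preview the core idea: instead of matching prompts exactly, a semantic cache embeds each prompt and returns a cached response when a new prompt is similar enough to a previous one, skipping the LLM call entirely. The sketch below is illustrative only, not the code from the session: the bag-of-words embedding, the cosine-similarity threshold of 0.8, and the `SemanticCache` class are all assumptions for demonstration; a production system would use a real embedding model and a vector index.

```python
import math
from collections import Counter

def embed(text):
    # Toy embedding: bag-of-words token counts. Real systems use a
    # learned embedding model; this stand-in keeps the example runnable.
    return Counter(w.strip("?!.,") for w in text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class SemanticCache:
    def __init__(self, threshold=0.8):
        self.threshold = threshold  # similarity needed for a cache hit
        self.entries = []           # list of (embedding, prompt, response)

    def get(self, prompt):
        # Return a cached response if some stored prompt is similar enough.
        emb = embed(prompt)
        best = max(self.entries, key=lambda e: cosine(emb, e[0]), default=None)
        if best and cosine(emb, best[0]) >= self.threshold:
            return best[2]  # cache hit: the LLM call is skipped
        return None         # cache miss: caller invokes the LLM, then put()

    def put(self, prompt, response):
        self.entries.append((embed(prompt), prompt, response))
```

The key design choice is the similarity threshold: set it too low and semantically different prompts get the wrong cached answer; set it too high and near-duplicate prompts still trigger paid LLM calls. Evaluating that trade-off is one of the challenges the session covers.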

All the concepts will be explained in simple language, without any jargon.

Nikhilesh Tayal

Google Developer Expert for AI. Founder of "AI ML etc." (an educational platform for senior IT professionals to learn AI). 70+ speaking engagements.

Udaipur, India

