My tokens are limited. You must use the right context: An agent builder's guide
AI models can only process so much information at once. This hard constraint—the context window—limits their ability to make smart, context-aware decisions. The solution isn't waiting for bigger models. It's becoming deliberate about what you feed them.
This talk explores how to give AI agents precise, useful context by treating token budgets like any other scarce resource. As a wise person might say, "An AI only knows what it's shown."
Priority-Based Context Management: Not all context is equal. MoSCoW prioritization (Must/Should/Could/Won't) ensures critical information survives while noise gets discarded. Temporal relevancy scoring means recent context outweighs stale data.
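The combination of MoSCoW tiers and temporal decay described above can be sketched as a simple greedy packer. This is a hypothetical illustration, not code from the talk: the priority weights, the one-hour half-life, and the `ContextItem` fields are all assumptions chosen for the example.

```python
import math
import time
from dataclasses import dataclass

# Assumed MoSCoW weights; "won't" items are always discarded.
PRIORITY_WEIGHT = {"must": 4.0, "should": 2.0, "could": 1.0, "wont": 0.0}

@dataclass
class ContextItem:
    text: str
    priority: str      # "must" | "should" | "could" | "wont"
    timestamp: float   # unix seconds when the item was produced
    tokens: int        # estimated token count

def score(item: ContextItem, now: float, half_life_s: float = 3600.0) -> float:
    """Combine MoSCoW weight with exponential time decay: recent outweighs stale."""
    decay = 0.5 ** ((now - item.timestamp) / half_life_s)
    return PRIORITY_WEIGHT[item.priority] * decay

def pack_context(items: list[ContextItem], budget: int) -> list[ContextItem]:
    """Greedily fill the token budget, highest-scored items first."""
    now = time.time()
    chosen, used = [], 0
    for item in sorted(items, key=lambda i: score(i, now), reverse=True):
        if item.priority == "wont":
            continue  # noise never enters the window
        if used + item.tokens <= budget:
            chosen.append(item)
            used += item.tokens
    return chosen
```

With a 100-token budget, a fresh "must" item and a fresh "should" item fill the window, while a day-old "could" note and any "won't" noise are dropped.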
Smarter Retrieval: Moving beyond basic RAG—exploring open source vector databases, hybrid search strategies, and knowing when similarity alone isn't enough.
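One common hybrid-search strategy hinted at here is to fuse a keyword ranking with a vector-similarity ranking using Reciprocal Rank Fusion (RRF). A minimal sketch, assuming the two ranked lists already come from your search engine and vector database; the `k=60` constant is the value commonly used in the RRF literature:

```python
# Hypothetical sketch: fuse multiple ranked doc-id lists with RRF.
# Each document scores sum(1 / (k + rank)) across the lists it appears in,
# so a document ranked well by both keyword and vector search rises to the top.

def rrf_fuse(rankings: list[list[str]], k: int = 60) -> list[str]:
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)
```

Because RRF only consumes ranks, not raw scores, it sidesteps the problem of BM25 scores and cosine similarities living on incompatible scales.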
Stateful Agents: Building systems with frameworks like LangChain that adjust context on the fly, remember what matters across sessions, and adapt to preferences over time. Two-tier architectures using tools like Redis and PostgreSQL let agents recall history without burning tokens.
You'll leave with practical patterns for building AI agents that work with sharp focus—using only the data that matters.
Ben Gamble
Technology Sommelier, AI Whisperer
Cambridge, United Kingdom