Context Engineering - Stop Agents from Choking on Their Own Data
AI agents choke when tools return large outputs: logs, database results, file contents. The data fills the context window, older messages get pushed out, and the agent silently produces incomplete results. Worse, every subsequent operation reprocesses the same data, wasting tokens.
Memory pointers solve both problems: tools store large data outside context and return short references. The LLM orchestrates analysis without ever seeing the raw data.
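The pattern described above can be sketched in a few lines. This is a minimal illustration, not the speaker's actual code: the in-memory dict stands in for whatever external store a real system would use (S3, Redis, a file cache), and the `mem://` key format and 2,000-character threshold are assumptions for the example.

```python
import uuid

# Hypothetical in-memory store standing in for S3, Redis, or a file cache.
_MEMORY_STORE: dict[str, str] = {}

MAX_INLINE_CHARS = 2_000  # assumed threshold above which we return a pointer


def store_large_output(data: str, label: str) -> str:
    """Store tool output outside the context window; return a short pointer."""
    key = f"mem://{label}/{uuid.uuid4().hex[:8]}"
    _MEMORY_STORE[key] = data
    return key


def tool_result(data: str, label: str) -> str:
    """What the tool returns to the LLM: raw data if small, a pointer if large."""
    if len(data) <= MAX_INLINE_CHARS:
        return data  # small enough to inline in context
    key = store_large_output(data, label)
    return f"[stored {len(data):,} chars at {key}] Preview: {data[:200]}..."


def read_slice(key: str, start: int = 0, length: int = 1_000) -> str:
    """Follow-up tool: the agent fetches only the slice it needs."""
    return _MEMORY_STORE[key][start : start + length]


# A huge log never enters the context window; the agent sees ~250 chars.
log = "ERROR timeout\n" * 10_000
msg = tool_result(log, "build-log")
```

The agent then orchestrates analysis by calling `read_slice` (or a search tool over the store) with the pointer, so the full payload is never reprocessed on subsequent turns.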
Join me as I demo this pattern in single-agent and multi-agent configurations. You'll leave with working Python code and production patterns you can deploy immediately.
Elizabeth Fuentes Leone
Developer Advocate
San Francisco, California, United States