The Right 300 Tokens Beat 100k Noisy Ones: Four Context Antipatterns That Kill Your AI Agent

Your agent has 100k tokens of context. It still forgets what you told it two messages ago. Context engineering treats what your agent knows as an architecture decision — one you can design, test, and version.

This talk dissects four antipatterns: the Stuffed Prompt (cramming everything into the system prompt), the Wrong Tool for the Job (retrieval when rules suffice), the Goldfish Agent (no memory across sessions), and Vibes Eval (judging quality by gut feel). For each, we'll diagnose the failure, show the fix, and demonstrate the difference live with a coding agent.

You'll leave with four checks you can run on your own agent Monday morning, and a decision framework for context architecture that doesn't require a PhD in prompt engineering.


Debuted at QCon London 2026 to 99% green votes (highest-rated in track). Includes live coding agent demo. Works as 45-min session or 30-min condensed. Solo talk.

Baruch Sadogursky

Member of DevRel Staff, Tessl AI

Nashville, Tennessee, United States
