
Engineering Context So AI Stops Guessing

"You're absolutely right," my AI coding agent said, while removing validation logic and confidently explaining why that was an improvement. It agrees when I'm right, agrees when I'm wrong, and will keep agreeing while my production database burns.

The model isn't the problem; it's working blind, deciding based on whatever you happened to shove into its context window. A cheap, small model with good context consistently outperforms the most expensive frontier model without it. Six months from now, the teams that got their context right will be the ones wondering what the fuss over model upgrades was about.

Three kinds of context artifacts actually work: skills (executable instructions), rules (constraints and conventions), and scripts (deterministic operations), all versioned, testable, and shareable. We'll start from real failures caused by missing context and rebuild the missing artifacts live on stage: write an artifact, add an eval, watch it fail, iterate until it passes, then publish and install it on a fresh agent. The craft comes down to four things: expertise (encode what you actually know, not vague instructions), feedback loops (measure with evals, iterate), repeatability (version so behavior doesn't drift), and distribution (package so every agent on the team gets the same knowledge).

New talk for 2026 — built from patterns validated across 190+ conference deliveries. The four engineering principles come from the Arc of AI 2026 keynote. Includes live artifact creation and eval demo. Works as 45-min session, keynote, or 90-min hands-on workshop. Solo talk.

Baruch Sadogursky

Member of DevRel Staff, Tessl AI

Nashville, Tennessee, United States
