Never Trust a Monkey: From AI Slop to Code You Can Ship
We're in the middle of another leap in abstraction.
Like compilers, cloud, and containers before it, AI coding agents arrived with hype, fear, and broken assumptions. We gave the monkeys GPUs. Sometimes they
output Shakespeare. Other times, they confidently ship code that compiles, passes tests, and still does the wrong thing.
The problem is the gap between what we mean and what actually runs.
This talk delivers a practical framework for working with AI agents, built on three ideas: the Chasm between human intent and the code that actually runs; the Context that replaces guessing with grounding (APIs, conventions, constraints, domain rules); and the Chain that keeps intent alive through a structured flow from prompt to spec to test to code, where every step produces a verifiable artifact validated externally.
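The Chain idea above can be sketched in code. This is a minimal, hypothetical illustration (the `Step`, `produce`, and `validate` names are mine, not from the talk): each stage turns the previous artifact into the next one, and an external check, not the agent's own claim, must pass before the chain continues.

```python
# Hypothetical sketch of the "Chain": every step yields an artifact
# that an external check must validate before the next step runs.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    name: str
    produce: Callable[[str], str]    # turns the previous artifact into the next
    validate: Callable[[str], bool]  # external check, not the model's own word

def run_chain(prompt: str, steps: list[Step]) -> str:
    artifact = prompt
    for step in steps:
        artifact = step.produce(artifact)
        if not step.validate(artifact):
            raise ValueError(f"{step.name}: artifact failed external validation")
    return artifact

# Toy usage: each "produce" stands in for an agent call; each "validate"
# stands in for a linter, test run, or schema check.
chain = [
    Step("spec", lambda p: f"SPEC({p})", lambda a: a.startswith("SPEC")),
    Step("test", lambda s: f"TEST({s})", lambda a: "SPEC" in a),
    Step("code", lambda t: f"CODE({t})", lambda a: a.startswith("CODE")),
]
print(run_chain("add two numbers", chain))  # CODE(TEST(SPEC(add two numbers)))
```

The point of the sketch is the failure mode it prevents: if any step's output cannot be verified by something outside the model, the chain halts instead of silently carrying corrupted intent forward.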
The framework comes from real failure patterns: systems that passed every test, shipped successfully, and still failed to meet intent. Through interactive
demonstrations and honest war stories, Baruch will trace how intent gets lost and build the guardrails that prevent it.
You'll leave with a working model for AI-assisted development where humans own the meaning and machines do the typing.
Trust your context. Trust your guardrails. Never trust a monkey.
This is a keynote-style thought-leadership piece about establishing trust in AI-generated code. It's one of my favorite talks and a heavy hitter, consistently scoring very high at every conference where it has been presented, e.g. #2 overall at JFokus 2026.
Baruch Sadogursky
Member of DevRel Staff, Tessl AI
Nashville, Tennessee, United States