Baruch Sadogursky
Member of DevRel Staff, Tessl AI
DevRel Lead
Nashville, Tennessee, United States
Baruch Sadogursky (@jbaruch) did Java before it had generics, DevOps before there was Docker, and DevRel before it had a name. He built DevRel at JFrog from a ten-person company through IPO, co-authored "Liquid Software" and "DevOps Tools for Java Developers," and is a Java Champion, Microsoft MVP, and CNCF Ambassador alumnus.
Today, he's obsessed with how AI agents actually write code. At Tessl, an AI agent enablement platform, Baruch focuses on context engineering, management, and sharing. On top of sharing context with AI agents, Baruch also shares knowledge with developers through blog posts, meetups, and conferences like DevNexus, QCon, Kubecon, and Devoxx, mostly about why vibecoding doesn't scale.
Baruch Sadogursky (@jbaruch) wrote Java before it had generics, talked about DevOps before Docker existed, and did DevRel before it had a name. He founded DevRel at JFrog when the company had 10 people and helped it reach a $6B IPO by helping engineers do their jobs better. Today Baruch keeps helping engineers, and also helps companies help engineers. He co-authored "Liquid Software" and "DevOps Tools for Java Developers", serves on the program committees of several prestigious conferences, and speaks regularly at events such as Kubecon, JavaOne (may it rest in peace), Devoxx, QCon, DevRelCon, DevOpsDays (worldwide), DevOops (not a typo), and more.
Area of Expertise
Topics
Coding Fast and Slow: Applying Kahneman's Insights to Improve Development Practices and Efficiency
How does behavioral psychology connect to coding? This talk explores how understanding and managing your mental energy can transform the way you work. Using accessible research, including Daniel Kahneman’s concepts of “fast” and “slow” thinking, we’ll dive into how different types of thinking impact decision-making and productivity. We’ll also discuss how to conserve mental fuel, so you have the focus and clarity needed for critical tasks—even at the end of a demanding day.
In addition to understanding how our minds work, we’ll talk about practical techniques for managing time and allocating mental resources effectively. This includes strategies to reduce context switching, avoid wasting mental energy on low-priority tasks, and stay focused on what really matters. By using your mental energy wisely, you’ll be able to maintain productivity and avoid burnout.
If you’re interested in learning how to apply behavioral psychology to your workflow, improve time management, and make smarter decisions with less effort, this talk is for you.
Back to the Future of Software: How to Survive the AI Apocalypse with Tests, Prompts, and Specs
Great Scott! The robots are coming for your job—and this time, they brought unit tests. Join Doc and Marty from the Software Future (Baruch and Leonid) as they race back in time to help you fight the machines using only your domain expertise, a well-structured prompt, and a pinch of Gherkin. This keynote is your survival guide for the AI age: how to close the intent-to-prompt chasm before it swallows your roadmap, how to weaponize the Intent Integrity Chain to steer AI output safely, and why the Art of the Possible is your most powerful resistance tool. Expect:
- Bad puns
- Good tests
- Wild demos
The machines may be fast. But with structure, constraint, and a little time travel, you’ll still be the one writing the future.
Codepocalypse Now: LangChain4j vs. Spring AI
Which Java framework handles AI better: LangChain4j or Spring AI? In this live coding showdown, we’ll build a semantic code search application from scratch, putting both frameworks to the test. We’ll cover project setup, using language models, setting up a retrieval-augmented generation workflow, and creating a REST API.
You’ll see how these frameworks handle embedding generation, vector database integration, and real-world development challenges. By the end, the audience decides who wins, based on which framework gets the job done faster, better, and with less hassle (and whether the demo actually works).
It’s a live experiment under pressure! Come and see which one comes out on top!
This is a fun hands-on talk: two speakers, two laptops, two frameworks, one demo. Who will do better? Whose demo will even work? Who will have a better developer experience? The audience will be the judge.
Never Trust a Monkey: From AI Slop to Code You Can Ship
We're in the middle of another leap in abstraction.
Like compilers, cloud, and containers before it, AI coding agents arrived with hype, fear, and broken assumptions. We gave the monkeys GPUs. Sometimes they output Shakespeare. Other times, they confidently ship code that compiles, passes tests, and still does the wrong thing.
The problem is the gap between what we mean and what actually runs.
This talk delivers a practical framework for working with AI agents, built on three ideas: the Chasm between human intent and the code that actually runs, the Context that replaces guessing with grounding (APIs, conventions, constraints, domain rules), and the Chain that keeps intent alive through a structured flow from prompt to spec to test to code, where every step produces a verifiable artifact validated externally.
The framework comes from real failure patterns: systems that passed every test, shipped successfully, and still failed to meet intent. Through interactive demonstrations and honest war stories, Baruch will trace how intent gets lost and build the guardrails that prevent it.
You'll leave with a working model for AI-assisted development where humans own the meaning and machines do the typing.
Trust your context. Trust your guardrails. Never trust a monkey.
This is a "keynote style" thought leadership piece about establishing trust in AI-generated code. It's one of my favorite talks and a heavy-hitter, scoring very high at every conference where it has been presented, e.g. #2 overall at Jfokus 2026.
Technical Enshittification: Why Everything in IT is Horrible Right Now and How to Fix It
Software is a mess. Bloated, sluggish, broken by its own updates. Even basic apps demand absurd computing power. And innovation? If you count shuffling UI elements or slapping a ChatGPT button onto everything, sure.
The common explanation is that engineers got lazy or companies got greedy. The real explanation is simpler: we keep destroying context. Technical debt is lost context. Bloat is unknown context. Wrong features are guessed context. Reorgs are destroyed context. Every horror story in IT, from Sonos shipping a 1.2-star app to Southwest Airlines running on Windows 3.1, traces back to the same failure: context that was never made explicit, got buried, or was actively destroyed.
This talk walks through the wreckage with real cases, leaked whistleblower reports, and research data, then presents practical fixes organized by what you can actually control: your own workflows, your AI tools, your team's practices, and your organization's incentives. Your project doesn't have to follow the Googles and Metas down the enshittification curve.
Let's figure out how to build good software again.
This talk is an entertaining rant with a serious thesis underneath. The first half is real failure cases (Sonos whistleblower data, CrowdStrike/Southwest, LastPass breach timeline) that get the audience laughing and nodding. The second half reframes every example through "context degradation" as a unifying theory and delivers concrete fixes. Profanity is part of the register. Five deliveries so far including JavaLand, JCON, and Voxxed Days; this is the mature version.
RoboCoders: Judgment Day: AI-Assisted Engineering Applied - The Battle of Agents
Agentic AI-assisted engineering tools promise cleaner code, faster development, and fewer late-night debugging sessions. But do they truly deliver?
In this live showdown, Viktor and Baruch will each use a different set of cutting-edge AI coding tools, such as IDEs and CLI agents (we'd name them, but honestly, things move too fast in this space), to develop a non-trivial IoT application, from initial setup to testing and debugging, all live on stage.
Will the IoT bulb turn on by the end of the session, and which tool will make it happen? You don't know, we don't know, but we'll find out together—live on stage.
You, the audience, decide which tool actually improves quality and productivity and which just adds noise instead of useful code. Bring your skepticism, cast your vote, and get ready for surprises.
This is a fun hands-on talk: two speakers, two laptops, two IDEs, one demo. Who will do better? Whose demo will even work? Who will have a better developer experience? The audience will be the judge.
"You're Absolutely Right" and Other Lies My AI Told Me: Engineering Context So AI Stops Guessing
My AI coding agent agrees with me a lot. It agrees when I'm right. It agrees when I'm wrong. It agrees while deciding on its own to remove validation logic, rewrite business rules, and confidently explain why this is an improvement.
This is what coding with AI agents looks like when context lives in the developer's head rather than in the system.
The fix is treating context as an engineering problem. Prompts, rules, and domain knowledge can be packaged into explicit, reusable units and shared across tools and teams through registries and repositories, the same way we already share libraries. When an agent operates within shared context instead of ad hoc conversation, it stops agreeing and starts doing the right thing.
On top of that foundation: when deterministic scripts are the better choice and how to embed them in agent workflows, how to test those reusable context units with something better than 'trust me, bro', and how guardrails like tests and structure help agents fail loudly instead of silently drifting.
If your AI keeps agreeing with you even when it's doing the wrong thing, you're absolutely right: this talk is for you.
Most AI talks focus on what agents can do. This one focuses on what they should be allowed to do, and how developers enforce that through shared context, structure, and guardrails. It's an engineering talk, not a model talk.
The Right 300 Tokens Beat 100k Noisy Ones: Four Context Antipatterns That Kill Your AI Agent
Your agent has 100k tokens of context. It still forgets what you told it two messages ago. Prompt "engineering" taught us to craft the perfect instruction. Context engineering treats everything your agent knows as an engineering problem: what it sees, how it retrieves it, what it remembers, and how you prove any of it works.
This talk dissects four antipatterns killing your AI agents and the architectural fixes that actually work:
* The Stuffed Prompt: You crammed everything upfront and hoped for the best. Static context doesn't scale. Dynamic loading and context refinement, fetching what's needed when it's needed, keeps you within your context window without losing signal. And yes, position matters: models do lose track of what's buried in the middle.
* The Wrong Tool for the Job: You picked one retrieval method and used it everywhere. RAG isn't always the answer. Neither are tools. Neither is an exact match. When do embeddings help, when does MCP give you precision, and when does a simple lookup beat both?
* The Goldfish Agent: Your AI agent forgets everything between sessions. Or worse, remembers everything forever. Short-term and long-term memory, pruning and compaction strategies: what to persist, what to summarize, where to store it, and when to let go.
* The Vibes Eval: You shipped because it "felt right." You can't improve what you don't measure. Eval strategies that prove your context choices work — or expose the tokens you're wasting.
Your context window called. It wants its tokens back.
Bonus: Baruch uses a coding agent to demonstrate these patterns live, so you'll see how they work under the hood — but everything applies to AI agents in general.
This is a live-demo talk. Every antipattern is demonstrated with a real coding agent failing, then fixed on stage. The audience sees the same model go from broken to working by changing context architecture alone. Originally co-presented with Patrick Debois at QCon London 2026; this version is solo.
JCON EUROPE 2026 Sessionize Event Upcoming
Arc of AI 2026 Upcoming
Devnexus 2026 Sessionize Event
Voxxed Days Ticino 2026
Technical Enshittification: Why Everything in IT is Horrible Right Now and How to Fix It
Jfokus 2026 Sessionize Event
Open Conf - 2025 / PANELS and OPEN SPACE ACTIVITIES Sessionize Event
AI by the Bay Sessionize Event
DevFest Toulouse 2025 Sessionize Event
BaselOne 2025 Sessionize Event
AI fokus Sessionize Event
J-Spring 2025 Sessionize Event
Spring I/O 2025 Sessionize Event
JCON EUROPE 2025 Sessionize Event
IJ Internal Conference 2025 Sessionize Event
Devnexus 2025 Sessionize Event
DevOps Vision and MLOps Vision 2024 Sessionize Event
DevOpsDays Tel Aviv 2023 Sessionize Event
BaselOne 2024 Sessionize Event
JConf.dev 2024 Sessionize Event
swampUP 2024 Sessionize Event
KCDC 2024 Sessionize Event
JCON EUROPE 2024 Sessionize Event
Devnexus 2024 Sessionize Event
DeveloperWeek 2024 Sessionize Event
DevRel Experience 2023 Sessionize Event
DevOps Vision 2023 Sessionize Event
Oπen Conf - 2023 Sessionize Event
BaselOne 2023 Sessionize Event
DevOps Days Buffalo 2023 Sessionize Event
Infobip Shift 2023 Sessionize Event
DevOpsDays Birmingham (UK) 2023 Sessionize Event
swampUP 2022 City Tour - Munich Sessionize Event
swampUP 2022 City Tour - London Sessionize Event
swampUP 2022 City Tour - New York City Sessionize Event
Yalla DevOps Tel Aviv 2022 Sessionize Event
JNation 2022 Sessionize Event
swampUP 2022 Sessionize Event
DevOpsDays Austin 2022 Sessionize Event
Cloud Native Kitchen Sessionize Event
DeveloperWeek New York 2020 Sessionize Event
EuropeClouds Summit Sessionize Event
DeveloperWeek Global 2020 Sessionize Event
Azure Day Rome 2020 Sessionize Event
All The Talks Sessionize Event
DevOps Summit Amsterdam 2019 - Two days DevOps experience Sessionize Event
Yalla! DevOps 2019 Sessionize Event
swampUP 2018 Sessionize Event