Most Active Speaker

Baruch Sadogursky

Member of DevRel Staff, Tessl AI

DevRel Lead

Nashville, Tennessee, United States

Baruch Sadogursky (@jbaruch) did Java before it had generics, DevOps before there was Docker, and DevRel before it had a name. He built DevRel at JFrog from a ten-person company through IPO, co-authored "Liquid Software" and "DevOps Tools for Java Developers," and is a Java Champion, Microsoft MVP, and CNCF Ambassador alumnus.

Today, he's obsessed with how AI agents actually write code. At Tessl, an AI agent enablement platform, Baruch focuses on context engineering, management, and sharing. On top of sharing context with AI agents, Baruch also shares knowledge with developers through blog posts, meetups, and conferences like DevNexus, QCon, KubeCon, and Devoxx, mostly about why vibe coding doesn't scale.

Baruch Sadogursky (@jbaruch) wrote Java before it had generics, talked about DevOps before Docker existed, and did DevRel before it had that name. He founded DevRel at JFrog when the company had ten people and helped it reach an IPO at a $6B valuation by helping engineers do their jobs better. Today Baruch keeps helping engineers, and also helps companies help engineers. He co-authored "Liquid Software" and "DevOps Tools for Java Developers," serves on the program committees of several prestigious conferences, and speaks regularly at conferences such as KubeCon, JavaOne (may it rest in peace), Devoxx, QCon, DevRelCon, DevOpsDays (worldwide), DevOops (not a typo), and more.

Area of Expertise

  • Information & Communications Technology

Topics

  • DevOps & Automation
  • Continuous Delivery
  • Continuous Integration
  • Java
  • Groovy
  • Gradle
  • Apache Maven
  • Software Development
  • Software Engineering
  • Software Architecture
  • Developer Relations
  • Developer Advocacy
  • AI
  • Agentic AI
  • Context Engineering
  • Prompt-Driven Development
  • Spec-Driven Development
  • AI Developer Tools
  • AI Agentic Workflows
  • LangChain4j
  • AI-Assisted Development

Back to the Future of Software: How to Survive the AI Apocalypse with Tests, Prompts, and Specs

Great Scott! The robots are coming for your job—and this time, they brought unit tests. Join Doc and Marty from the Software Future (Baruch and Leonid) as they race back in time to help you fight the machines using only your domain expertise, a well-structured prompt, and a pinch of Gherkin. This keynote is your survival guide for the AI age: how to close the intent-to-prompt chasm before it swallows your roadmap, how to weaponize the Intent Integrity Chain to steer AI output safely, and why the Art of the Possible is your most powerful resistance tool. Expect:
- Bad puns
- Good tests
- Wild demos

The machines may be fast. But with structure, constraint, and a little time travel, you’ll still be the one writing the future.

Engineering Context So AI Stops Guessing

"You're absolutely right," my AI coding agent said, while removing validation logic and confidently explaining why this is an improvement. It agrees when I'm right, agrees when I'm wrong, and will keep agreeing while your production database burns.

The model is fine — it's also working blind, deciding based on whatever you happened to shove into its context window. A cheap, small model with good context consistently outperforms the most expensive frontier model without it — and six months from now, the teams that got their context right will be the ones wondering what the fuss over model upgrades was about.

Three kinds of context artifacts actually work: skills (executable instructions), rules (constraints and conventions), and scripts (deterministic operations) — all versioned, testable, and shareable. We'll start from real failures caused by missing context and rebuild them live on stage: write an artifact, add an eval, watch it fail, iterate until it passes, then publish and install it on a fresh agent. The craft comes down to four things: expertise (encode what you actually know, not vague instructions), feedback loops (measure with evals, iterate), repeatability (version so behavior doesn't drift), and distribution (package so every agent on the team gets the same knowledge).
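The write, eval, fail, iterate, publish loop described above can be sketched in miniature. Everything here is a hypothetical illustration built around a toy stand-in for the agent (`fake_agent` and the `*.test.ts` naming convention are invented for the example, not Tessl's or any vendor's API):

```python
# Toy model of a context artifact improving agent behavior.
# `fake_agent` stands in for a real coding agent: it follows the
# file-naming convention only when the artifact states it explicitly.

def fake_agent(artifact: str, task: str) -> str:
    suffix = ".test.ts" if "name tests *.test.ts" in artifact else ".spec.js"
    return f"created {task}{suffix}"

# Eval cases pin down the exact behavior the artifact is supposed to add.
EVAL_CASES = [
    ("parser", "created parser.test.ts"),
    ("router", "created router.test.ts"),
]

def run_evals(artifact: str) -> bool:
    return all(fake_agent(artifact, task) == want for task, want in EVAL_CASES)

artifact = "Write unit tests for every module."   # v1: vague, evals fail
if not run_evals(artifact):
    artifact += " Always name tests *.test.ts."   # iterate: encode the convention
assert run_evals(artifact)                        # publish only once evals pass
```

The point of the toy is the shape of the loop: the eval is written before the artifact works, the artifact is edited until the eval goes green, and only then is it published.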

New talk for 2026 — built from patterns validated across 190+ conference deliveries. The four engineering principles come from the Arc of AI 2026 keynote. Includes live artifact creation and eval demo. Works as 45-min session, keynote, or 90-min hands-on workshop. Solo talk.

The Right 300 Tokens Beat 100k Noisy Ones: Four Context Antipatterns That Kill Your AI Agent

Your agent has 100k tokens of context. It still forgets what you told it two messages ago. Context engineering treats what your agent knows as an architecture decision — one you can design, test, and version.

This talk dissects four antipatterns: the Stuffed Prompt (cramming everything into the system prompt), the Wrong Tool for the Job (retrieval when rules suffice), the Goldfish Agent (no memory across sessions), and Vibes Eval (judging quality by gut feel). For each, we'll diagnose the failure, show the fix, and demonstrate the difference live with a coding agent.

You'll leave with four checks you can run on your own agent Monday morning, and a decision framework for context architecture that doesn't require a PhD in prompt engineering.
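Those four checks can be sketched as a lint over a hypothetical agent configuration. The field names below are illustrative placeholders, not a real schema from any framework:

```python
# Four context antipattern checks over a (hypothetical) agent config dict.

def context_checks(cfg: dict) -> list[str]:
    findings = []
    if cfg.get("system_prompt_tokens", 0) > 4000:
        findings.append("Stuffed Prompt: move reference material out of the system prompt")
    if cfg.get("rules_served_via_retrieval", False):
        findings.append("Wrong Tool: hard constraints belong in rules, not retrieval")
    if not cfg.get("persistent_memory", False):
        findings.append("Goldfish Agent: nothing survives the session")
    if not cfg.get("eval_suite", False):
        findings.append("Vibes Eval: quality judged by gut feel; add evals")
    return findings

# An agent with a bloated prompt, no memory, and no evals trips three checks:
print(context_checks({"system_prompt_tokens": 12000, "persistent_memory": False}))
```

The 4000-token threshold is an arbitrary illustration; the useful part is that each antipattern becomes a yes/no question you can answer about your own setup.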

Debuted at QCon London 2026 to 99% green votes (highest-rated in track). Includes live coding agent demo. Works as 45-min session or 30-min condensed. Solo talk.

Skill Issue: How to Write Skills That Actually Work

You've written a dozen skills. Some work, some don't, and you have no way to tell which. The agent says "you're absolutely right" while invoking the wrong one, and you keep re-explaining the same things to it. Without a way to measure what a skill adds, there's no way to tell the working skills from the broken ones. Meanwhile, the team next door is writing the same ones you already wrote, because nobody can find yours.

If you get why skills matter but can't get yours to do what you want, this is for you. 301, no "what is a skill," straight to the craft.

Skills are context artifacts: prose for flexible guidance, scripts for deterministic work, rules for hard constraints. Let's make them better:

1. A context artifact library that grows and patches itself: new skills emerge from agent friction, existing ones get fixed when they drift.

2. Design evals that grade only what the context adds — scenarios that probe the contribution, rubrics that ignore everything else, guards against state bleed and prompt leakage.

3. Pair every change with a second-model reviewer that catches regressions before merge. Version the library so rollback costs a checkout, not a postmortem.

4. Treat skills like code: scan, sign, gate at install. But also, treat them like prompts: add scanners for the attack surface conventional tooling can't see, like injection in the skill body, indirect poisoning through whatever the skill ingests, and tool-abuse paths that didn't exist before the agent had browsing tools.

5. Build a context artifact supply chain: registry, discovery, telemetry, staged rollout, so the team next door finds your skill instead of writing it for the third time. The same registry that solves discovery solves compliance: one push updates every agent in the org. Measure what proves reuse is real: installation and activation rates, because a skill nobody finds is a skill nobody uses.

Every practice above is itself a skill. "This is how we write context artifacts around here" belongs in the library: versioned, installed on every agent, graded by its own eval. The meta-skill has to earn its tokens too.

New talk for 2026. Scales across formats: 45-min session and 90-min hands-on stay pure 301 — attendees should already know what a skill is; 2- and 3-hour workshops expand the scope to 101→301, opening with the fundamentals (what skills are, how triggers work, installing and invoking them) before the 301 material.

Code-heavy, low-philosophy — targeted at engineers already using agents daily who want to stop repeating themselves. Solo talk.

Coding Fast and Slow: Applying Kahneman's Insights to Improve Development Practices and Efficiency

Your brain has two systems. System 1 is fast, intuitive, and wrong more often than you'd think. System 2 does the slow, deliberate work: debugging, architecture, catching what System 1 missed. Every context switch burns System 2 fuel, and every distraction drains the tank further, which is why developers write code for only 52 productive minutes per day.

AI agents make this worse: they generate at System 1 speed and System 1 quality while demanding System 2 oversight for every line. Every alt-tab between your agent's output and your own reasoning is a context destruction event.

The human problem and the AI problem share a root cause, and context engineering fixes both.

- Personal: flow protection, cognitive load, and why a 20-minute nap is backed by more science than your daily standup meeting.
- Organizational: why back-to-office mandates are context destruction policies.
- Technological: why your AI agent spent 4 hours on a build failure caused by an AWS outage, and how domain knowledge plugins took it from 20% to 95% success.

12 deliveries over 2.5 years — my most battle-tested talk. Includes Devoxx Poland, DevNexus, GeeCON, BaselOne, Dev2Next, KCDC, UberConf, JConf.Dev, Shift, and Voxxed Days Amsterdam 2026. Consistently high ratings. Interactive format — audience participates in cognitive science experiments (Stroop test, bat-and-ball riddle, attention puzzles). The 2026 version adds a substantial context engineering module that ties behavioral science to AI tooling. Works as 45-min session or keynote. Solo talk.

Technical Enshittification: Software Decay as Context Collapse

Sonos bricked 80% of its product's functionality even though the team knew ahead of time that the release would fail. CrowdStrike blue-screened 8.5 million machines. Southwest survived because their system still ran Windows 3.1.

Root cause: context degradation.

* Technical debt — context that walked out the door when the original authors left
* Bloat — context nobody tracks, so nothing gets removed
* Wrong features — context somebody guessed at instead of validating
* AI-generated code — 42% of commits now come from models with zero understanding of why existing code exists, accelerating every one of the above

One lens explains software decay across all four dimensions, and context engineering is the systematic fix. We'll dissect failures to show the pattern, then build solutions at four levels: personal (flow protection, cognitive load), AI tooling (why your agent debugged for 4 hours when the real problem was an AWS outage — and how domain knowledge plugins fix it), organizational (why back-to-office mandates are context destruction), and engineering culture (feedback loops that preserve context).

6 deliveries including Javaland 2025, J-Fall 2025, and Voxxed Ticino 2026. Consistently one of the highest-audience-engagement talks I deliver — the failure case studies (Sonos whistleblower, CrowdStrike, Southwest Windows 3.1) generate strong audience reactions. Works as 45-min session or keynote. Solo talk. Note: "enshittification" is Cory Doctorow's coinage (2023 Macquarie Dictionary Word of the Year, 2024 American Dialect Society Word of the Year) — an established term in tech discourse, not gratuitous profanity. Accepted at Devoxx, Voxxed, J-Fall, Javaland, and JCON without issue.

RoboCoders: Judgment Day: AI Agents Face Off

Two speakers, multiple AI coding agents, real IoT hardware, and a bet: the context you give your agent matters more than which agent you pick.

Live on stage, starting from an empty directory: control IoT devices, build a face recognition pipeline, and drive physical hardware as a live visual feedback system, with real devices reacting in real time to code written by AI.

The easy parts work fine. Then the hardware and the API disagree about something the documentation doesn't mention, and both agents produce code that runs, passes every check, and is completely wrong. We change what the agent knows (using spec-driven development, intent integrity chains, and structured context engineering), and the audience sees exactly what that fixes, what it breaks next, and how each new failure is worse than the last.

7 deliveries including Devoxx Belgium, Devoxx Poland, JFokus 2026, and BaselOne. Co-presented with Viktor Gamov. Physical IoT hardware on stage — smart bulbs, camera feed, LED light bars respond to face recognition in real time. Audience sees the agents succeed and fail with real devices, not slides. Scales from 45-min (core IoT demo) through 75-min (full stage progression) to 2-3 hours (additional rounds, spec-driven development with iikit/OpenSpec/Kiro, methodology deep dives). Strong closing keynote candidate — the visual spectacle of hardware responding to code written live is a crowd favorite.

Codepocalypse Now: LangChain4j vs Spring AI

Can Java build a real AI agent — one that manages your calendar, reads your email, orders pizza, and remembers who you are across sessions? OpenClaw, the personal AI agent with 350K GitHub stars, proves the concept. We're going to build it twice, in Java, live on stage.

Baruch brings Spring AI + Spring Boot, Viktor brings LangChain4j + Quarkus (or vice-versa). Same features, same LLM, completely different philosophies. We'll run six competitive rounds of coding, from basic agent setup through memory, tool calling, agentic workflows, guardrails, and observability. Each round surfaces a design disagreement: should memory be an Advisor or a Provider? Are agents composed services or first-class citizens? And when your guardrail framework and the model disagree, who wins?

The frameworks disagree on how AI agents should be built. The audience votes on who's right.

5 deliveries including Devoxx France, Dev2Next, and Arc of AI 2026. Co-presented with Viktor Gamov. Live competitive coding format — two speakers build the same app simultaneously with different frameworks, audience votes on winner. High-energy, high-engagement format. Best in long formats (2-3 hours), can be done as short as 45-min slot. Java/AI track.

Never Trust a Monkey: From AI Slop to Code You Can Ship

AI writes 42% of committed code, and 96% of developers don't trust it. We gave infinite monkeys GPUs — sometimes they produce Shakespeare, sometimes `assertEquals(true, true)`.

The Intent-to-Code Chasm is the central challenge of 2026. Every previous abstraction from compilers to VMs to cloud was deterministic; AI is stochastic, and our safeguards (reviews, tests, professionalism) still assume a human wrote the code. At 25,000 lines overnight, those assumptions collapse.

Three-part framework: Chasm (why the intent gap is structurally new), Context (how structured knowledge bridges it), Chain (a verifiable flow from intent → spec → locked tests → code, where no monkey validates its own work). Includes a real eval journey from 15% to 99% where better context beat a better model.

5 deliveries including JFokus 2026 (top 2 highest-rated talk) and Voxxed Days Amsterdam 2026 keynote. Scales from 15-min keynote to 45-min deep dive — the compressed keynote version was the highest-rated keynote at Voxxed Amsterdam. Strong keynote candidate. Solo talk.

JCON EUROPE 2026 Sessionize Event

April 2026 Köln, Germany

Arc of AI 2026

April 2026 Austin, Texas, United States

Devnexus 2026 Sessionize Event

March 2026 Atlanta, Georgia, United States

Voxxed Days Ticino 2026

Technical Enshittification: Why Everything in IT is Horrible Right Now and How to Fix It

February 2026 Lugano, Switzerland

Jfokus 2026 Sessionize Event

February 2026 Stockholm, Sweden

Open Conf - 2025 / PANELS and OPEN SPACE ACTIVITIES Sessionize Event

November 2025 Athens, Greece

AI by the Bay Sessionize Event

November 2025 Oakland, California, United States

DevFest Toulouse 2025 Sessionize Event

November 2025 Toulouse, France

BaselOne 2025 Sessionize Event

October 2025 Basel, Switzerland

Devoxx Poland 2025

June 2025 Kraków, Poland

AI fokus Sessionize Event

June 2025 Stockholm, Sweden

J-Spring 2025 Sessionize Event

June 2025 Utrecht, The Netherlands

Spring I/O 2025 Sessionize Event

May 2025 Barcelona, Spain

GeeCON 2025

May 2025 Kraków, Poland

JCON EUROPE 2025 Sessionize Event

May 2025 Köln, Germany

Devoxx France 2025

April 2025 Paris, France

JavaLand 2025

April 2025 Adenau, Germany

IJ Internal Conference 2025 Sessionize Event

March 2025 Antalya, Turkey

Devnexus 2025 Sessionize Event

March 2025 Atlanta, Georgia, United States

Voxxed Days Ticino 2025

January 2025 Lugano, Switzerland

DevOps Vision and MLOps Vision 2024 Sessionize Event

December 2024 Clearwater, Florida, United States

DevOpsDays Tel Aviv 2023 Sessionize Event

October 2024 Tel Aviv, Israel

BaselOne 2024 Sessionize Event

October 2024 Basel, Switzerland

JConf.dev 2024 Sessionize Event

September 2024 Plano, Texas, United States

swampUP 2024 Sessionize Event

September 2024 Austin, Texas, United States

KCDC 2024 Sessionize Event

June 2024 Kansas City, Missouri, United States

JCON EUROPE 2024 Sessionize Event

May 2024 Köln, Germany

Devnexus 2024 Sessionize Event

April 2024 Atlanta, Georgia, United States

DeveloperWeek 2024 Sessionize Event

February 2024 Oakland, California, United States

DevRel Experience 2023 Sessionize Event

December 2023 Clearwater, Florida, United States

DevOps Vision 2023 Sessionize Event

December 2023 Clearwater, Florida, United States

Open Conf - 2023 Sessionize Event

November 2023 Athens, Greece

BaselOne 2023 Sessionize Event

October 2023 Basel, Switzerland

DevOps Days Buffalo 2023 Sessionize Event

September 2023 Buffalo, New York, United States

Infobip Shift 2023 Sessionize Event

September 2023 Zadar, Croatia

UberConf 2023

July 2023 Denver, Colorado, United States

DevOpsDays Birmingham (UK) 2023 Sessionize Event

June 2023 Birmingham, United Kingdom

DevOps Days Phoenix 2023

May 2023 Mesa, Arizona, United States

swampUP 2022 City Tour - Munich Sessionize Event

October 2022 Munich, Germany

swampUP 2022 City Tour - London Sessionize Event

October 2022 London, United Kingdom

swampUP 2022 City Tour - New York City Sessionize Event

October 2022 New York City, New York, United States

Yalla DevOps Tel Aviv 2022 Sessionize Event

July 2022 Tel Aviv, Israel

JNation 2022 Sessionize Event

June 2022 Coimbra, Portugal

swampUP 2022 Sessionize Event

May 2022 Carlsbad, California, United States

DevOpsDays Austin 2022 Sessionize Event

May 2022 Austin, Texas, United States

Cloud Native Kitchen Sessionize Event

December 2020

DeveloperWeek New York 2020 Sessionize Event

December 2020 Brooklyn, New York, United States

EuropeClouds Summit Sessionize Event

October 2020

DeveloperWeek Global 2020 Sessionize Event

June 2020

Azure Day Rome 2020 Sessionize Event

June 2020

All The Talks Sessionize Event

April 2020

DevOps Summit Amsterdam 2019 - Two days DevOps experience Sessionize Event

October 2019 Amsterdam, The Netherlands

Yalla! DevOps 2019 Sessionize Event

September 2019 Herzliya, Israel

swampUP 2018 Sessionize Event

May 2018 Napa, California, United States
