Ivett Ördög
Engineering culture advocate, public speaker, and creator of "Lean Developer Experience", a gamified DevOps training tool
Putzbrunn, Germany
Ivett Ördög is a public speaker and the creator of Lean Developer Experience (aka Lean Poker), a gamified DevOps training tool that teaches agile, lean, and continuous deployment practices to developers. Based in Bavaria, Germany, she has over 25 years of professional experience in software development and 15 in leadership. Passionate about innovation, collaboration, and learning, she enjoys sharing her knowledge and insights with others. She is also the creator and host of the @NextIncrement YouTube channel.
Markdowns Are The New Tests
I never liked code reviews. Then AI agents turned me into a full-time reviewer, and as boredom and fatigue set in, it was only a matter of time before serious mistakes would slip through.
That changed when I stumbled upon a technique while building a refactoring tool. It turned hours-long test reviews into quick scans. Better yet: it merged human-readable documentation, requirements, and tests into a single format that guides AI agents to reliable solutions.
Join me in exploring Constrained Tests, Approved Fixtures, and Approved Logs—three patterns that completely changed how I communicate with AI agents.
I implemented Game of Life 100+ times! Let's explore the 3 most interesting takes...
Explore a mind-bending feature of functional programming, discover how easily you can get started with GPU programming, and learn about underappreciated SQL features you didn't know you needed.
Having facilitated code retreats for years, I’ve witnessed a myriad of solutions to a single, seemingly straightforward problem: Conway’s Game of Life. While varying constraints often lead to different approaches, the most intriguing aspect lies in the valuable lessons these solutions impart, shaping our programming practices for years to come. Join me as we delve into three distinct implementations of this captivating problem, and you'll leave with three new tools under your belt for writing code that impresses your colleagues.
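The talk's three takes aren't reproduced here, but for readers who haven't met the kata, a minimal baseline helps. Below is one possible sketch of a single Game of Life generation in Python, representing the board as a set of live-cell coordinates; the representation is an assumption of this example, not the talk.

```python
from collections import Counter

def step(live_cells):
    """Advance Conway's Game of Life by one generation.

    live_cells is a set of (x, y) coordinates of live cells; the
    board is unbounded, since only live cells are stored.
    """
    # Count how many live neighbours each cell has.
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for x, y in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next generation if it has exactly 3 live
    # neighbours, or has exactly 2 and is already alive.
    return {
        cell
        for cell, count in neighbour_counts.items()
        if count == 3 or (count == 2 and cell in live_cells)
    }

# A "blinker" oscillates between a horizontal and a vertical bar.
blinker = {(0, 1), (1, 1), (2, 1)}
print(sorted(step(blinker)))  # [(1, 0), (1, 1), (1, 2)]
```

Storing only the live cells keeps the board unbounded and makes each generation a pure function of the last.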
How to sell a big refactor or rewrite to the business?
In the world of software development, dealing with legacy code is often a necessary evil, especially for successful, fast-growing companies. But how do we tackle this challenge smartly? This talk delves into the often-misunderstood realm of large-scale refactoring and rewrites, presenting a nuanced approach that contrasts with the traditional 'never rewrite' dogma.
We'll delve into real-world case studies where companies have successfully navigated their technical debt, uncovering crucial insights. Specifically, we will identify two key properties of these successful rewrites that can make or break your efforts. Understanding these properties enables us to strategically manage technical debt without losing our competitive edge. This session is not just a theoretical discussion but a practical guide, concluding with a systematic approach for your team's refactor or rewrite projects.
Microservices Result In A Fragile System (Unless You Do This)
Soon after introducing microservices, we were overwhelmed by constant outages and endless alerts, leaving us no time to figure out what was going wrong. In this talk, I’ll share how we rediscovered the Saga pattern amidst that chaos and how it transformed our fragile system into a resilient one. By exploring the failure modes and incomplete implementations we encountered, you’ll gain a deeper understanding of the pattern. You’ll learn about common pitfalls and a blueprint for building resilience into your inter-service communications. Pay attention, and you’ll never have to wake up to the sound of PagerDuty in the middle of the night.
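As background for the abstract above: the Saga pattern replaces one distributed transaction with a sequence of local steps, each paired with a compensating action that undoes it if a later step fails. Here is a minimal orchestration-style sketch in Python; the API shape and the step names are illustrative assumptions, not the blueprint from the talk.

```python
# Minimal Saga sketch (not the speaker's exact blueprint): each step
# pairs an action with a compensating action, and a failure rolls
# back the completed steps in reverse order. Step names are invented.

class SagaFailed(Exception):
    pass

def run_saga(steps):
    """steps is a list of (action, compensation) pairs."""
    completed = []
    try:
        for action, compensation in steps:
            action()
            completed.append(compensation)
    except Exception as error:
        # Undo every completed step, newest first.
        for compensation in reversed(completed):
            compensation()
        raise SagaFailed from error

run_saga([
    (lambda: print("reserve stock"), lambda: print("release stock")),
    (lambda: print("charge card"),   lambda: print("refund card")),
    (lambda: print("ship order"),    lambda: print("cancel shipment")),
])
```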
Why Metrics Are Derailing Software Teams (And How to Fix This)
How can management decisions lead to a train derailment and the deaths of 107 people? In the case of the Amagasaki disaster—and countless failures in software teams—the root cause was the same: the flawed use of metrics. We set out to answer a simple question: What makes a good productivity metric? But that question led us down a rabbit hole. The real insight came when we realized we were asking the wrong question entirely.
This talk explores how well-meaning teams end up optimizing for the wrong outcomes, how common engineering metrics backfire, and what to focus on instead. You'll leave with a practical framework for using metrics to actually improve team performance—without driving your team off the rails.
You Can Train Your AI Agent
Do you ever work with these AI agents and find yourself repeating the same things over and over — only to see your virtual coding partner churn out yet another overly complex function with no tests? When it’s a clean slate, the AI can generate working solutions with a single prompt. But once it has to deal with the technical debt it created, it slows to a crawl — and becomes worse than useless.
Recently, I discovered a trick that made my clumsy AI coding agent so effective, it migrated a giant, horribly written legacy app from MongoDB to SQL in just a few hours — with almost no input from me.
It all comes down to one realization: the real gap between AI agents and humans isn’t creativity or logic — it’s the inability to form habits. So I built a system that lets me transfer my own habits into Claude Code, and the result is an agent that performs like a mid-level engineer who knows what good looks like.
In this talk, I’ll share the exact setup — including the prompt structure, quality checks, post-commit hooks, and a weirdly effective emoji trick. You’ll see how I apply this system live, and walk away with a blueprint you can use to train your AI coding agent.
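The exact setup is shown in the talk; as a rough illustration of the post-commit-hook idea alone, here is a hypothetical `.git/hooks/post-commit` script (it must be marked executable) that runs quality checks after each commit and writes the results where the agent can be told to look. It assumes a Python project with pytest and ruff installed; none of it is the speaker's actual configuration.

```python
#!/usr/bin/env python3
# Hypothetical post-commit hook: after every commit the agent makes,
# run the test suite and linter, then write the results to a feedback
# file the agent's prompt instructs it to read before continuing.
import subprocess

def run(command):
    result = subprocess.run(command, capture_output=True, text=True)
    return result.returncode == 0, result.stdout + result.stderr

checks = {"pytest": ["pytest", "-q"], "ruff": ["ruff", "check", "."]}
report_lines = []
for name, command in checks.items():
    ok, output = run(command)
    status = "PASSED" if ok else "FAILED"
    report_lines.append(f"{name}: {status}\n{output}")

with open(".agent-feedback.md", "w") as report:
    report.write("\n\n".join(report_lines))
```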
AI agents are actually great for legacy code
Ever noticed how AI agents shine on small projects but fall apart in large codebases? That's their default behaviour, at least. I figured out a way to make them thrive in these chaotic environments too.
As a technical lead and coach, I’ve spent years helping engineers form habits that improve code quality while delivering features. So when I realized AI agents face similar struggles, I got curious: could we build workflows around them that mimic those habits?
That’s exactly what I’ve been working on. As a result, we finally tackled a project we’d postponed for years — and cut our operating costs by 70%. What took the AI five days would have taken us several months of engineering time.
I’ll walk you through the setup, the practices behind it, and how you can apply them to make AI actually useful in your own messy, real-world codebases.
Test Your Legacy Code With A Single Click
Working with legacy code is hard, especially when you lack the safety net of tests. But what if you could add tests with a single click?
I’ve spent my career wrestling with legacy systems, and I’ve learned that the biggest hurdles aren't logical — they're mechanical. Breaking dependencies and setting up test doubles are tedious, manual tasks that drain valuable time and energy.
Whenever I find myself doing repetitive work by hand, alarm bells go off. Why aren’t we automating this?
That question led me to build SpecRec: a system that automatically injects dependencies and records real interactions to generate golden master tests. It’s not about AI-generated tests—it’s about a deterministic, provably correct approach to testing.
By automating the most painful parts of testing legacy code, SpecRec helps you ship with confidence, reduce technical debt, and finally breathe easy.
In this talk I'll show you how SpecRec works, and you'll leave knowing how to automate testing of even the most challenging legacy codebases, freeing up your time to focus on innovation.
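SpecRec's API isn't part of this abstract, but the golden-master idea behind it is well established: record what the code actually does on a first run, then fail any later run whose output differs. A minimal hand-rolled sketch in Python, with hypothetical names:

```python
# Illustration of the golden-master technique only, not SpecRec's API:
# the first run records the output; later runs fail on any change,
# turning existing behaviour into a characterisation test.
import json
import pathlib

def verify_golden_master(name, actual):
    """Compare `actual` against a stored approval file."""
    approved = pathlib.Path(f"{name}.approved.json")
    serialized = json.dumps(actual, indent=2, sort_keys=True)
    if not approved.exists():
        approved.write_text(serialized)  # record the golden master
        return
    assert serialized == approved.read_text(), f"{name} changed"

# Pin down the current behaviour of some legacy computation.
verify_golden_master("invoice_totals", {"net": 100, "tax": 19})
```

Committing the approval file pins the behaviour down, so any refactor that changes the output fails the comparison.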
What if you could generate tests for legacy code with a single click
Writing tests for legacy code can be hard and time-consuming. Even with Michael Feathers’ guidance, the task remains a challenge.
So I wondered whether modern language features and tooling could finally make it easy. The result surprised me and completely changed how I work with legacy systems.
SpecRec automatically generates characterisation tests with minimal code interventions that give you a robust safety net, letting you refactor large parts of a codebase in hours instead of weeks.
I’ll walk through the technique, show a real-world use case, and explain how you can adopt SpecRec in your own projects.