David Burns
Head of Developer Advocacy and Open Source
Bournemouth, United Kingdom
David is the Chair of the W3C Browser Testing and Tools Working group and co-editor of the WebDriver specification, trying to ensure automation frameworks in browsers are interoperable. He was an engineering manager at Mozilla within Developer Experience working on tooling and infrastructure to help make a better web and now heads up the Developer Relations and Open Source Program Office at BrowserStack.
Topics
Debugging the Undebuggable: A Deep Dive into Test Failures and Root Cause Analysis
We've all been there—a test fails randomly, passes locally but fails in CI, and leaves no clear trace of what went wrong. This talk will explore advanced debugging techniques to identify and fix these elusive test failures. We’ll look at how to use browser logs, network captures, and test execution traces to uncover the true cause of test flakiness. I'll share real-world debugging scenarios from Selenium tests and how to apply structured root cause analysis to stabilize your test suite.
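The structured root cause analysis described above can be sketched in code. The helper below is a hypothetical triage function (not from the talk itself): it takes browser console entries and network events captured during a failed run and surfaces the most likely culprits, which is the kind of evidence-first workflow the session advocates.

```python
# Hypothetical triage helper for flaky-test root cause analysis.
# Inputs mirror the shape of captured browser logs and network events:
# console entries have a "level" and "message"; network events have a
# "url" and HTTP "status". The field names here are illustrative.
def triage(console_entries, network_events):
    causes = []
    # SEVERE console errors (e.g. uncaught JS exceptions) often explain
    # why an element never appeared on the page.
    for entry in console_entries:
        if entry.get("level") == "SEVERE":
            causes.append(("console-error", entry["message"]))
    # Failed network calls are the other common hidden cause: the UI
    # looked broken because a backend dependency was.
    for event in network_events:
        status = event.get("status", 200)
        if status >= 500:
            causes.append(("server-error", f"{event['url']} -> {status}"))
        elif status >= 400:
            causes.append(("client-error", f"{event['url']} -> {status}"))
    return causes
```

Run against the artifacts of a single failing test, a report like this turns "it failed randomly" into "the login endpoint returned 503 while the test was waiting for the dashboard".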
Flakiness in your tests isn't down to the test framework
All too often we get frustrated by flakiness in our tests. We have been sold the promise that our framework has the best auto-waiting, right up until it fails us... and that's not anyone's fault.
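When a framework's built-in auto-waiting falls short, the usual remedy is an explicit, condition-based wait. The snippet below is a minimal, framework-agnostic sketch of that idea (a hand-rolled equivalent of Selenium's `WebDriverWait`): poll a condition until it holds or a deadline passes, rather than sleeping for a fixed time or trusting implicit waits.

```python
import time

def wait_until(predicate, timeout=5.0, poll=0.1):
    """Poll `predicate` until it returns a truthy value or `timeout` expires.

    This is the core mechanism behind explicit waits: the test states the
    condition it actually cares about, instead of guessing a sleep duration.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = predicate()
        if result:
            return result
        time.sleep(poll)
    raise TimeoutError(f"condition not met within {timeout:.1f}s")
```

In a real suite the predicate would be something like "the submit button is clickable"; the point is that the wait is tied to application state, not wall-clock guesswork.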
Vibe Coding & Vibe Testing: How AI-Powered Shortcuts Create New Problems
In the age of AI-assisted development, tools like GitHub Copilot and ChatGPT promise unprecedented speed. We can generate code in moments, prototype faster, and ship features at a breakneck pace. But this rapid-fire approach often leads to vibe coding: writing code without a foundational understanding of the underlying language, framework, or browser.
This talk explores how this over-reliance on AI is creating a new class of problems. When developers don’t truly grasp how their code works, they can’t anticipate edge cases, performance issues, or security vulnerabilities. This often leads to vibe testing, a reactive and superficial form of testing where the goal is simply to "make it work" rather than to ensure robustness.
I will discuss real-world examples where AI-generated code has created unintended consequences, showing how these shortcuts can lead to hard-to-diagnose bugs and technical debt. I'll also offer practical strategies for both developers and testers to strike a balance: using AI as a powerful learning and productivity tool, while still prioritizing the fundamental skills that are essential for building high-quality, maintainable software. This session is for anyone who wants to move beyond the AI hype and build a sustainable, quality-focused development practice.
The Empathy-Driven Test Plan: How to Uncover User Pain Points, Not Just Bugs
Many teams treat Quality Assurance as a checklist: Does the feature work? Are there any functional bugs? But often, a technically flawless product still delivers a frustrating user experience—full of friction, confusion, and subtle design flaws that drive customers away. These are pain points, and they are the bugs QA is missing.
This session introduces the Empathy-Driven Test Plan (EDTP), a practical framework for integrating user-centric design principles directly into the QA process. We will move beyond writing tests based on requirements documents and instead show you how to design tests around user personas, empathy maps, and user journey mapping. You’ll learn how to proactively identify and prioritize the usability, clarity, and emotional impact of your software, ensuring your testing efforts focus on uncovering critical friction points, not just checking boxes, and using AI as a tool, not as a replacement.
Stop Guessing, Start Seeing: The Rise of Observability-Driven Testing
In 2026, our systems are too complex for our test suites to keep up. We spend weeks writing manual scripts for "known-unknowns," while the real disasters happen in the "unknown-unknowns" of production. If you are still relying on hard-coded assertions and mock data to tell you if your system is healthy, you aren't testing—you’re just hoping.
Traditional testing treats the "Lab" and "Production" as two different worlds. We shift-left to find bugs early, but we ignore the mountain of high-fidelity data sitting in our telemetry pipelines. As we move toward non-deterministic AI agents and micro-frontend architectures, the "Expected Result" is no longer a static value; it’s a living pattern of behavior.
Enter Observability-Driven Testing (ODT). This isn't just "testing in production"; it’s the evolution of the SDLC where production telemetry is the test oracle. In this session, we will explore how to use OpenTelemetry (OTel) and trace-based testing to bridge the gap. We will discuss how to transform production "traces" into automated integration tests, using real-user journeys to validate system correctness in real-time.
You will walk away with a blueprint for moving past "Is the button green?" to "Is the system behaving as the users taught it to?" We’ll cover how to implement ODT without breaking your budget or your SLAs.
From Assertions to SLOs: Learn why hard-coded assertEquals() is failing in 2026 and how to replace it with SLO-based assertions that account for latency, error rates, and system drift.
Trace-Based Testing (TBT): A deep dive into using Distributed Tracing as a test runner. Learn how to trigger a test and validate every microservice "hop" in the chain without writing a single mock.
The "Mirroring" Pattern: How to safely "shadow" production traffic to your staging environment to auto-generate regression suites based on actual user behavior.
Operationalizing the Feedback Loop: Practical steps to integrate tools like Grafana, Honeycomb, or OpenTelemetry directly into your CI/CD gate, turning "Mean Time to Discovery" into a metric that your QA team actually owns.
The "Spicy" Truth: Why the role of the "Manual Tester" is evolving into the "Quality Observability Engineer"—and the three skills you need to survive this transition.
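The shift from hard-coded assertions to SLO-based assertions described above can be made concrete with a small sketch. The code below is a hypothetical example (the function names and thresholds are illustrative, not from the talk): instead of asserting one fixed value, it asserts that observed latency and error-rate telemetry stay within an SLO budget.

```python
import math

def percentile(samples, pct):
    """Nearest-rank percentile of a non-empty list of samples."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

def assert_slo(latencies_ms, errors, total, p95_budget_ms=300, error_budget=0.01):
    """SLO-style assertion: pass if the *distribution* is healthy.

    A single slow request does not fail the build; a breached p95 or a
    blown error budget does. This tolerates normal system drift in a way
    that assertEquals() on one response time cannot.
    """
    p95 = percentile(latencies_ms, 95)
    error_rate = errors / total
    assert p95 <= p95_budget_ms, f"p95 latency {p95}ms exceeds {p95_budget_ms}ms budget"
    assert error_rate <= error_budget, f"error rate {error_rate:.2%} exceeds {error_budget:.2%} budget"
```

In an ODT pipeline the `latencies_ms` list would come from your telemetry backend (OTel spans, Grafana, Honeycomb) rather than a local timer, but the gate in CI looks exactly like this.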
The Value Collapse of the Executioner: Moving from "Test Executor" to "Test Coordinator"
For two decades, the software testing career path was simple: start as a manual executor, move to automated execution, and eventually manage the execution of others. But in 2026, the "Executioner" is facing a value collapse. If your primary value is running a test, whether by hand or by script, you are competing with an AI agent that is 100x faster, 10x cheaper, and never gets bored.
We’ve entered the era of the Commodity Test. With LLM-driven test generation and self-healing autonomous agents, the act of "writing and running a test" has been commoditized. The market value of being the person who "clicks the buttons" or "maintains the Selenium scripts" has plummeted. Many QA professionals are finding their roles marginalized as organizations realize that execution is no longer the bottleneck, judgment is.
The Solution: This session is a wake-up call and a survival guide. We will explore the "Value Shift" from how to test to what and why to test. We’ll discuss why the next generation of quality leaders aren't "Automation Engineers" but "Quality Strategists" and "Risk Architects." The Takeaway: You’ll learn how to pivot your career from a "High-Volume Executioner" to a "High-Context Strategist." We will map out the specific skills, from AI auditing to observability-driven risk analysis, that will keep you indispensable in an automated world.
The Developer’s Compass: Why TDD is the Only Way to Survive the AI Gold Rush
We’ve entered the era of "vibe coding." With the rise of Generative AI and LLMs, code is being produced at a velocity we’ve never seen. It’s tempting to believe that the days of meticulous manual testing are over, that the AI is "smart enough" to get it right.
In reality, the opposite is true. In this session, David Burns argues that Test-Driven Development (TDD) is more critical now than it was twenty years ago. When code becomes a commodity, the developer’s primary value shifts from writing syntax to defining intent and verifying correctness.
We will explore a modern workflow where the developer acts as a Technical Director: defining the contract, locking down the tests as an immutable "Source of Truth," and utilizing AI coding agents to handle the implementation. We’ll specifically dive into how AI Planning features transform agents from black-box generators into predictable, logical collaborators. Learn how to stop "hoping" the AI got it right and start engineering with absolute confidence.
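The "contract first, implementation second" workflow above can be illustrated with a toy example. Everything here is hypothetical (the `slugify` function and its rules are invented for illustration): the developer writes the test as the immutable source of truth, and the AI agent's only job is to make it pass without touching the test.

```python
# Step 1 (human, the Technical Director): lock down the contract as a test.
# This file is treated as read-only for the AI agent.
def test_slugify_contract():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  TDD Wins  ") == "tdd-wins"
    assert slugify("Already-Lower") == "already-lower"

# Step 2 (AI agent): produce an implementation that satisfies the contract.
# A minimal version is shown here standing in for generated code.
def slugify(title: str) -> str:
    # Lowercase, split on any whitespace, and re-join with hyphens.
    return "-".join(title.lower().split())
```

The order matters: because the test exists first and cannot be edited, the agent cannot "move the goalposts" by weakening an assertion, which is exactly the failure mode the session warns about.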
Key Takeaways
The Illusion of Productivity: Why "vibe coding" leads to technical debt and how to identify the "finish line" trap.
The "Hands-Off" Implementation: A practical workflow for pointing AI agents at failing tests without allowing them to move the goalposts.
Prompting for Intent: Shifting your mindset from asking for features to asking for verifiable behaviors.
The Power of the Plan: How to use agentic planning features to review architectural decisions before a single line of implementation code is written.
Maintaining Control: Why TDD is the ultimate guardrail for managing the complexity that AI conceals.
Securing the Future of AI Agents: Navigating the Risks of MCP and LLM Integration
As large language models (LLMs) gain the ability to act, browse, automate, and interact with real-world systems via the Model Context Protocol (MCP), they also expose new and unpredictable attack surfaces.
In this talk, I’ll introduce MCP — the protocol powering tool-using agents — and walk through the emerging security threats it brings, including tool injection, prompt exploits, session hijacking, and remote code execution. We’ll explore practical, field-tested defenses and governance strategies that can help teams build AI-enabled systems that are powerful and safe.
Whether you're a developer integrating LLMs or a manager responsible for shipping secure AI products, this session will equip you with the mental models, examples, and frameworks to secure your agentic architectures.
Key Takeaways
Understand MCP: What the Model Context Protocol is, how it works, and why it matters in agentic AI systems.
Recognize Security Risks: Learn the top threats including tool injection, session hijacking, and RCE — with real-world inspired examples.
Apply Defensive Design: Discover actionable mitigations like tool whitelisting, RBAC, sandboxing, and red teaming for AI workflows.
Go Beyond the Model: See why prompt injection and data leakage aren’t just "prompt problems" but architectural concerns.
Stay Ahead: Get a curated resource list of blogs, OWASP guidance, and security tools for AI risk management.
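The tool-whitelisting mitigation mentioned above can be sketched in a few lines. This is a hypothetical allowlist gate, not part of any real MCP implementation; the tool names, argument schemas, and `/workspace/` path restriction are invented for illustration.

```python
# Hypothetical allowlist gate for an MCP-style agent: only pre-approved
# tools, with per-tool argument validation, ever reach execution.
ALLOWED_TOOLS = {
    # Each entry maps a tool name to a validator for its arguments.
    "read_file": lambda args: str(args.get("path", "")).startswith("/workspace/"),
    "web_search": lambda args: isinstance(args.get("query"), str),
}

def dispatch(tool_name, args):
    """Reject any tool call that is off-list or carries unsafe arguments."""
    validator = ALLOWED_TOOLS.get(tool_name)
    if validator is None:
        # Deny-by-default: an injected "shell_exec" request never runs.
        raise PermissionError(f"tool {tool_name!r} is not on the allowlist")
    if not validator(args):
        # e.g. a prompt-injected attempt to read /etc/passwd is blocked here.
        raise ValueError(f"arguments rejected for {tool_name!r}")
    return ("ok", tool_name)  # a real gateway would invoke the tool here
```

Deny-by-default dispatch like this is the architectural counterpart to the talk's point that prompt injection is not just a "prompt problem": even a fully compromised model can only request what the gateway permits.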