
Speaker

David Burns

Head of Developer Advocacy and Open Source

Bournemouth, United Kingdom

David is the Chair of the W3C Browser Testing and Tools Working Group and co-editor of the WebDriver specification, working to ensure that browser automation frameworks are interoperable. He was an engineering manager at Mozilla within Developer Experience, working on tooling and infrastructure to help make a better web, and now heads up the Developer Relations and Open Source Program Office at BrowserStack.

Area of Expertise

  • Information & Communications Technology
  • Media & Information

Topics

  • Software Testing
  • Software Engineering
  • Software Development
  • Open Source Software
  • Agile Software Development
  • Selenium
  • Selenium WebDriver
  • Testing Automation
  • Automation & CI/CD
  • AI
  • AI Agents
  • AI and Cybersecurity

Debugging the Undebuggable: A Deep Dive into Test Failures and Root Cause Analysis

We've all been there—a test fails randomly, passes locally but fails in CI, and leaves no clear trace of what went wrong. This talk will explore advanced debugging techniques to identify and fix these elusive test failures. We’ll look at how to use browser logs, network captures, and test execution traces to uncover the true cause of test flakiness. I'll share real-world debugging scenarios from Selenium tests and how to apply structured root cause analysis to stabilize your test suite.

Flakiness in Your Tests Isn't Down to Your Test Framework

All too often we get frustrated by flakiness in our tests. We have been sold the idea that our framework has the best auto-waiting, until it fails us... and it's not anyone's fault.

Vibe Coding & Vibe Testing: How AI-Powered Shortcuts Create New Problems

In the age of AI-assisted development, tools like GitHub Copilot and ChatGPT promise unprecedented speed. We can generate code in moments, prototype faster, and ship features at a breakneck pace. But this rapid-fire approach often leads to vibe coding: writing code without a foundational understanding of the underlying language, framework, or browser.

This talk explores how this over-reliance on AI is creating a new class of problems. When developers don't truly grasp how their code works, they can't anticipate edge cases, performance issues, or security vulnerabilities. This often leads to vibe testing: a reactive and superficial form of testing where the goal is simply to "make it work" rather than to ensure robustness.

I will discuss real-world examples where AI-generated code has created unintended consequences, showing how these shortcuts can lead to hard-to-diagnose bugs and technical debt. I'll also offer practical strategies for both developers and testers to strike a balance: using AI as a powerful learning and productivity tool, while still prioritizing the fundamental skills that are essential for building high-quality, maintainable software. This session is for anyone who wants to move beyond the AI hype and build a sustainable, quality-focused development practice.

The Empathy-Driven Test Plan: How to Uncover User Pain Points, Not Just Bugs

Many teams treat Quality Assurance as a checklist: Does the feature work? Are there any functional bugs? But often, a technically flawless product still delivers a frustrating user experience—full of friction, confusion, and subtle design flaws that drive customers away. These are pain points, and they are the bugs QA is missing.

This session introduces the Empathy-Driven Test Plan (EDTP), a practical framework for integrating user-centric design principles directly into the QA process. We will move beyond writing tests based on requirements documents and instead show you how to design tests around user personas, empathy maps, and user journey mapping. You’ll learn how to proactively identify and prioritize the usability, clarity, and emotional impact of your software, ensuring your testing efforts focus on uncovering critical friction points, not just checking boxes, and using AI as a tool, not as a replacement.

Securing the Future of AI Agents: Navigating the Risks of MCP and LLM Integration

As large language models (LLMs) gain the ability to act, browse, automate, and interact with real-world systems via the Model Context Protocol (MCP), they also expose new and unpredictable attack surfaces.

In this talk, I’ll introduce MCP — the protocol powering tool-using agents — and walk through the emerging security threats it brings, including tool injection, prompt exploits, session hijacking, and remote code execution. We’ll explore practical, field-tested defenses and governance strategies that can help teams build AI-enabled systems that are powerful and safe.

Whether you're a developer integrating LLMs or a manager responsible for shipping secure AI products, this session will equip you with the mental models, examples, and frameworks to secure your agentic architectures.

Key Takeaways
Understand MCP: What the Model Context Protocol is, how it works, and why it matters in agentic AI systems.
Recognize Security Risks: Learn the top threats including tool injection, session hijacking, and RCE — with real-world inspired examples.
Apply Defensive Design: Discover actionable mitigations like tool whitelisting, RBAC, sandboxing, and red teaming for AI workflows.
Go Beyond the Model: See why prompt injection and data leakage aren’t just "prompt problems" but architectural concerns.
Stay Ahead: Get a curated resource list of blogs, OWASP guidance, and security tools for AI risk management.
