
Failure Modes: What Breaks When AI Agents Break

Every AI agent has failure modes. Some are obvious: hallucinations, prompt injection, goal misalignment. Others are subtle: the slow drift of autonomous systems away from original intent, the compounding of micro-errors into macro-disasters, the black-box nature of emergent behaviors.
The organizations that survive the agentic transition won't be the ones with perfect AI; they'll be the ones with robust failure handling.
Drawing on Heather's experience managing the 2013 Evernote breach (50 million users compromised overnight) and her work with startups navigating "what breaks when you scale," this talk delivers:
- A taxonomy of agent failure modes: hallucinations, drift, optimization pathology, goal misalignment
- Detection strategies for identifying agent failures before they cascade
- Human-in-the-loop architectures that maintain oversight without creating bottlenecks
- Organizational resilience patterns: when to trust agents, when to verify, when to override
- Case study analysis of real agent failures in production systems

Duration: 45–60 minutes (keynote) or half-day workshop
Audience: CROs, security leaders, engineering leaders, board members
Legacy Roots: Evernote breach experience + startup scaling expertise

Heather Wilde Renze

Unicorn Whisperer, CTO & Angel Investor

Las Vegas, Nevada, United States


