Session

With Great Autonomy Comes Great Responsibility: Building Safe & Ethical AI Agents in Production

As we rush to deploy autonomous AI agents in production, we're creating systems with unprecedented impact and decision-making power - from managing support tickets to executing financial transactions and supporting clinical decisions. But, as Uncle Ben (Spider-Man) wisely told us, with great autonomy comes great responsibility (and potential liability). Recent production failures (Claude blackmailing, the OpenAI suicide case, Grok tweets, privacy scandals, the Replit DB deletion, Air Canada's lawsuit over its chatbot's fictional policies...) underline the principle: if you think safety is expensive, try an accident. Because even if your AI agent won't pass the Turing Test, it might fail your unit tests... and then delete them.
Drawing on production deployments, experience running AI upskilling programmes, and co-authoring the AURA (Agent Autonomy Risk Assessment) framework, this talk presents a practical safety playbook for developers and product teams. We will show how theoretical AI principles can be turned into implementable code patterns and best practices, built on Google's Gemini API and Vertex AI, that every developer needs before shipping their next AI feature. You will leave with immediately actionable patterns that keep your AI agent from becoming the next cautionary tale - and stop Murphy's Law from escalating into a full Age of Ultron scenario.

Lorenzo Satta Chiris

Director of Excode

Exeter, United Kingdom
