Speaker

Lorenzo Satta Chiris

Director of Excode

Exeter, United Kingdom

Lorenzo Satta Chiris is an AI researcher and systems engineer specialising in agent autonomy, multi-agent orchestration, and alignment. He co-developed the AURA framework, a risk assessment model for quantifying and monitoring autonomy in AI agents, and leads ongoing research on agent systems, adversarial LLM behaviour, alignment formalisation, and post-AGI governance as a Global Excellence Engineering and Entrepreneurship scholar at the University of Exeter.

Lorenzo has delivered talks at Google DevFest, Microsoft Azure AI Connect, AWS Agentic Hackathon, MLCon, the Cornish Tech Showcase, the Dutch AI Conference and others on guardrailed multi-agent systems, agentic systems in production, and human-AI interaction design. He serves as President of Google Developer Group Exeter and directed the world's largest student-led coding and AI bootcamps (2,500+ participants), winning Tech South West's Best & Most Innovative Organisation award.

As CTO of Oorbital, he architects and deploys guardrailed multi-agent systems in production, integrating RAG pipelines, LLM orchestration, and safety guardrails for industry clients. He also led a 500-student field study on multi-agent architectures for personalised education, worked on enterprise agent upskilling and consulting, and developed agentic frameworks for orchestration, the agentic web, and agent safety.

Through his blog, read by over 200,000 tech enthusiasts worldwide, and his work as a technical writer, Lorenzo connects AI research to practitioners, educators, and the broader public. He brings a rare profile to the stage, sitting at the intersection of production AI engineering, safety research, and large-scale education, focused on bringing safe, useful autonomy from lab to lecture hall to real-world operations.

Area of Expertise

  • Business & Management
  • Government, Social Sector & Education
  • Information & Communications Technology

Topics

  • Coding
  • Teach Tech
  • Teach Programming
  • Python
  • AI
  • Teach AI
  • Agents
  • AI Agents
  • Multi-Agent Systems
  • Autonomous Agents
  • Agentic AI / Autonomous Agents
  • SWE Agents
  • Conversational AI and virtual agents
  • Educational Technology
  • Artificial Intelligence in Higher Education
  • Computer Science Education
  • AI Alignment
  • AI Safety
  • Education
  • Technology
  • ML
  • Governance
  • Governance, Risk & Compliance
  • Higher Education
  • Computer Science
  • Engineering

With Great Autonomy Comes Great Responsibility: Building Safe & Ethical AI Agents in Production

As we rush to deploy autonomous AI agents in production, we're creating systems with unprecedented impact and decision-making power, from managing support tickets to executing financial transactions and supporting clinical decisions. But, as Uncle Ben (Spider-Man) wisely told us, with great autonomy comes great responsibility (and potential liability). Recent production failures (Claude blackmailing, the OpenAI suicide case, Grok's tweets, privacy scandals, the Replit database deletion, Air Canada's lawsuit over its chatbot's fictional policies…) underline the principle: if you think safety is expensive, try an accident. Because even if your AI agent won't pass the Turing Test, it might fail your unit tests... and then delete them.
Drawing from production deployments, AI upskilling programmes, and co-authoring the AURA (Agent Autonomy Risk Assessment) framework, this talk presents a practical safety playbook for developers and product teams. We show how theoretical AI principles can be turned into implementable code patterns and best practices, built on Google's Gemini API and Vertex AI, that every developer needs before shipping their next AI feature. Attendees leave with immediately actionable patterns that keep their AI agent from becoming the next cautionary tale, and stop Murphy's Law from escalating into a full Age of Ultron scenario.

Tech Without Tears: Bits and Bites on How to Teach Tech

Would you stop learning French because Google Translate exists?
And should you give up on learning to code because of AI?
Who will be obsolete first: the CEO of NVIDIA or coding?
Should you teach your coworker to code? What about your child?

To these questions, I answer with the words of Steve Jobs:
"Everybody should learn to program a computer, because it teaches you how to think."

Coding is not merely a set of words. It is a way of thinking, a problem-solving toolkit, and an approach to understanding the world. By teaching tech, we create connections—both human and neural—that unlock the potential for creativity, innovation, and a better future.

In my talk, I will cover why and how to learn and teach coding and tech, focusing on how to explain technical concepts, leverage AI to teach and learn tech, and democratise tech within your environment.

AURA: A Practical Risk Framework for Autonomous AI Agents

Autonomous AI agents are moving from experiments into systems that touch real customers, money, and infrastructure, yet many teams still improvise their safety practices, and maturity and governance remain among the biggest barriers to AI deployment. This session presents AURA, an open-source Agent Autonomy Risk Assessment framework developed from research and production deployments at the University of Exeter. We turn diffuse concerns about “rogue agents” into concrete risk dimensions and a quantitative scoring model that engineers, product owners, and risk stakeholders can use in a shared, repeatable way.
Using realistic failure scenarios for tool-using agents, we show how AURA helps you reason about autonomy levels, capability scope, tool access, oversight mechanisms, and monitoring. The focus is on integrating risk thinking into your existing MLOps stack through checklists, scorecards, and design templates, and on managing governance and compliance requirements. Attendees will leave with a reference risk-scoring tool, example assessments, and a set of practical steps for embedding agent risk reviews into their current development and deployment lifecycle.

A Post-AGI World: How We Share Responsibility For The Next Intelligence Leap

AGI is usually framed as either hype to overinvest in or catastrophe to fear, which makes it hard to treat rationally and professionally. In this keynote, inspired by a published paper on the topic, I reframe AGI as a technical, political, and economic transition that needs to be planned for, not just survived. Using clear, accessible scenarios drawn from original research, I sketch plausible post-AGI futures and transitions, and how organisations can brace for impact. We will talk about scalable intelligence, autonomy, risks, potential, and governance. The talk highlights how responsibility for the long-term trajectory of intelligent systems is quietly distributed across developers, companies, regulators, and civil society, and what it looks like in practice to act as a responsible steward in your own role.

Designing Human–AI Interactions: The UX/UI of the Future

As AI agents become autonomous decision-makers rather than passive tools, the way humans interact with them must evolve. This session explores the interaction models, UX patterns and interface primitives that will define human–AI collaboration in 2026 and beyond. We will examine how autonomy, reversibility, transparency and memory control reshape UI design, and why traditional chatbot and static website paradigms are no longer sufficient.
Using production examples from modern agentic systems, we walk through practical design patterns for proactive agents, multi-step collaborative workflows, alignment-aware interfaces and safety-centred feedback loops. Each pattern is paired with concrete implementation guidance developers can apply immediately to their own applications.

Orchestrating Guardrailed Multi-Agent Systems on Azure AI Foundry

Multi-agent systems are powerful, but without structured orchestration and orientation they become unpredictable and impossible to control. This session shows how to build reliable, governable multi-agent workflows using Azure AI Foundry and the Microsoft Agent Framework. We break down a real scenario into coordinated agent roles – researcher, planner, critic, executor, safety-checker – each with clear autonomy boundaries and escalation rules, along with MCP tools and human-in-the-loop protocols. Foundry’s native multi-agent layer replaces custom routing logic with explicit schemas, deterministic interactions, and policy-driven constraints that keep behaviour stable and observable.

DevOpsCon & MLCon 2026 Sessionize Event Upcoming

April 2026 Amsterdam, The Netherlands

AI in The New Era Sessionize Event Upcoming

April 2026

Dutch AI Conference 2026 Sessionize Event

March 2026 Amsterdam, The Netherlands

DevFest London 2025 Sessionize Event

November 2025 London, United Kingdom

AI Upskilling Sessions

Speaker and CTO, teaching university staff and students how to leverage AI and develop AI skills.

June 2025

The BIG Cornish Tech Showcase Sessionize Event

February 2025

ExAI

Organiser and Guest Speaker on the University of Exeter Theoretical AI course.

January 2025 Exeter, United Kingdom

Entrepreneurship Fest

The largest University of Exeter entrepreneurial event, with 100+ attendees.
I served as organiser and panel moderator.

November 2024 Exeter, United Kingdom

Excode

The largest UK coding bootcamp. I directed the project and also lectured.

October 2024 Exeter, United Kingdom
