AURA: A Practical Risk Framework for Autonomous AI Agents

Autonomous AI agents are moving from experiments into systems that touch real customers, money and infrastructure, yet many teams still improvise their safety practices, and governance maturity remains one of the major barriers to AI deployment. This session presents AURA, an open-source Agent Autonomy Risk Assessment framework developed from research and production deployments at the University of Exeter. We turn diffuse concerns about "rogue agents" into concrete risk dimensions and a quantitative scoring model that engineers, product owners and risk stakeholders can use in a shared, repeatable way.
Using realistic failure scenarios for tool-using agents, we show how AURA helps you reason about autonomy levels, capability scope, tool access, oversight mechanisms and monitoring. The focus is on integrating risk thinking into your existing MLOps stack through checklists, scorecards and design templates, and on managing governance and compliance requirements. Attendees will leave with a reference risk-scoring tool, example assessments and a set of practical steps for embedding agent risk reviews into their current development and deployment lifecycle.
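To make the idea of a shared, repeatable scorecard concrete, here is a minimal sketch of a weighted risk scorecard over the five dimensions named above. The 1–5 rating scale, the normalisation and the risk bands are illustrative assumptions for this sketch, not AURA's actual scoring model.

```python
from dataclasses import dataclass

# The five risk dimensions from the abstract, each rated 1 (low risk)
# to 5 (high risk). For oversight and monitoring, a high rating means
# the mechanism is weak or absent.
DIMENSIONS = ("autonomy_level", "capability_scope", "tool_access",
              "oversight", "monitoring")


@dataclass
class AgentAssessment:
    autonomy_level: int
    capability_scope: int
    tool_access: int
    oversight: int    # 5 = weak oversight (higher risk)
    monitoring: int   # 5 = weak monitoring (higher risk)

    def score(self) -> float:
        """Return a normalised risk score in [0, 1]."""
        ratings = [getattr(self, d) for d in DIMENSIONS]
        if any(not 1 <= r <= 5 for r in ratings):
            raise ValueError("each dimension must be rated 1-5")
        # Shift the 1-5 scale to 0-4, then divide by the maximum total.
        return (sum(ratings) - len(ratings)) / (4 * len(ratings))

    def band(self) -> str:
        """Map the score onto illustrative low/medium/high review bands."""
        s = self.score()
        return "high" if s >= 0.66 else "medium" if s >= 0.33 else "low"


# Example: a customer-facing agent with broad tool access but
# strong oversight and monitoring in place.
agent = AgentAssessment(autonomy_level=4, capability_scope=3,
                        tool_access=4, oversight=2, monitoring=2)
print(round(agent.score(), 2), agent.band())  # → 0.5 medium
```

In practice a scorecard like this would live alongside the design templates and checklists in the MLOps pipeline, so that a "high" band automatically triggers a risk review before deployment.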

Lorenzo Satta Chiris

Director of Excode

Exeter, United Kingdom