Keeping AI Agents in Check: Runtime Safety for Autonomous Systems

As AI agents gain autonomy and begin interacting in complex ways, traditional input and output filters fall short. Many risks arise in the window between an agent deciding on an action and executing it. This is especially true in multi-agent systems, where emergent behavior between agents is hard to predict.

This talk introduces a Python-based runtime safety layer for LLM agents built with frameworks like LangChain and Microsoft AutoGen. The layer enables real-time monitoring, policy checks, and intervention without modifying the agent’s core logic.
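To make the enforcement idea concrete, here is a minimal sketch of the wrapping pattern: a tool is guarded so every call is checked against runtime policies before it executes, leaving the agent's own logic untouched. All names here (SafetyLayer, PolicyViolation, the example policy) are illustrative assumptions, not the actual Safentic or framework API.

    from typing import Any, Callable

    class PolicyViolation(Exception):
        """Raised when a proposed action breaks a runtime policy."""

    class SafetyLayer:
        def __init__(self, policies: list[Callable[[str, dict], bool]]):
            # Each policy returns True if the proposed action is allowed.
            self.policies = policies

        def guard(self, tool: Callable[..., Any], name: str) -> Callable[..., Any]:
            """Wrap a tool so every call is checked before it executes."""
            def guarded(**kwargs: Any) -> Any:
                for policy in self.policies:
                    if not policy(name, kwargs):
                        raise PolicyViolation(f"blocked call to {name}: {kwargs}")
                return tool(**kwargs)
            return guarded

    # Example policy: block obviously destructive shell commands.
    def no_destructive_shell(name: str, kwargs: dict) -> bool:
        return not (name == "shell" and "rm -rf" in kwargs.get("command", ""))

    layer = SafetyLayer(policies=[no_destructive_shell])
    safe_shell = layer.guard(lambda command: f"ran: {command}", name="shell")

    safe_shell(command="ls -la")      # passes the policy check
    # safe_shell(command="rm -rf /")  # would raise PolicyViolation

Because the check sits between decision and execution, it can catch actions that an input or output filter never sees.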

The system also supports interoperability through the Model Context Protocol (MCP), using it to trace agent decisions across tools and execution steps.
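A minimal sketch of this kind of step-level traceability: each tool call is recorded with the agent, tool, arguments, and outcome so a run can be replayed during debugging. This illustrates the idea only; it is not the MCP wire format, and the names (TraceEvent, record) are hypothetical.

    import json
    import time
    from dataclasses import asdict, dataclass

    @dataclass
    class TraceEvent:
        agent: str
        tool: str
        arguments: dict
        outcome: str
        timestamp: float

    trace: list[TraceEvent] = []

    def record(agent: str, tool: str, arguments: dict, outcome: str) -> None:
        """Append one decision/execution step to the run's trace."""
        trace.append(TraceEvent(agent, tool, arguments, outcome, time.time()))

    record("planner", "web_search", {"query": "flight prices"}, "allowed")
    record("executor", "shell", {"command": "rm -rf /"}, "blocked")

    # Dump the trace for inspection or replay.
    print(json.dumps([asdict(e) for e in trace], indent=2))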

I will cover:

1. How multi-agent systems introduce new risks

2. What runtime enforcement enables beyond static filtering

3. Examples of real-time oversight using Python-based tools

4. How MCP supports debugging and traceability

5. Key lessons and open challenges in agent safety

This session is designed for developers, engineers, and researchers working with AI agents who want to build more reliable and aligned systems.

Marjia Siddik

Co-Founder & CEO at Safentic

Dublin, Ireland
