Runtime Safety for Autonomous Agents with Azure AI
As AI agents gain autonomy and access to tools, traditional prompt and output filters no longer provide sufficient safeguards. Many risks emerge between an agent’s decision and its execution, especially in multi-agent systems where behaviour becomes harder to predict.
This talk introduces a Python-based runtime enforcement layer for LLM agents that works with Azure OpenAI, LangChain, and Microsoft AutoGen. The system enables real-time policy checks, logging, and blocking of unsafe tool calls, without modifying the agent’s core logic. It can be deployed via Azure Functions for lightweight runtime control and integrates with the Model Context Protocol (MCP) for traceability.
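To make the interception model concrete, here is a minimal Python sketch of such an enforcement layer: a decorator that checks every pending tool call against a list of policies, logs the decision, and blocks violations. All names here (ToolCall, guarded, no_shell_commands) are illustrative assumptions, not the actual API presented in the talk.

```python
import logging
from dataclasses import dataclass, field
from typing import Any, Callable

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent.guard")

@dataclass
class ToolCall:
    """A pending tool invocation, captured before execution."""
    tool: str
    args: dict[str, Any] = field(default_factory=dict)

class PolicyViolation(Exception):
    pass

# A policy is any predicate over a pending call; real policies might consult
# allowlists, argument schemas, or an external service.
Policy = Callable[[ToolCall], bool]

def no_shell_commands(call: ToolCall) -> bool:
    return call.tool != "shell"

def guarded(name: str, policies: list[Policy]):
    """Wrap a tool function so each call is checked, logged, and possibly blocked."""
    def decorator(fn: Callable[..., Any]) -> Callable[..., Any]:
        def wrapped(**kwargs: Any) -> Any:
            call = ToolCall(tool=name, args=kwargs)
            for policy in policies:
                if not policy(call):
                    log.warning("blocked: %s %s", call.tool, call.args)
                    raise PolicyViolation(f"policy blocked {call.tool}")
            log.info("allowed: %s %s", call.tool, call.args)
            return fn(**kwargs)
        return wrapped
    return decorator

@guarded("search", policies=[no_shell_commands])
def search(query: str) -> str:
    # Stand-in for a real agent tool; the guard runs before this body.
    return f"results for {query!r}"
```

Because enforcement lives in the wrapper rather than the agent, the same pattern can wrap LangChain or AutoGen tool callables without touching agent logic.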
I will cover:

- Where runtime risks emerge in agent behaviour
- How to intercept and enforce policy at runtime
- Python-based enforcement examples using Azure AI (a deployment sketch follows this list)
- What MCP enables for multi-agent oversight and debugging
- Lessons from early deployments and open safety questions
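For the deployment angle in the third bullet, here is a minimal sketch of how a policy check could be hosted as an Azure Functions HTTP endpoint using the v2 Python programming model. The route name, payload shape, and denylist are assumptions for illustration, not the talk's actual service.

```python
# Sketch only: route, payload shape, and DENYLIST are hypothetical.
import json
import azure.functions as func

app = func.FunctionApp(http_auth_level=func.AuthLevel.FUNCTION)

DENYLIST = {"shell", "delete_resource"}  # hypothetical unsafe tools

@app.route(route="policy_check", methods=["POST"])
def policy_check(req: func.HttpRequest) -> func.HttpResponse:
    """Agents POST a pending tool call; the response says allow or block."""
    call = req.get_json()  # expected shape: {"tool": str, "args": dict}
    allowed = call.get("tool") not in DENYLIST
    body = json.dumps({"allowed": allowed, "tool": call.get("tool")})
    return func.HttpResponse(
        body,
        mimetype="application/json",
        status_code=200 if allowed else 403,
    )
```

An agent-side guard would POST each pending call here and treat anything other than a 200 response as a block, keeping the policy logic centrally updatable and out of the agents themselves.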
This session is aimed at developers and engineers who build autonomous or multi-agent systems and want stronger runtime safety and operational visibility.