Marjia Siddik
Co-Founder & CEO at Safentic
Dublin, Ireland
Marjia Siddik is the co-founder and CEO of Safentic, a runtime enforcement layer for multi-agent systems. She is a published AI safety researcher with experience in national and international AI policy initiatives, and is the youngest Microsoft MVP in the field of Trustworthy AI. Her work has been recognised through over 15 academic and industry scholarships.
Runtime Safety for Autonomous Agents with Azure AI
As AI agents gain autonomy and access to tools, traditional prompt and output filters no longer provide sufficient safeguards. Many risks emerge between an agent’s decision and its execution, especially in multi-agent systems where behaviour becomes harder to predict.
This talk introduces a Python-based runtime enforcement layer for LLM agents that works with Azure OpenAI, LangChain, and Microsoft AutoGen. The system enables real-time policy checks, logging, and blocking of unsafe tool calls, without modifying the agent’s core logic. It can be deployed via Azure Functions for lightweight runtime control and integrates with the Model Context Protocol (MCP) for traceability.
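To ground the idea, here is a minimal, self-contained Python sketch of the pattern the abstract describes: wrapping each tool so proposed calls are checked against policies before they execute. All names (`guard_tool`, `deny_destructive_shell`, the toy rule) are invented for illustration and are not Safentic's actual API.

```python
from typing import Any, Callable, Optional

class PolicyViolation(Exception):
    """Raised when a tool call fails a runtime policy check."""

# A policy inspects a proposed call and returns a reason to block it, or None.
Policy = Callable[[str, dict], Optional[str]]

def deny_destructive_shell(tool_name: str, args: dict) -> Optional[str]:
    # Toy rule for illustration only.
    if tool_name == "shell" and "rm -rf" in args.get("command", ""):
        return "destructive shell command"
    return None

def guard_tool(tool_fn: Callable[..., Any], tool_name: str,
               policies: list[Policy]) -> Callable[..., Any]:
    """Wrap a tool so every invocation is checked, logged, and possibly blocked."""
    def wrapped(**kwargs: Any) -> Any:
        for policy in policies:
            reason = policy(tool_name, kwargs)
            if reason:
                print(f"[blocked] {tool_name} {kwargs}: {reason}")  # real systems log centrally
                raise PolicyViolation(reason)
        print(f"[allowed] {tool_name} {kwargs}")
        return tool_fn(**kwargs)
    return wrapped
```

Because the check wraps the tool rather than the model, the agent's prompts and planning loop stay untouched, which is the property the abstract emphasises.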
I will cover:
- Where runtime risks emerge in agent behaviour
- How to intercept and enforce policy at runtime
- Python-based enforcement examples using Azure AI (a minimal sketch follows this abstract)
- What MCP enables for multi-agent oversight and debugging
- Lessons from early deployments and open safety questions
This session is designed for developers and engineers building autonomous or multi-agent systems who aim to enhance runtime safety and operational visibility.
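Since the abstract mentions deployment via Azure Functions, here is a hedged sketch of what the policy check could look like as an HTTP endpoint, assuming the Azure Functions v2 Python programming model; the route, payload shape, and rule are assumptions for illustration.

```python
import json
import azure.functions as func

app = func.FunctionApp()

@app.route(route="policy/check", auth_level=func.AuthLevel.FUNCTION)
def policy_check(req: func.HttpRequest) -> func.HttpResponse:
    """An agent POSTs a proposed tool call; the function replies allow/block."""
    call = req.get_json()  # e.g. {"tool": "shell", "args": {"command": "..."}}
    command = call.get("args", {}).get("command", "")
    blocked = call.get("tool") == "shell" and "rm -rf" in command  # toy rule
    verdict = {"allow": not blocked,
               "reason": "destructive shell command" if blocked else None}
    return func.HttpResponse(json.dumps(verdict), mimetype="application/json")
```

Centralising the decision in a function keeps enforcement out of each agent's process, at the cost of a network hop per tool call.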
Keeping AI Agents in Check: Runtime Safety for Autonomous Systems
As AI agents gain autonomy and start interacting in complex ways, traditional input and output filters fall short. Many risks arise between the time an agent decides on an action and when it executes that action. This is especially true in multi-agent systems, where coordination becomes increasingly difficult to predict.
This talk introduces a Python-based runtime safety layer for LLM agents built with frameworks like LangChain and Microsoft AutoGen. The layer enables real-time monitoring, policy checks, and intervention without modifying the agent’s core logic.
The system also supports interoperability through the Model Context Protocol (MCP), which tracks agent decisions across tools and execution steps.
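As a concrete illustration of per-step traceability: the record below is not the MCP wire format, only an invented example of the kind of structured trail such tracking produces.

```python
import json
import time
from dataclasses import dataclass, field, asdict
from typing import Optional

@dataclass
class ToolTrace:
    agent: str
    tool: str
    arguments: dict
    verdict: str                      # "allowed" or "blocked"
    reason: Optional[str] = None
    timestamp: float = field(default_factory=time.time)

def record(trace: ToolTrace, path: str = "agent_trace.jsonl") -> None:
    """Append one structured record per tool invocation (JSONL keeps it greppable)."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(trace)) + "\n")
```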
I will cover:
1. How multi-agent systems introduce new risks
2. What runtime enforcement enables beyond static filtering
3. Examples of real-time oversight using Python-based tools
4. How MCP supports debugging and traceability
5. Key lessons and open challenges in agent safety
This session is designed for developers, engineers, and researchers working with AI agents who want to build more reliable and aligned systems.
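For the LangChain case specifically, one lightweight way to observe (and, by raising, halt) tool calls without modifying agent code is a callback handler. `BaseCallbackHandler` and `on_tool_start` are real LangChain hooks; the blocking rule is a toy assumption.

```python
from langchain_core.callbacks import BaseCallbackHandler

class SafetyCallback(BaseCallbackHandler):
    """Logs every tool start and halts the run on a (toy) policy breach."""
    raise_error = True  # re-raise our exception rather than swallowing it

    def on_tool_start(self, serialized: dict, input_str: str, **kwargs) -> None:
        tool_name = serialized.get("name", "unknown")
        print(f"[trace] tool={tool_name} input={input_str!r}")
        if "rm -rf" in input_str:  # illustrative rule only
            raise RuntimeError(f"blocked unsafe call to {tool_name}")
```

The handler would be attached through the `callbacks` argument when constructing or invoking the agent executor.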
Addressing Algorithmic Bias: Fostering Fair and Inclusive Healthcare
The integration of AI algorithms into healthcare has revolutionized patient care, with the potential to improve diagnosis, treatment, and overall health outcomes. However, this transformative technology also introduces the risk of algorithmic bias, which can have detrimental consequences for patients and perpetuate existing health disparities.
What will be discussed:
- The dangers of algorithmic bias in healthcare: how bias can infiltrate AI algorithms and lead to unfair treatment of marginalized groups.
- How bias can arise at any stage from data collection to algorithm implementation, and how it can distort healthcare decisions and lead to disparities in care for diverse patients.
- Real-world examples illustrating the implications of algorithmic bias in healthcare.
What the audience will learn:
- The significance of algorithmic bias in healthcare and its potential to harm patients.
- The dangers of biased AI algorithms, including misdiagnosis, delayed treatment, and poorer health outcomes for marginalized groups.
- The various ways in which bias can arise in healthcare algorithms, from data imbalances to flawed evaluation metrics.
Key Takeaways:
- Develop a comprehensive understanding of algorithmic bias in healthcare.
- Identify the potential sources of bias in AI algorithms.
- Become an advocate for responsible AI development and contribute to a healthier, more equitable future for all.
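To make the data-to-metrics point concrete, here is a small generic sketch (toy data, plain NumPy, not drawn from any real deployment) of the kind of subgroup check that surfaces the disparities the talk describes: comparing false negative rates across patient groups.

```python
import numpy as np

def false_negative_rate(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Share of true positive cases the model missed."""
    positives = y_true == 1
    return float(np.mean(y_pred[positives] == 0)) if positives.any() else 0.0

# Toy data: labels, predictions, and a demographic group per patient.
y_true = np.array([1, 1, 0, 1, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 0, 0, 0])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

for g in np.unique(group):
    mask = group == g
    print(f"group {g}: FNR = {false_negative_rate(y_true[mask], y_pred[mask]):.2f}")
```

A single aggregate accuracy number would hide exactly this gap, which is why subgroup metrics belong in any healthcare evaluation.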
Addressing Algorithmic Bias: Fostering Fair and Inclusive Healthcare
The integration of AI into healthcare holds both promise and peril. While AI algorithms can improve diagnosis and treatment, they also risk perpetuating existing health disparities if biases aren't addressed.
This talk will explore:
- The dangers of algorithmic bias, including misdiagnosis, delayed treatment, and poorer health outcomes for marginalized groups.
- How bias can creep in at every stage of AI development, from flawed data to the metrics used for evaluation.
- Real-world examples demonstrating the urgent need for change.
- A potential solution: Althical, a platform that puts physicians at the center of AI development to create bias-reducing and explainable solutions.
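On the flawed-data side, the first audit is often simple counting. A hedged, generic sketch that flags under-represented groups in a training set; the field name and 15% threshold are arbitrary assumptions for illustration.

```python
from collections import Counter

def flag_underrepresented(records: list[dict], key: str = "group",
                          min_share: float = 0.15) -> list[str]:
    """Return groups whose share of the training data falls below min_share."""
    counts = Counter(r[key] for r in records)
    total = sum(counts.values())
    return [g for g, n in counts.items() if n / total < min_share]

# Toy training set: group B sits at 10% and is flagged at the 15% threshold.
data = [{"group": "A"}] * 90 + [{"group": "B"}] * 10
print(flag_underrepresented(data))  # ['B']
```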
DevFest Ireland 2025 (upcoming)
Global Azure & AI Community Day (user group)
Impact the Future (IWD 2024)
Women Techmakers Belfast 2024