Securing the Future of AI Agents: Navigating the Risks of MCP and LLM Integration
As large language models (LLMs) gain the ability to act, browse, automate, and interact with real-world systems via the Model Context Protocol (MCP), they also expose new and unpredictable attack surfaces.
In this talk, I’ll introduce MCP — the protocol powering tool-using agents — and walk through the emerging security threats it brings, including tool injection, prompt exploits, session hijacking, and remote code execution. We’ll explore practical, field-tested defenses and governance strategies that can help teams build AI-enabled systems that are powerful and safe.
Whether you're a developer integrating LLMs or a manager responsible for shipping secure AI products, this session will equip you with the mental models, examples, and frameworks to secure your agentic architectures.
Key Takeaways
Understand MCP: What the Model Context Protocol is, how it works, and why it matters in agentic AI systems.
Recognize Security Risks: Learn the top threats, including tool injection, session hijacking, and remote code execution (RCE), with examples inspired by real-world incidents.
Apply Defensive Design: Discover actionable mitigations like tool whitelisting, RBAC, sandboxing, and red teaming for AI workflows.
Go Beyond the Model: See why prompt injection and data leakage aren’t just "prompt problems" but architectural concerns.
Stay Ahead: Get a curated resource list of blogs, OWASP guidance, and security tools for AI risk management.
David Burns
Head of Developer Advocacy and Open Source
Bournemouth, United Kingdom