Session

Chaos Agents: When AI goes rogue with your data

The Model Context Protocol promises to unlock AI's true potential by connecting LLMs to real-world data and tools. But as with any emerging technology, this convenience introduces new security considerations that need careful examination.

This talk explores the current security landscape of MCP's architecture. We'll examine how tool poisoning attacks can compromise AI interactions, analyze why the protocol's trust model creates potential vulnerabilities, and investigate how malicious MCP servers could manipulate LLMs into executing unintended actions.

Through detailed examples, we'll show how these attack vectors work in practice and why the rapid proliferation of community-built MCP servers demands thoughtful security practices. You'll learn to identify potential risks in MCP implementations, understand the current limitations of the protocol's security model, and discover practical approaches for safely adopting MCP in production environments.

Key takeaways:

- Understanding MCP's current security challenges and attack patterns
- Practical risk assessment strategies for MCP adoption
- Security-first approaches to implementing MCP tools

MCP represents an exciting step forward for AI capabilities, but like any powerful technology, it requires informed implementation. Join us to understand the security implications of connecting AI to your data and how to navigate this evolving landscape responsibly.

30-45 minute session. Suitable for most audiences who have an understanding of or experience with LLMs such as Claude or ChatGPT.

David Denton

Engineering Lead // Founder // Trainer // Speaker // Kotlin GDE // Co-creator of http4k

London, United Kingdom

Actions

Please note that Sessionize is not responsible for the accuracy or validity of the data provided by speakers. If you suspect this profile to be fake or spam, please let us know.

Jump to top