Plugging Security Holes in LLMs and MCP Servers: Insight From 5,000 Customer Calls
The long-running joke is that "the S in MCP stands for security," and it's no secret: just about every organization is talking about it. Aside from prompt injection, MCP server security is arguably the biggest issue in AI security right now.
Across more than 5,000 customer calls about AI in the past year, two major conversation points have emerged:
1. Securing agentic infrastructure and MCP server connectivity (for both users and agents), and ensuring a proper AI gateway exists to secure and observe traffic.
2. Access control and OAuth for connecting to MCP servers and LLMs.
Whether the transport is stdio (the server runs locally, like a library or module) or streamable HTTP (the server sits in someone's environment and is reached over the network), organizations need to ask how they implement auth at both the system and user level for access to these MCP servers (and from their agents), which tools the MCP servers expose, and how the tunnel from user or agent to MCP server is observed and secured.
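The two transports can be sketched side by side. This is a minimal illustration, not the session's material: the endpoint URL and token are hypothetical, and both channels carry the same JSON-RPC messages, so the point is where identity attaches in each case.

```python
import json

# The same JSON-RPC request works over either MCP transport;
# only the channel (and where auth lives) differs.
list_tools_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

def frame_for_stdio(request: dict) -> bytes:
    """stdio transport: newline-delimited JSON written to a locally
    spawned server process's stdin. Identity is effectively the OS
    user running the process, so per-user auth must happen elsewhere."""
    return (json.dumps(request) + "\n").encode()

def frame_for_http(request: dict, token: str) -> tuple[str, dict, bytes]:
    """Streamable HTTP transport: POST the message to a remote
    endpoint, carrying the user's or agent's identity as a bearer
    token the server (or a gateway in front of it) can verify."""
    headers = {
        "Content-Type": "application/json",
        "Accept": "application/json, text/event-stream",
        "Authorization": f"Bearer {token}",  # hypothetical token
    }
    # Hypothetical endpoint, for illustration only.
    return ("https://mcp.example.com/mcp", headers, json.dumps(request).encode())

print(frame_for_stdio(list_tools_request))
url, headers, body = frame_for_http(list_tools_request, "example-token")
print(url, headers["Authorization"])
```

The design point: with stdio there is no network hop to intercept, so observation and access control rely on host-level controls; with streamable HTTP, the `Authorization` header is the natural place for JWT/OAuth credentials and the tunnel is something a gateway can inspect.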
When organizations adopt LLM providers, the same question arises: who can use these LLMs, and what can they do with that access?
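Those two questions can be reduced to a per-principal policy check at the point where traffic enters the provider. A minimal sketch, assuming an illustrative in-memory policy table (the group names, model names, and actions are made up):

```python
# Hypothetical policy table: which principals may call which models,
# and what they may do with the access.
POLICY = {
    "data-team": {"models": {"gpt-4o"},      "actions": {"chat"}},
    "agents":    {"models": {"gpt-4o-mini"}, "actions": {"chat", "embeddings"}},
}

def is_allowed(principal: str, model: str, action: str) -> bool:
    """Answer both questions: can this principal use this LLM at all,
    and is this particular action permitted with that access?"""
    rule = POLICY.get(principal)
    return bool(rule) and model in rule["models"] and action in rule["actions"]

print(is_allowed("agents", "gpt-4o-mini", "chat"))      # True
print(is_allowed("data-team", "gpt-4o", "embeddings"))  # False: action not granted
print(is_allowed("unknown", "gpt-4o", "chat"))          # False: no rule at all
```

In practice the principal would come from a verified JWT or OIDC claim rather than a string, and the policy would live in the gateway's configuration, but the decision shape is the same.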
In this session, you'll learn how to plug these security holes by understanding the current transport standards (stdio and streamable HTTP), authentication at both the system and user level (JWT, OAuth, and OIDC), where an AI gateway can help secure traffic throughout the tunnel, and how to control which tools MCP servers expose using traffic policies.
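One way a gateway traffic policy can control tool exposure is to filter the MCP server's `tools/list` result against an allowlist before it ever reaches the user or agent. A sketch under assumed names (the tools and the policy shape are hypothetical, not from the session):

```python
# Gateway-side filter: only tools named in the policy survive the
# trip from the MCP server to the user/agent.
def filter_tools(tools_list_result: dict, allowed: set) -> dict:
    tools = tools_list_result.get("tools", [])
    return {**tools_list_result,
            "tools": [t for t in tools if t["name"] in allowed]}

# Hypothetical upstream response from an MCP server.
upstream = {"tools": [
    {"name": "read_file",   "description": "Read a file"},
    {"name": "delete_file", "description": "Delete a file"},
]}

policy = {"read_file"}  # destructive tools stay hidden

print(filter_tools(upstream, policy))
```

A complete gateway would also reject `tools/call` requests for disallowed tool names, since filtering the listing alone doesn't stop a client that already knows a tool's name.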
Michael Levan
Building High-Performing Agentic and Kubernetes Environments | AI Architect | CNCF Ambassador | 4x Published Author & International Public Speaker
Saddle Brook, New Jersey, United States