Session
When AI Agents Go Rogue: Securing the New Attack Surface
AI agents are evolving from helpful assistants into autonomous teammates with real system access, but what happens when they go rogue? As AI agents gain access to repositories, infrastructure, and sensitive data through the Model Context Protocol (MCP), we're seeing new attack vectors that traditional security tools can't handle: AI worms that self-replicate through prompts, malicious MCP servers, and automated credential theft at unprecedented scale. GitHub became the first platform to build secret scanning directly into AI tool calls. We'll demonstrate live attacks, show real-time protection in action, and give you practical strategies for securing your AI workflows today, before your AI agents accidentally become your biggest security vulnerability.
Andrea Griffiths
Senior DevRel and Hypewoman at GitHub
Sarasota, Florida, United States