Hack the Helper: Exploiting AI Agents, MCP, LLMs, and the new AppSec frontier

In the late 1990s, the web felt harmless. Forms, cookies, and JavaScript looked like toys. Then XSS, session hijacking, and SQL injection taught us that every input, identity, and dependency is an attack surface.

We are repeating that mistake with AI.

Modern AI systems are no longer just chatbots. They are autonomous agents with memory, tools, API keys, vector databases, and access to internal systems. They can read files, call APIs, send messages, execute workflows, and make decisions on behalf of users and enterprises.

That makes them extremely valuable targets.

In this talk, we will treat AI systems the way hackers treat web applications. We will map classic attack techniques to their AI equivalents, including prompt injection, vector poisoning, tool hijacking, MCP supply-chain attacks, agent identity abuse, and data exfiltration through “helpful” AI workflows.

Using the OWASP Top 10 for LLM Applications and the emerging Model Context Protocol (MCP) ecosystem, we will show how untrusted prompts, untrusted embeddings, and untrusted tools combine into a perfect storm. A single malicious input can cause an AI agent to misuse its credentials, leak sensitive data, or execute unintended actions, all without malware, exploits, or shell access.

This session is not about broken models. It is about broken trust boundaries.

If you understand XSS, CSRF, dependency confusion, and token theft, you already understand how to hack AI. This talk shows how the same mistakes are being made again, only now the targets are autonomous systems running with enterprise-grade privileges.

Eric Boyd

Founder & CEO, responsiveX, Azure & AI MVP, Microsoft RD

Chicago, Illinois, United States
