AI Security Isn’t Broken. Our Mental Model Is.
When AI Can Act, Security Changes
Prompt injection exposed a real flaw in how we thought about controlling AI systems. But as models gained tools, memory, and autonomy, the security problem quietly moved.
Today’s failures aren’t about what models can be tricked into saying. They’re about what systems allow AI to do once it’s trusted to act. Agents inherit authority, permissions, and assumptions from systems we never redesigned for them.
This talk reframes AI security around systems, identity, and trust. It explains why securing prompts misses the real risk, how agents change the threat model, and where security actually needs to live now that AI is part of real workflows.
David Campbell
Head of AI Security at Scale AI
Boston, Massachusetts, United States