Session
Beyond Red Teaming: The Failures of AI Security and the Path Forward
AI security is failing in ways we can’t afford to ignore. While AI Red Teaming has been a crucial tool for identifying vulnerabilities, it’s only one piece of the puzzle. The attack surface for AI systems is expanding from adversarial ML exploits to supply chain attacks, model theft, and prompt injection. Meanwhile, security defenses remain reactive, fragmented, and often ineffective against emerging threats.
In this talk, we’ll go beyond red teaming to examine the fundamental failures of AI security—where defenses break down, why current approaches fall short, and what it will take to build truly resilient AI systems. We’ll explore bleeding-edge AI security research, including MITRE ATLAS, the OWASP LLM Top 10, adversarial defenses, and AI supply chain integrity. We’ll also introduce an AI Security Maturity Model, a framework that helps organizations move from ad-hoc defenses to proactive, scalable security strategies.
Along the way, we’ll highlight key breakthroughs—including Scale AI’s own innovations like J2 (Jailbreaking to Jailbreak), which uses AI to red team AI—and discuss what it takes to stay ahead of threats in an AI-driven world. Whether you're a security professional, researcher, or executive, you’ll walk away with actionable insights to fix the gaps in AI security before attackers do.
David Campbell
AI Risk Security Platform Lead at Scale AI
Santa Cruz, California, United States