Building Bulletproof Products: Embracing AI Red Teaming

In this talk, we’ll trace the evolution of Red Teaming, from its traditional roots to the cutting-edge practice of AI Red Teaming, demonstrating how it’s shaping the future of Large Language Models (LLMs) and other AI technologies. Drawing from my experience leading the development and deployment of the largest generative red teaming platform to date, I’ll share compelling real-world examples and personal insights. We’ll dive into how adversarial red teaming strengthens AI systems across all layers—protecting platforms, businesses, and consumers alike. From securing external application interfaces to fortifying LLM guardrails and improving the internal security of AI algorithms, we’ll explore the critical role of adversarial strategies in safeguarding the evolving AI landscape.

David Campbell

AI Risk Security Platform Lead at Scale AI

Santa Cruz, California, United States
