
David Campbell
AI Risk Security Platform Lead at Scale AI
Santa Cruz, California, United States
David Campbell is a seasoned technology leader with nearly 20 years of experience in Silicon Valley's startup ecosystem, now spearheading Responsible AI initiatives at Scale AI. As the Lead AI Risk Engineer, David has been pivotal in developing a cutting-edge AI Red Teaming platform that marries ethical AI practices with rigorous security evaluations. His expertise has been recognized by the U.S. Congress and highlighted by the White House, underscoring his commitment to shaping a safer AI ecosystem. David's work extends to collaborative initiatives, such as participating in JCDC.AI's Cyber Tabletop Exercise (TTX) organized by CISA, which explores AI-driven cyber threats and defenses. With a deep background in Security, Core Infrastructure, and Platform Engineering, David actively drives discussions and actions that integrate responsible AI principles into practical security frameworks, aiming to nurture robust, ethical AI applications across industries.
Topics
Beyond Red Teaming: The Failures of AI Security and the Path Forward
AI security is failing in ways we can’t afford to ignore. While AI Red Teaming has been a crucial tool for identifying vulnerabilities, it’s only one piece of the puzzle. The attack surface for AI systems is expanding from adversarial ML exploits to supply chain attacks, model theft, and prompt injection. Meanwhile, security defenses remain reactive, fragmented, and often ineffective against emerging threats.
In this talk, we’ll go beyond red teaming to examine the fundamental failures of AI security—where defenses break down, why current approaches fall short, and what it will take to build truly resilient AI systems. We’ll explore bleeding-edge AI security research, including MITRE ATLAS, the OWASP LLM Top 10, adversarial defenses, and AI supply chain integrity. We’ll also introduce an AI Security Maturity Model, a framework that helps organizations move from ad-hoc defenses to proactive, scalable security strategies.
Along the way, we’ll highlight key breakthroughs—including Scale AI’s own innovations like J2 (Jailbreaking to Jailbreak), which uses AI to red team AI—and discuss what it takes to stay ahead of threats in an AI-driven world. Whether you're a security professional, researcher, or executive, you’ll walk away with actionable insights to fix the gaps in AI security before attackers do.
Ignore Previous Instructions: Embracing AI Red Teaming
In this talk, we will explore the journey of Red Teaming from its origins to its transformation into AI Red Teaming, highlighting its pivotal role in shaping the future of Large Language Models (LLMs) and beyond. Drawing from my firsthand experience developing and deploying the largest generative red teaming platform to date, I will share insightful anecdotes and real-world examples. We will examine how adversarial red teaming fortifies AI applications at every layer—protecting platforms, businesses, and consumers. This includes safeguarding the external application interface, reinforcing LLM guardrails, and enhancing the security of the LLMs' internal algorithms. Join me as we uncover the critical importance of adversarial strategies in securing the AI landscape.
Building Bulletproof Products: Embracing AI Red Teaming
In this talk, we’ll trace the evolution of Red Teaming, from its traditional roots to the cutting-edge practice of AI Red Teaming, demonstrating how it’s shaping the future of Large Language Models (LLMs) and other AI technologies. Drawing from my experience leading the development and deployment of the largest generative red teaming platform to date, I’ll share compelling real-world examples and personal insights. We’ll dive into how adversarial red teaming strengthens AI systems across all layers—protecting platforms, businesses, and consumers alike. From securing external application interfaces to fortifying LLM guardrails and improving the internal security of AI algorithms, we’ll explore the critical role of adversarial strategies in safeguarding the evolving AI landscape.
AI Red-Teaming, Hallucinations, & Risk
David Campbell’s presentation at the AI Infra Summit focused on the critical need for AI red teaming to identify vulnerabilities in AI systems before they can be exploited maliciously. He shared examples of AI failures, including instances where AI systems generated dangerous outputs like harmful recipes or illegal instructions.
AI DevSummit 2025 Sessionize Event Upcoming
CackalackyCon Sessionize Event Upcoming
STL TechWeek Sessionize Event
SecjuiceCon 2025 Virtual Conference Sessionize Event
NVIDIA GTC 2025
Developers attend GTC to build new technical skills, connect with peers, and learn from leaders in their fields. From hands-on training to in-depth technical sessions, GTC provides a unique environment for advancing your expertise, tackling real-world challenges, and staying competitive in a rapidly evolving industry.
Business leaders recognize the significant impact AI is having on industries, from improved customer experiences to product innovation. With a broad range of sessions and networking opportunities, GTC delivers exclusive insights that can help your company thrive in the era of AI.
Apres-Cyber Slopes Summit 2025 Sessionize Event
DeveloperWeek 2025 Sessionize Event
ProductWorld 2025 Sessionize Event
Prompt Engineering Conference 2024 Sessionize Event
MLOps + Generative AI World 2024 Sessionize Event
Pacific Hackers Conference 2024 Sessionize Event
AI Summit Vancouver Sessionize Event
2024 All Day DevOps Sessionize Event
Cyber Back to School Sessionize Event
AI Infra Summit
BSides Kraków 2024 Sessionize Event
InfoSec Nashville 2024 (Call For Speakers & Workshops) Sessionize Event
SummerCon
Summercon is one of the oldest hacker conventions and the longest-running such conference in America. It helped set a precedent for more modern “cons” such as H.O.P.E. and DEF CON, although it has remained smaller and more personal. Summercon has been hosted in cities such as Pittsburgh, St. Louis, Atlanta, New York, Washington, D.C., Austin, Las Vegas, and Amsterdam.
AI Risk Summit + CISO Forum Sessionize Event