David Campbell
Head of AI Security at Scale, AI
Boston, Massachusetts, United States
Actions
David Campbell is Head of AI Security at Scale AI, where he architects and leads some of the most advanced AI red teaming and resilience programs in the world. A two-decade veteran of Silicon Valley, David built his career at the intersection of infrastructure, security, and developer experience before becoming one of the industry’s most trusted voices on AI risk.
He pioneered Discovery, one of the first and largest large-scale AI Red Teaming platforms deployed across governments and Fortune 100 companies. His work has shaped how organizations probe, stress-test, and harden generative models against real adversaries, operational failures, and emergent behavior.
David’s expertise is sought globally. He has briefed the U.S. Congress, the White House, U.K. Parliament, NATO, Korea AISI, UK AISI, Qatar NCSA, and senior public-sector leaders on AI risk, misuse, and national resilience. He is a founding member of OWASP AIVSS and a core member of AIUC-1, the industry consortium defining how enterprises evaluate, insure, and underwrite AI systems. He also contributes to collaborative defense efforts such as CISA’s JCDC.AI series on AI-driven cyber threats.
Before Scale, David shaped platform security and engineering culture at Uber, DoorDash, and Nest Labs, where he built systems and practices that scaled to thousands of engineers. Across his career, he has been recognized for turning fragmented engineering environments into resilient, high-trust, high-standards organizations.
Today, David helps companies bridge the gap between rapid AI innovation and enterprise-grade safety. His mission is simple: align technology with the future of the business, and ensure that AI systems are deployed responsibly, securely, and with confidence.
Links
Area of Expertise
Topics
Beyond Red Teaming: The Failures of AI Security and the Path Forward
AI security is failing in ways we can’t afford to ignore. While AI Red Teaming has been a crucial tool for identifying vulnerabilities, it’s only one piece of the puzzle. The attack surface for AI systems is expanding from adversarial ML exploits to supply chain attacks, model theft, and prompt injection. Meanwhile, security defenses remain reactive, fragmented, and often ineffective against emerging threats.
In this talk, we’ll go beyond red teaming to examine the fundamental failures of AI security—where defenses break down, why current approaches fall short, and what it will take to build truly resilient AI systems. We’ll explore bleeding-edge AI security research, including MITRE ATLAS, the OWASP LLM Top 10, adversarial defenses, and AI supply chain integrity. We’ll also introduce an AI Security Maturity Model, a framework that helps organizations move from ad-hoc defenses to proactive, scalable security strategies.
Along the way, we’ll highlight key breakthroughs—including Scale AI’s own innovations like J2 (Jailbreaking to Jailbreak), which uses AI to red team AI—and discuss what it takes to stay ahead of threats in an AI-driven world. Whether you're a security professional, researcher, or executive, you’ll walk away with actionable insights to fix the gaps in AI security before attackers do.
Ignore Previous Instructions: Embracing AI Red Teaming
In this talk, we will explore the journey of Red Teaming from its origins to its transformation into AI Red Teaming, highlighting its pivotal role in shaping the future of Large Language Models (LLMs) and beyond. Drawing from my firsthand experiences developing and deploying the largest generative red teaming platform to date, I will share insightful anecdotes and real-world examples. We will explore how adversarial red teaming fortifies AI applications at every layer—protecting platforms, businesses, and consumers. This includes safeguarding the external application interface, reinforcing LLM guardrails, and enhancing the security of the LLMs' internal algorithms. Join me as we uncover the critical importance of adversarial strategies in securing the AI landscape.
Building Bulletproof Products: Embracing AI Red Teaming
In this talk, we’ll trace the evolution of Red Teaming, from its traditional roots to the cutting-edge practice of AI Red Teaming, demonstrating how it’s shaping the future of Large Language Models (LLMs) and other AI technologies. Drawing from my experience leading the development and deployment of the largest generative red teaming platform to date, I’ll share compelling real-world examples and personal insights. We’ll dive into how adversarial red teaming strengthens AI systems across all layers—protecting platforms, businesses, and consumers alike. From securing external application interfaces to fortifying LLM guardrails and improving the internal security of AI algorithms, we’ll explore the critical role of adversarial strategies in safeguarding the evolving AI landscape.
AI Red-Teaming, Hallucinations, & Risk
David Campbell’s presentation at the AI Infra Summit focused on the critical need for AI red teaming to identify vulnerabilities in AI systems before they can be exploited maliciously. He shared examples of AI failures, including instances where AI systems generated dangerous outputs like harmful recipes or illegal instructions.
The Real Last Mile: De-Risking Generative AI in Production
While moving generative AI proofs of concept to production is often considered a key final phase in the journey to achieving results, efficiently assessing and mitigating ongoing security, safety, and performance issues present additional hills to climb. In this round of lightning talks, hear from three innovative companies tackling real-time gen AI risk assessment and mitigation. Through demos and real-world examples, you’ll understand tools and best practices for programmatically addressing LLM application vulnerabilities across the development life cycle.
Domain-Limited General Intelligence: Before Things Go Too Far
Artificial intelligence is advancing faster than our safety frameworks can keep up, and the industry is drifting toward architectures that carry far more risk than we acknowledge. In this talk, David Campbell introduces Domain-Limited General Intelligence, a new conceptual tier that sits between today’s narrow systems and the open-ended ambitions of AGI and ASI. DLGI represents a path to smarter, more capable models that can generalize within defined boundaries without crossing into the dangerous territory of unbounded agency.
Attendees will learn why DLGI may be the safest evolutionary step for AI development, how it differs from traditional alignment strategies, and why unrestrained pushes toward broader generality create avoidable failure modes. David will break down real-world examples of emergent behavior, misaligned optimization, and adversarial dynamics, showing how DLGI offers a practical way to contain these risks.
This session gives practitioners, leaders, and researchers a new mental model for building powerful AI systems while preserving control, predictability, and trust. Before things go too far, we need a better tier of intelligence. This is the case for building it.
Agile + DevOpsDays Des Moines 2025 Sessionize Event
4th Annual Cyber Governance & Assurance Conference
Lead the AI Red Teaming Workhop and CTF teaching people about how and why AI Red Teaming is important.
AI Risk Summit 2025 Sessionize Event
AI DevSummit 2025 Sessionize Event
cackalackycon Sessionize Event
STL TechWeek 2025 Sessionize Event
SecjuiceCon 2025 Virtual Conference Sessionize Event
NVIDIA GTC 2025
Developers attend GTC to build new technical skills, connect with peers, and learn from leaders in their fields. From hands-on training to in-depth technical sessions, GTC provides a unique environment for advancing your expertise, tackling real-world challenges, and staying competitive in a rapidly evolving industry.
Business leaders recognize the significant impact AI is having on industries, from improved customer experiences to product innovation. With a broad range of sessions and networking opportunities, GTC delivers exclusive insights that can help your company thrive in the era of AI.
Apres-Cyber Slopes Summit 2025 Sessionize Event
DeveloperWeek 2025 Sessionize Event
ProductWorld 2025 Sessionize Event
Prompt Engineering Conference 2024 Sessionize Event
MLOps + Generative AI World 2024 Sessionize Event
Pacific Hackers Conference 2024 Sessionize Event
AI Summit Vancouver Sessionize Event
2024 All Day DevOps Sessionize Event
Cyber Back to School Sessionize Event
AI Infra Summit
David Campbell’s presentation at the AI Infra Summit focused on the critical need for AI red teaming to identify vulnerabilities in AI systems before they can be exploited maliciously. He shared examples of AI failures, including instances where AI systems generated dangerous outputs like harmful recipes or illegal instructions.
BSides Kraków 2024 Sessionize Event
InfoSec Nashville 2024 (Call For Speakers & Workshops) Sessionize Event
SummerCon
Summercon is one of the oldest hacker conventions, and the longest running such conference in America. It helped set a precedent for more modern “cons” such as H.O.P.E. and DEF CON, although it has remained smaller and more personal. Summercon has been hosted in cities such as Pittsburgh, St. Louis, Atlanta, New York, Washington, D.C., Austin, Las Vegas, and Amsterdam.
AI Risk Summit + CISO Forum Sessionize Event
David Campbell
Head of AI Security at Scale, AI
Boston, Massachusetts, United States
Links
Actions
Please note that Sessionize is not responsible for the accuracy or validity of the data provided by speakers. If you suspect this profile to be fake or spam, please let us know.
Jump to top