David Campbell
Head of AI Security at Scale AI
Boston, Massachusetts, United States
David Campbell is Head of AI Security at Scale AI, where he architects and leads some of the most advanced AI red teaming and resilience programs in the world. A two-decade veteran of Silicon Valley, David built his career at the intersection of infrastructure, security, and developer experience before becoming one of the industry’s most trusted voices on AI risk.
He pioneered Discovery, one of the first and largest AI red teaming platforms deployed across governments and Fortune 100 companies. His work has shaped how organizations probe, stress-test, and harden generative models against real adversaries, operational failures, and emergent behavior.
David’s expertise is sought globally. He has briefed the U.S. Congress, the White House, the U.K. Parliament, NATO, Korea AISI, UK AISI, Qatar NCSA, and senior public-sector leaders on AI risk, misuse, and national resilience. He is a founding member of OWASP AIVSS and a core member of AIUC-1, the industry consortium defining how enterprises evaluate, insure, and underwrite AI systems. He also contributes to collaborative defense efforts such as CISA’s JCDC.AI series on AI-driven cyber threats.
Before Scale, David shaped platform security and engineering culture at Uber, DoorDash, and Nest Labs, where he built systems and practices that scaled to thousands of engineers. Across his career, he has been recognized for turning fragmented engineering environments into resilient, high-trust, high-standards organizations.
Today, David helps companies bridge the gap between rapid AI innovation and enterprise-grade safety. His mission is simple: align technology with the future of the business, and ensure that AI systems are deployed responsibly, securely, and with confidence.
Area of Expertise
Topics
AI Security Isn’t Broken. Our Mental Model Is.
When AI Can Act, Security Changes
Prompt injection exposed a real flaw in how we thought about controlling AI systems. But as models gained tools, memory, and autonomy, the security problem quietly moved.
Today’s failures aren’t about what models can be tricked into saying. They’re about what systems allow AI to do once it’s trusted to act. Agents inherit authority, permissions, and assumptions we never redesigned for.
This talk reframes AI security around systems, identity, and trust. It explains why securing prompts misses the real risk, how agents change the threat model, and where security actually needs to live now that AI is part of real workflows.
Domain-Limited General Intelligence: Before Things Go Too Far
Artificial intelligence is advancing faster than our safety frameworks can keep up, and the industry is drifting toward architectures that carry far more risk than we acknowledge. In this talk, David Campbell introduces Domain-Limited General Intelligence, a new conceptual tier that sits between today’s narrow systems and the open-ended ambitions of AGI and ASI. DLGI represents a path to smarter, more capable models that can generalize within defined boundaries without crossing into the dangerous territory of unbounded agency.
Attendees will learn why DLGI may be the safest evolutionary step for AI development, how it differs from traditional alignment strategies, and why unrestrained pushes toward broader generality create avoidable failure modes. David will break down real-world examples of emergent behavior, misaligned optimization, and adversarial dynamics, showing how DLGI offers a practical way to contain these risks.
This session gives practitioners, leaders, and researchers a new mental model for building powerful AI systems while preserving control, predictability, and trust. Before things go too far, we need a better tier of intelligence. This is the case for building it.
AI Red Teaming Room
Step into the AI Red Teaming Room and join experts from Scale AI for an interactive, hands-on experience where you’ll get to play the role of an adversary. In this session, you won’t just learn about AI vulnerabilities — you’ll exploit them. Engage directly in guided exercises designed to expose weaknesses in language models and other AI systems. Try your hand at crafting adversarial prompts to manipulate model behavior, bypass safeguards, and trigger unintended outputs.
Whether you're a security professional, AI researcher, policy expert, or just curious about how AI can go wrong, this is your chance to explore the limits of today's AI systems in a safe, controlled environment. Alongside the red-teaming challenges, you'll learn how these same systems can be defended, evaluated, and improved.
No prior experience with red teaming required — just bring your curiosity. Take 15–20 minutes to stop by, test your skills, and walk away with a deeper understanding of both the power and the fragility of modern AI.
AI by the Bay Sessionize Event
Agile + DevOpsDays Des Moines 2025 Sessionize Event
4th Annual Cyber Governance & Assurance Conference
Led the AI Red Teaming Workshop and CTF, teaching attendees how and why AI red teaming is important.
AI Risk Summit 2025 Sessionize Event
AI DevSummit 2025 Sessionize Event
CackalackyCon Sessionize Event
STL TechWeek 2025 Sessionize Event
SecjuiceCon 2025 Virtual Conference Sessionize Event
NVIDIA GTC 2025
Developers attend GTC to build new technical skills, connect with peers, and learn from leaders in their fields. From hands-on training to in-depth technical sessions, GTC provides a unique environment for advancing your expertise, tackling real-world challenges, and staying competitive in a rapidly evolving industry.
Business leaders recognize the significant impact AI is having on industries, from improved customer experiences to product innovation. With a broad range of sessions and networking opportunities, GTC delivers exclusive insights that can help your company thrive in the era of AI.
Apres-Cyber Slopes Summit 2025 Sessionize Event
Devnexus 2025 Sessionize Event
DeveloperWeek 2025 Sessionize Event
ProductWorld 2025 Sessionize Event
Prompt Engineering Conference 2024 Sessionize Event
MLOps + Generative AI World 2024 Sessionize Event
Pacific Hackers Conference 2024 Sessionize Event
AI Summit Vancouver Sessionize Event
2024 All Day DevOps Sessionize Event
Cyber Back to School Sessionize Event
AI Infra Summit
David Campbell’s presentation at the AI Infra Summit focused on the critical need for AI red teaming to identify vulnerabilities in AI systems before they can be exploited maliciously. He shared examples of AI failures, including instances where AI systems generated dangerous outputs like harmful recipes or illegal instructions.
BSides Kraków 2024 Sessionize Event
InfoSec Nashville 2024 (Call For Speakers & Workshops) Sessionize Event
SummerCon
Summercon is one of the oldest hacker conventions and the longest-running such conference in America. It helped set a precedent for more modern “cons” such as H.O.P.E. and DEF CON, although it has remained smaller and more personal. Summercon has been hosted in cities such as Pittsburgh, St. Louis, Atlanta, New York, Washington, D.C., Austin, Las Vegas, and Amsterdam.
AI Risk Summit + CISO Forum Sessionize Event