Most Active Speaker

David Campbell

David Campbell

Head of AI Security at Scale, AI

Boston, Massachusetts, United States

Actions

David Campbell is Head of AI Security at Scale AI, where he architects and leads some of the most advanced AI red teaming and resilience programs in the world. A two-decade veteran of Silicon Valley, David built his career at the intersection of infrastructure, security, and developer experience before becoming one of the industry’s most trusted voices on AI risk.

He pioneered Discovery, one of the first and largest large-scale AI Red Teaming platforms deployed across governments and Fortune 100 companies. His work has shaped how organizations probe, stress-test, and harden generative models against real adversaries, operational failures, and emergent behavior.

David’s expertise is sought globally. He has briefed the U.S. Congress, the White House, U.K. Parliament, NATO, Korea AISI, UK AISI, Qatar NCSA, and senior public-sector leaders on AI risk, misuse, and national resilience. He is a founding member of OWASP AIVSS and a core member of AIUC-1, the industry consortium defining how enterprises evaluate, insure, and underwrite AI systems. He also contributes to collaborative defense efforts such as CISA’s JCDC.AI series on AI-driven cyber threats.

Before Scale, David shaped platform security and engineering culture at Uber, DoorDash, and Nest Labs, where he built systems and practices that scaled to thousands of engineers. Across his career, he has been recognized for turning fragmented engineering environments into resilient, high-trust, high-standards organizations.

Today, David helps companies bridge the gap between rapid AI innovation and enterprise-grade safety. His mission is simple: align technology with the future of the business, and ensure that AI systems are deployed responsibly, securely, and with confidence.

Badges

  • Most Active Speaker 2025
  • Most Active Speaker 2024

Area of Expertise

  • Business & Management
  • Finance & Banking
  • Government, Social Sector & Education
  • Information & Communications Technology
  • Law & Regulation

Topics

  • Responsible AI
  • Responsible AI Principles
  • LLMs
  • cybersecurity
  • AI and Cybersecurity
  • Artificial Inteligence
  • Artificial Intelligence and Machine Learning for Cybersecurity
  • AI Risk
  • AI Research
  • Platform Engineering
  • prompt engineering
  • prompt patterns
  • Red Teaming
  • Red Team
  • AI Red Teaming
  • Adversarial AI
  • Cyber Security basics
  • cyber security
  • cybersecurity awareness
  • Cybersecurity Strategy
  • Emerging Cybersecurity Topics
  • Artificial Intelligence and machine learning
  • Artificial Intelligence
  • artificial intelligence security
  • AI Security
  • AI Safety
  • AI risk management
  • artificial intelligence risk
  • Engineering Excellence
  • Engineering Culture
  • Engineering Culture & Leadership
  • Risk
  • Banking Technology
  • Industrial and Regulated Environment
  • Cybersecurity Governance and Risk Management
  • Risk Management
  • Governance risk and compliance
  • Risk Assessments
  • Cybersecurity Risk Management
  • Risk Mitigation
  • AI Agents
  • AI Bias
  • AI in Health
  • AI Ethics
  • AI in Banking
  • AI in Supply Chain
  • SAP supply chain
  • chief AI officer
  • AI in Government
  • AI in Finance
  • AI IN EDUCATION
  • AI in Healthcare
  • Ai in business
  • Cybersecurity Regulations and Compliance
  • Existential Risk
  • AI governance and regulatory compliance
  • Information Security Governance and Risk
  • supply chain risk Management
  • Third Party Risk Management
  • Vendor Risk Management
  • IT Risk Management
  • Identity
  • Identity and Access Management
  • Identity Management
  • identity & authentication
  • Identity Governance
  • AI Safety institute
  • Research & Development
  • Research in Information Technology
  • AI Readiness
  • AI Reliability Engineering

AI Security Isn’t Broken. Our Mental Model Is.

When AI Can Act, Security Changes

Prompt injection exposed a real flaw in how we thought about controlling AI systems. But as models gained tools, memory, and autonomy, the security problem quietly moved.

Today’s failures aren’t about what models can be tricked into saying. They’re about what systems allow AI to do once it’s trusted to act. Agents inherit authority, permissions, and assumptions we never redesigned for.

This talk reframes AI security around systems, identity, and trust. It explains why securing prompts misses the real risk, how agents change the threat model, and where security actually needs to live now that AI is part of real workflows.

Domain-Limited General Intelligence: Before Things Go Too Far

Artificial intelligence is advancing faster than our safety frameworks can keep up, and the industry is drifting toward architectures that carry far more risk than we acknowledge. In this talk, David Campbell introduces Domain-Limited General Intelligence, a new conceptual tier that sits between today’s narrow systems and the open-ended ambitions of AGI and ASI. DLGI represents a path to smarter, more capable models that can generalize within defined boundaries without crossing into the dangerous territory of unbounded agency.

Attendees will learn why DLGI may be the safest evolutionary step for AI development, how it differs from traditional alignment strategies, and why unrestrained pushes toward broader generality create avoidable failure modes. David will break down real-world examples of emergent behavior, misaligned optimization, and adversarial dynamics, showing how DLGI offers a practical way to contain these risks.

This session gives practitioners, leaders, and researchers a new mental model for building powerful AI systems while preserving control, predictability, and trust. Before things go too far, we need a better tier of intelligence. This is the case for building it.

AI Red Teaming Room

Step into the AI Red Teaming Room and join experts from Scale AI for an interactive, hands-on experience where you’ll get to play the role of an adversary. In this session, you won’t just learn about AI vulnerabilities — you’ll exploit them. Engage directly in guided exercises designed to expose weaknesses in language models and other AI systems. Try your hand at crafting adversarial prompts to manipulate model behavior, bypass safeguards, and trigger unintended outputs.

Whether you're a security professional, AI researcher, policy expert, or just curious about how AI can go wrong, this is your chance to explore the limits of today's AI systems in a safe, controlled environment. Alongside the red-teaming challenges, you'll learn how these same systems can be defended, evaluated, and improved.

No prior experience with red teaming required — just bring your curiosity. Take 15–20 minutes to stop by, test your skills, and walk away with a deeper understanding of both the power and the fragility of modern AI.

AI by the Bay Sessionize Event

November 2025 Oakland, California, United States

Agile + DevOpsDays Des Moines 2025 Sessionize Event

October 2025 Des Moines, Iowa, United States

4th Annual Cyber Governance & Assurance Conference

Lead the AI Red Teaming Workhop and CTF teaching people about how and why AI Red Teaming is important.

September 2025 Doha, Qatar

AI Risk Summit 2025 Sessionize Event

August 2025 Half Moon Bay, California, United States

AI DevSummit 2025 Sessionize Event

May 2025 South San Francisco, California, United States

cackalackycon Sessionize Event

May 2025 Durham, North Carolina, United States

STL TechWeek 2025 Sessionize Event

March 2025 St. Louis, Missouri, United States

SecjuiceCon 2025 Virtual Conference Sessionize Event

March 2025

NVIDIA GTC 2025

Developers attend GTC to build new technical skills, connect with peers, and learn from leaders in their fields. From hands-on training to in-depth technical sessions, GTC provides a unique environment for advancing your expertise, tackling real-world challenges, and staying competitive in a rapidly evolving industry.

Business leaders recognize the significant impact AI is having on industries, from improved customer experiences to product innovation. With a broad range of sessions and networking opportunities, GTC delivers exclusive insights that can help your company thrive in the era of AI.

March 2025 San Jose, California, United States

Apres-Cyber Slopes Summit 2025 Sessionize Event

March 2025 Park City, Utah, United States

Devnexus 2025 Sessionize Event

March 2025 Atlanta, Georgia, United States

DeveloperWeek 2025 Sessionize Event

February 2025 Santa Clara, California, United States

ProductWorld 2025 Sessionize Event

February 2025 Santa Clara, California, United States

Prompt Engineering Conference 2024 Sessionize Event

November 2024

MLOps + Generative AI World 2024 Sessionize Event

November 2024 Austin, Texas, United States

Pacific Hackers Conference 2024 Sessionize Event

November 2024 Mountain View, California, United States

AI Summit Vancouver Sessionize Event

November 2024 Vancouver, Canada

2024 All Day DevOps Sessionize Event

October 2024

Cyber Back to School Sessionize Event

October 2024

AI Infra Summit

David Campbell’s presentation at the AI Infra Summit focused on the critical need for AI red teaming to identify vulnerabilities in AI systems before they can be exploited maliciously. He shared examples of AI failures, including instances where AI systems generated dangerous outputs like harmful recipes or illegal instructions.

September 2024 San Francisco, California, United States

BSides Kraków 2024 Sessionize Event

September 2024 Kraków, Poland

InfoSec Nashville 2024 (Call For Speakers & Workshops) Sessionize Event

September 2024 Nashville, Tennessee, United States

SummerCon

Summercon is one of the oldest hacker conventions, and the longest running such conference in America. It helped set a precedent for more modern “cons” such as H.O.P.E. and DEF CON, although it has remained smaller and more personal. Summercon has been hosted in cities such as Pittsburgh, St. Louis, Atlanta, New York, Washington, D.C., Austin, Las Vegas, and Amsterdam.

July 2024 Brooklyn, New York, United States

AI Risk Summit + CISO Forum Sessionize Event

June 2024 Half Moon Bay, California, United States

David Campbell

Head of AI Security at Scale, AI

Boston, Massachusetts, United States

Actions

Please note that Sessionize is not responsible for the accuracy or validity of the data provided by speakers. If you suspect this profile to be fake or spam, please let us know.

Jump to top