Most Active Speaker

David Campbell

David Campbell

Head of AI Security at Scale, AI

Boston, Massachusetts, United States

Actions

David Campbell is Head of AI Security at Scale AI, where he architects and leads some of the most advanced AI red teaming and resilience programs in the world. A two-decade veteran of Silicon Valley, David built his career at the intersection of infrastructure, security, and developer experience before becoming one of the industry’s most trusted voices on AI risk.

He pioneered Discovery, one of the first and largest large-scale AI Red Teaming platforms deployed across governments and Fortune 100 companies. His work has shaped how organizations probe, stress-test, and harden generative models against real adversaries, operational failures, and emergent behavior.

David’s expertise is sought globally. He has briefed the U.S. Congress, the White House, U.K. Parliament, NATO, Korea AISI, UK AISI, Qatar NCSA, and senior public-sector leaders on AI risk, misuse, and national resilience. He is a founding member of OWASP AIVSS and a core member of AIUC-1, the industry consortium defining how enterprises evaluate, insure, and underwrite AI systems. He also contributes to collaborative defense efforts such as CISA’s JCDC.AI series on AI-driven cyber threats.

Before Scale, David shaped platform security and engineering culture at Uber, DoorDash, and Nest Labs, where he built systems and practices that scaled to thousands of engineers. Across his career, he has been recognized for turning fragmented engineering environments into resilient, high-trust, high-standards organizations.

Today, David helps companies bridge the gap between rapid AI innovation and enterprise-grade safety. His mission is simple: align technology with the future of the business, and ensure that AI systems are deployed responsibly, securely, and with confidence.

Badges

  • Most Active Speaker 2024

Area of Expertise

  • Business & Management
  • Finance & Banking
  • Government, Social Sector & Education
  • Information & Communications Technology
  • Law & Regulation

Topics

  • Responsible AI
  • Responsible AI Principles
  • LLMs
  • cybersecurity
  • AI and Cybersecurity
  • Artificial Inteligence
  • Artificial Intelligence and Machine Learning for Cybersecurity
  • AI Risk
  • AI Research
  • Platform Engineering
  • prompt engineering
  • prompt patterns
  • Red Teaming
  • Red Team
  • AI Red Teaming
  • Adversarial AI
  • Cyber Security basics
  • cyber security
  • cybersecurity awareness
  • Cybersecurity Strategy
  • Emerging Cybersecurity Topics
  • Artificial Intelligence and machine learning
  • Artificial Intelligence
  • artificial intelligence security
  • AI Security
  • AI Safety
  • AI risk management
  • artificial intelligence risk
  • Engineering Excellence
  • Engineering Culture
  • Engineering Culture & Leadership
  • Risk
  • Banking Technology
  • Industrial and Regulated Environment
  • Cybersecurity Governance and Risk Management
  • Risk Management
  • Governance risk and compliance
  • Risk Assessments
  • Cybersecurity Risk Management
  • Risk Mitigation
  • AI Agents
  • AI Bias
  • AI in Health
  • AI Ethics
  • AI in Banking
  • AI in Supply Chain
  • SAP supply chain
  • chief AI officer
  • AI in Government
  • AI in Finance
  • AI IN EDUCATION
  • AI in Healthcare
  • Ai in business
  • Cybersecurity Regulations and Compliance
  • Existential Risk
  • AI governance and regulatory compliance
  • Information Security Governance and Risk
  • supply chain risk Management
  • Third Party Risk Management
  • Vendor Risk Management
  • IT Risk Management
  • Identity
  • Identity and Access Management
  • Identity Management
  • identity & authentication
  • Identity Governance
  • AI Safety institute
  • Research & Development
  • Research in Information Technology
  • AI Readiness
  • AI Reliability Engineering

Beyond Red Teaming: The Failures of AI Security and the Path Forward

AI security is failing in ways we can’t afford to ignore. While AI Red Teaming has been a crucial tool for identifying vulnerabilities, it’s only one piece of the puzzle. The attack surface for AI systems is expanding from adversarial ML exploits to supply chain attacks, model theft, and prompt injection. Meanwhile, security defenses remain reactive, fragmented, and often ineffective against emerging threats.

In this talk, we’ll go beyond red teaming to examine the fundamental failures of AI security—where defenses break down, why current approaches fall short, and what it will take to build truly resilient AI systems. We’ll explore bleeding-edge AI security research, including MITRE ATLAS, the OWASP LLM Top 10, adversarial defenses, and AI supply chain integrity. We’ll also introduce an AI Security Maturity Model, a framework that helps organizations move from ad-hoc defenses to proactive, scalable security strategies.

Along the way, we’ll highlight key breakthroughs—including Scale AI’s own innovations like J2 (Jailbreaking to Jailbreak), which uses AI to red team AI—and discuss what it takes to stay ahead of threats in an AI-driven world. Whether you're a security professional, researcher, or executive, you’ll walk away with actionable insights to fix the gaps in AI security before attackers do.

Ignore Previous Instructions: Embracing AI Red Teaming

In this talk, we will explore the journey of Red Teaming from its origins to its transformation into AI Red Teaming, highlighting its pivotal role in shaping the future of Large Language Models (LLMs) and beyond. Drawing from my firsthand experiences developing and deploying the largest generative red teaming platform to date, I will share insightful anecdotes and real-world examples. We will explore how adversarial red teaming fortifies AI applications at every layer—protecting platforms, businesses, and consumers. This includes safeguarding the external application interface, reinforcing LLM guardrails, and enhancing the security of the LLMs' internal algorithms. Join me as we uncover the critical importance of adversarial strategies in securing the AI landscape.

Building Bulletproof Products: Embracing AI Red Teaming

In this talk, we’ll trace the evolution of Red Teaming, from its traditional roots to the cutting-edge practice of AI Red Teaming, demonstrating how it’s shaping the future of Large Language Models (LLMs) and other AI technologies. Drawing from my experience leading the development and deployment of the largest generative red teaming platform to date, I’ll share compelling real-world examples and personal insights. We’ll dive into how adversarial red teaming strengthens AI systems across all layers—protecting platforms, businesses, and consumers alike. From securing external application interfaces to fortifying LLM guardrails and improving the internal security of AI algorithms, we’ll explore the critical role of adversarial strategies in safeguarding the evolving AI landscape.

AI Red-Teaming, Hallucinations, & Risk

David Campbell’s presentation at the AI Infra Summit focused on the critical need for AI red teaming to identify vulnerabilities in AI systems before they can be exploited maliciously. He shared examples of AI failures, including instances where AI systems generated dangerous outputs like harmful recipes or illegal instructions.

The Real Last Mile: De-Risking Generative AI in Production

While moving generative AI proofs of concept to production is often considered a key final phase in the journey to achieving results, efficiently assessing and mitigating ongoing security, safety, and performance issues present additional hills to climb. In this round of lightning talks, hear from three innovative companies tackling real-time gen AI risk assessment and mitigation. Through demos and real-world examples, you’ll understand tools and best practices for programmatically addressing LLM application vulnerabilities across the development life cycle.

Domain-Limited General Intelligence: Before Things Go Too Far

Artificial intelligence is advancing faster than our safety frameworks can keep up, and the industry is drifting toward architectures that carry far more risk than we acknowledge. In this talk, David Campbell introduces Domain-Limited General Intelligence, a new conceptual tier that sits between today’s narrow systems and the open-ended ambitions of AGI and ASI. DLGI represents a path to smarter, more capable models that can generalize within defined boundaries without crossing into the dangerous territory of unbounded agency.

Attendees will learn why DLGI may be the safest evolutionary step for AI development, how it differs from traditional alignment strategies, and why unrestrained pushes toward broader generality create avoidable failure modes. David will break down real-world examples of emergent behavior, misaligned optimization, and adversarial dynamics, showing how DLGI offers a practical way to contain these risks.

This session gives practitioners, leaders, and researchers a new mental model for building powerful AI systems while preserving control, predictability, and trust. Before things go too far, we need a better tier of intelligence. This is the case for building it.

Agile + DevOpsDays Des Moines 2025 Sessionize Event

October 2025 Des Moines, Iowa, United States

4th Annual Cyber Governance & Assurance Conference

Lead the AI Red Teaming Workhop and CTF teaching people about how and why AI Red Teaming is important.

September 2025 Doha, Qatar

AI Risk Summit 2025 Sessionize Event

August 2025 Half Moon Bay, California, United States

AI DevSummit 2025 Sessionize Event

May 2025 South San Francisco, California, United States

cackalackycon Sessionize Event

May 2025 Durham, North Carolina, United States

STL TechWeek 2025 Sessionize Event

March 2025 St. Louis, Missouri, United States

SecjuiceCon 2025 Virtual Conference Sessionize Event

March 2025

NVIDIA GTC 2025

Developers attend GTC to build new technical skills, connect with peers, and learn from leaders in their fields. From hands-on training to in-depth technical sessions, GTC provides a unique environment for advancing your expertise, tackling real-world challenges, and staying competitive in a rapidly evolving industry.

Business leaders recognize the significant impact AI is having on industries, from improved customer experiences to product innovation. With a broad range of sessions and networking opportunities, GTC delivers exclusive insights that can help your company thrive in the era of AI.

March 2025 San Jose, California, United States

Apres-Cyber Slopes Summit 2025 Sessionize Event

March 2025 Park City, Utah, United States

DeveloperWeek 2025 Sessionize Event

February 2025 Santa Clara, California, United States

ProductWorld 2025 Sessionize Event

February 2025 Santa Clara, California, United States

Prompt Engineering Conference 2024 Sessionize Event

November 2024

MLOps + Generative AI World 2024 Sessionize Event

November 2024 Austin, Texas, United States

Pacific Hackers Conference 2024 Sessionize Event

November 2024 Mountain View, California, United States

AI Summit Vancouver Sessionize Event

November 2024 Vancouver, Canada

2024 All Day DevOps Sessionize Event

October 2024

Cyber Back to School Sessionize Event

October 2024

AI Infra Summit

David Campbell’s presentation at the AI Infra Summit focused on the critical need for AI red teaming to identify vulnerabilities in AI systems before they can be exploited maliciously. He shared examples of AI failures, including instances where AI systems generated dangerous outputs like harmful recipes or illegal instructions.

September 2024 San Francisco, California, United States

BSides Kraków 2024 Sessionize Event

September 2024 Kraków, Poland

InfoSec Nashville 2024 (Call For Speakers & Workshops) Sessionize Event

September 2024 Nashville, Tennessee, United States

SummerCon

Summercon is one of the oldest hacker conventions, and the longest running such conference in America. It helped set a precedent for more modern “cons” such as H.O.P.E. and DEF CON, although it has remained smaller and more personal. Summercon has been hosted in cities such as Pittsburgh, St. Louis, Atlanta, New York, Washington, D.C., Austin, Las Vegas, and Amsterdam.

July 2024 Brooklyn, New York, United States

AI Risk Summit + CISO Forum Sessionize Event

June 2024 Half Moon Bay, California, United States

David Campbell

Head of AI Security at Scale, AI

Boston, Massachusetts, United States

Actions

Please note that Sessionize is not responsible for the accuracy or validity of the data provided by speakers. If you suspect this profile to be fake or spam, please let us know.

Jump to top