Session
Domain-Limited General Intelligence: Before Things Go Too Far
Artificial intelligence is advancing faster than our safety frameworks can keep pace, and the industry is drifting toward architectures that carry far more risk than we acknowledge. In this talk, David Campbell introduces Domain-Limited General Intelligence (DLGI), a new conceptual tier that sits between today’s narrow systems and the open-ended ambitions of AGI and ASI. DLGI represents a path to smarter, more capable models that can generalize within defined boundaries without crossing into the dangerous territory of unbounded agency.
Attendees will learn why DLGI may be the safest evolutionary step for AI development, how it differs from traditional alignment strategies, and why unrestrained pushes toward broader generality create avoidable failure modes. David will break down real-world examples of emergent behavior, misaligned optimization, and adversarial dynamics, showing how DLGI offers a practical way to contain these risks.
This session gives practitioners, leaders, and researchers a new mental model for building powerful AI systems while preserving control, predictability, and trust. Before things go too far, we need a better tier of intelligence. This is the case for building it.
David Campbell
Head of AI Security at Scale AI
Boston, Massachusetts, United States