AI Red Teaming Room

Step into the AI Red Teaming Room and join experts from Scale AI for an interactive, hands-on experience where you’ll get to play the role of an adversary. In this session, you won’t just learn about AI vulnerabilities — you’ll exploit them. Engage directly in guided exercises designed to expose weaknesses in language models and other AI systems. Try your hand at crafting adversarial prompts to manipulate model behavior, bypass safeguards, and trigger unintended outputs.

Whether you're a security professional, AI researcher, policy expert, or just curious about how AI can go wrong, this is your chance to explore the limits of today's AI systems in a safe, controlled environment. Alongside the red-teaming challenges, you'll learn how these same systems can be defended, evaluated, and improved.

No prior experience with red teaming required — just bring your curiosity. Take 15–20 minutes to stop by, test your skills, and walk away with a deeper understanding of both the power and the fragility of modern AI.

David Campbell

Head of AI Security at Scale AI

Boston, Massachusetts, United States
