Session

Red Teaming AI: How to Stress-Test LLM-Integrated Apps Like an Attacker

It’s not enough to ask whether your LLM app works in production. You need to understand how it fails under adversarial pressure. In this talk, we’ll dive into red teaming for Gen AI systems: adversarial prompts, model behavior probing, jailbreaks, and novel evasion strategies that mimic real-world threat actors. You’ll learn how to build an AI-specific adversarial testing playbook, simulate misuse scenarios, and embed red teaming into your SDLC. LLMs are unpredictable, but they can be systematically evaluated. We’ll explore how to make AI apps testable, repeatable, and secure by design.

Target audience:
- Application security engineers and red teamers
- AI/ML engineers integrating LLMs into apps
- DevSecOps teams building Gen AI pipelines
- Security architects looking to operationalize AI security
- Developers and technical product leads responsible for AI features

Nnenna Ndukwe

Principal Developer Advocate at Qodo AI

Boston, Massachusetts, United States
