Session

How to Break an AI (Before It Breaks You): PyRIT your Red Teaming Agent

AI systems are powerful—but power without safety is a risk. In this session, we’ll deep dive into PyRIT a new AI Red Teaming Agent, a cutting-edge tool designed to simulate adversarial attacks on generative AI systems. You’ll learn how big IT operationalizes red teaming at scale using curated prompts, PyRIT (open source) attack strategies, and automated safety scans to uncover vulnerabilities before they reach production.

We’ll explore:

- What makes AI red teaming different from traditional security testing
- How the Red Teaming Agent works inside Azure AI Foundry and other LLM
- Real-world examples of jailbreaks, prompt injections, and harmful outputs
- How to build your own red teaming workflows using Python

Whether you're a security engineer, AI developer, or just curious about how to stress-test your models, this session will equip you with practical insights and tools to make your AI safer, more trustworthy, and resilient.

Taswar Bhatti

Microsoft Lead AI & Security Cloud Solutions Architect

Istanbul, Turkey

Actions

Please note that Sessionize is not responsible for the accuracy or validity of the data provided by speakers. If you suspect this profile to be fake or spam, please let us know.

Jump to top