Irresponsible AI
Join us as we unleash the chaos of AI agents and tackle the challenge of embedding ethics, diversity, and control into conversational AI platforms! This session dives deep into irresponsible AI agents, showcasing jaw-dropping examples of model failures, adversarial jailbreaks, and ethical landmines that can wreck trust and compliance.
We’ll explore Microsoft’s cutting-edge Responsible AI tools, including AI content filters, red-teaming strategies, and the Responsible AI Dashboard, with live demos of AI agents behaving both responsibly and irresponsibly in ways you won’t believe.
🔥 What you’ll walk away with:
✅ How to spot and shut down AI risks before they explode
✅ Deploying AI Content Safety tools to prevent abuse & misinformation
✅ The secret sauce to making AI agents that people (and regulators) trust
Get ready for a session packed with hype, high-stakes AI drama, and must-know strategies to keep your AI on the right side of history. Don’t just build AI—build AI that matters!

Rory Preddy
Microsoft Principal Cloud Advocate
Johannesburg, South Africa