Preventing Data Leaks to Shadow AI: Managing Generative AI Apps in Your Organisation

What if your users are already feeding sensitive data into AI tools you do not even know about?

That question triggered a deeper investigation inside a Microsoft 365 environment. The organisation believed its biggest challenge was planning a safe rollout of Microsoft 365 Copilot. But the discovery phase revealed something far more urgent: dozens of unapproved generative AI apps in active use, sensitive documents pasted into public AI chatbots, and no visibility into what data was flowing where. Shadow AI wasn't coming; it was already here.

In this session, we follow the journey from first suspicion to full situational awareness and control. You’ll see how Microsoft Defender for Cloud Apps uncovers which AI tools are in use and who’s using them, providing immediate visibility into Shadow AI. Then, you’ll learn how Microsoft Purview Data Security Posture Management maps sensitive data, flags identity oversharing, and pinpoints high-risk locations.

Finally, we bring it all together, showing how to combine Microsoft Purview, Microsoft Defender, Microsoft Entra, and Microsoft Intune to prevent data leaks and govern AI usage across both Microsoft 365 Copilot and third-party generative AI tools.

What you will learn
1. Discover which generative AI apps are in use and what data they access
2. Block user access to unsanctioned AI apps
3. Block sensitive data from being sent to sanctioned AI apps
4. Govern data sent to AI apps in Microsoft Edge

You’ll leave with a practical, blueprint-aligned model to detect Shadow AI, secure your data, and manage generative AI apps with confidence across your organisation.

Nikki Chapple

Principal Cloud Architect at CloudWay | MVP M365 & Security

London, United Kingdom
