
How to NOT make the headlines! Keeping your data out of public AI tools

Recent news highlights a recurring risk: sensitive personal data accidentally leaking into public AI tools. The question for organizations is no longer whether employees will use AI, but how to enable them to do so safely. This session explores the technical and human barriers required to keep critical data within your organizational boundaries. It is packed with demos showing how to implement the controls that keep your data safe, and what employees experience when using AI tools under those controls.

Key Takeaways
Technical Guardrails: Using Microsoft Purview and Microsoft Defender for Cloud Apps to block sensitive data uploads to untrusted LLMs.
The Human Element: Strategies for educating employees on data placement, specifically why certain information belongs in Microsoft 365 Copilot but not in public AI applications.
Balanced Governance: Maintaining data control without hindering innovation.

Edine Olijve-Watkinson

Microsoft 365 MVP - AI Consultant at InSpark

Hoorn, The Netherlands
