Session

What should your Acceptable and Responsible AI Use Policy Guide look like?

As AI tools like Copilot move from experimentation to everyday use, many organizations are realizing the same thing: having "guidelines" isn't the same as having a usable AI policy.
In this session, I will break down what an effective Acceptable and Responsible AI Use Policy Guide should actually look like: one that employees can understand, leaders can defend, and the organization can realistically enforce.
Rather than focusing on abstract principles, this session takes a practical, scenario‑driven approach.

We’ll explore:
- What belongs in an AI use policy, and what doesn't
- How to balance innovation with risk, compliance, and trust
- The difference between "responsible AI principles" and day-to-day acceptable use guidance
- How to address gray areas like data sensitivity, human oversight, and AI-generated content ownership

You’ll also see how successful organizations structure their policies to guide behavior without slowing teams down, and how these policies evolve as AI capabilities mature.

Whether you’re just starting your AI governance journey or refining an existing policy, you’ll leave with a clear framework and checklist you can adapt for your own organization without starting from a blank page.

Ideal for: IT leaders, security and compliance professionals, architects, Power Platform and M365 practitioners, and anyone responsible for governing AI use at scale.

AJ Ansari

Microsoft MVP (Copilot Studio & Business Central), Digital Media Publisher, Podcast Host, C-Suite Executive

Houston, Texas, United States
