Speaker

Aasne Holtklimpen

Senior Architect - Microsoft AI and Compliance Lead | Microsoft Copilot MVP | Microsoft MCT

Kristiansand, Norway

Åsne Holtklimpen bridges the gap between technical operations and business strategy.
As a Senior Architect at Crayon (soon to be Software One), she leads efforts to help enterprises mitigate risk while deploying advanced cloud solutions.

Recognized as one of the 50 Top Women in Technology in 2022, Åsne combines deep technical expertise with a strong community presence.
Her expertise centers on the architectural realities of the Microsoft ecosystem, specifically Microsoft Entra, Purview, and M365 collaboration tools. Åsne moves beyond the hype of AI to address the critical governance required for Microsoft Copilot, focusing on data hygiene, lifecycle management, and access control.

A recognized voice in the global community and a Microsoft MVP, Åsne ensures that organizations can leverage intelligent tools safely and effectively, transforming complex security requirements into actionable business value.

Area of Expertise

  • Information & Communications Technology

Topics

  • Microsoft Purview
  • M365 Copilot
  • Microsoft 365
  • Microsoft Teams
  • Microsoft SharePoint
  • Microsoft Viva
  • Copilot for M365
  • Microsoft Security
  • Data Security
  • Insider Risk
  • Data Security and Governance
  • Data Security and Privacy in M365
  • Identity and Access Management
  • Power Platform Governance

Urgent access without security compromise!

Imagine this: Your Lead Architect is on holiday abroad. A major incident hits your home base, requiring their immediate intervention on a highly sensitive project. The clock is ticking. Do you compromise your security standards to let them in, or keep the door locked and risk the project?

You don't have to choose.

Join Åsne Holtklimpen and Per-Torben Sørensen as they reveal how to solve the "Security vs. Availability" paradox. Learn how to leverage the integration of Microsoft Entra and Microsoft Purview to create a dynamic access solution that adapts to crises without lowering the drawbridge.

Key takeaways include:

- Dynamic Geo-Filtering: Configuring Conditional Access to adapt to user location without exposing the environment.
- Label-Based Logic: Utilizing Sensitivity Labels as the trigger for granular access controls.
- Global Governance: Implementing Microsoft Information Protection (MIP) to secure data assets everywhere, from the corporate office to a hotel Wi-Fi.
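
The "dynamic access" model the session describes can be sketched as a single decision function that weighs sign-in location, data sensitivity, and risk. This is a conceptual illustration only, not the Entra/Purview API: the trusted countries, label names, and outcomes are assumptions made for the example.

```python
from dataclasses import dataclass

# Illustrative sketch: Entra Conditional Access and Purview evaluate these
# signals natively. All values here (countries, labels, risk levels) are
# assumptions chosen to show the decision logic, not real policy settings.

TRUSTED_COUNTRIES = {"NO", "SE", "DK"}
LABEL_RANK = {"Public": 0, "Internal": 1, "Confidential": 2, "Highly Confidential": 3}

@dataclass
class SignIn:
    country: str   # resolved from the sign-in IP
    risk: str      # "low" | "medium" | "high" (e.g. an identity-risk signal)
    label: str     # sensitivity label on the requested resource

def decide(s: SignIn) -> str:
    """Return 'allow', 'step_up' (require MFA + compliant device), or 'block'."""
    if s.risk == "high":
        return "block"
    rank = LABEL_RANK.get(s.label, LABEL_RANK["Highly Confidential"])  # fail closed
    if s.country not in TRUSTED_COUNTRIES:
        # Abroad: allow low-sensitivity data, step up for Confidential and
        # above instead of blocking the emergency intervention outright.
        return "step_up" if rank >= LABEL_RANK["Confidential"] else "allow"
    return "step_up" if rank >= LABEL_RANK["Highly Confidential"] else "allow"
```

The point of the sketch is the third outcome: instead of a binary allow/deny, the architect abroad gets in after a step-up challenge, so the drawbridge never has to come all the way down.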

Wild ideas, safe agents

Innovation needs freedom. Enterprise AI needs control. The trick is getting both without strangling creativity or opening the door to chaos.

This session shows you how to let your Copilot Studio agents explore bold ideas while keeping them grounded in safe, predictable behavior. We’ll walk through the full lifecycle of an agent, from early experimentation to disciplined, compliant execution, and show you how to turn “wild ideas” into reliable business processes.

You’ll learn how to design conversational agents with built‑in guardrails, how to set up “electric fences” that keep users on safe paths, and how to distinguish a creative hallucination from a useful prediction. We’ll also look at Agent 365 and Entra Agent ID to understand how identity, access, and authorization shape what your agents can (and cannot) do.

Come curious. Leave with the patterns to build agents that are creative, safe, and enterprise‑ready.

Stop the data chaos before AI amplifies it

AI adoption is accelerating, but most organizations are building on top of a data estate they don’t fully understand. Before Copilot for Microsoft 365 can deliver value, you have to confront the uncomfortable truth: years of unmanaged, sensitive data‑at‑rest are now a business‑critical risk.

This session bridges strategy and execution. We’ll show how Microsoft Purview and Defender for Cloud Apps can serve as more than tools — they become the backbone of a repeatable, scalable governance framework. You’ll learn how to identify data hotspots, quantify the cost and risk of inaction, and build a governance lifecycle that keeps SharePoint and Dataverse under control long after Copilot goes live.

If you want AI to be safe, predictable, and cost‑effective, you must secure the source first.

Assumed Knowledge
- Familiarity with Microsoft Purview and Defender for Cloud Apps.
- Understanding of SharePoint and Dataverse data structures.
- Basic awareness of AI‑related data risks and governance concepts.

Practical Takeaways
- The Business Case: Why data discovery is non‑negotiable in the age of AI.
- Governance Lifecycle: How to build a repeatable framework for SharePoint and Dataverse.
- DSPM ROI: Understanding the cost, impact, and return on Data Security Posture Management.

Out of Scope
- Copilot deployment steps or feature demos.
- Sensitivity label basics or introductory governance theory.
- Non‑Microsoft governance or DSPM tooling.

Purview strategies for when your data has no boundaries

Some organisations have a “sharing culture.” Others have a “sharing problem.” If your files have been circulating more than office gossip, this session is your intervention.

We take a candid look at what really happens when data grows up without boundaries: accidental access, forgotten permissions, and sensitive files living rent‑free in places they were never meant to be.

Using Microsoft Purview as the backbone, we’ll show you how to stop the chaos with smarter discovery, automated classification and guardrails that actually work, even when humans don’t.
Expect a fast, honest session that cuts through wishful thinking and focuses on actionable tactics. No scare tactics, no theory. Just real problems and their real solutions.

Key takeaways
- How to spot oversharing patterns before they become security incidents
- How to use Purview’s advanced discovery to expose sensitive‑data hotspots
- How to automate classification so your data stops behaving like an unsupervised intern
- How to establish healthy access boundaries without fighting endless permission wars
- A practical detox plan to keep your data clean, controlled and calm, long after the session ends
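
The first two takeaways boil down to one scan: find items shared with overly broad audiences and rank them by sensitivity. The sketch below is illustrative only; the input shape and grantee names are assumptions, and in practice this data would come from Purview or Microsoft Graph sharing reports.

```python
# Hypothetical oversharing scan. BROAD_GRANTEES and the permission-record
# shape are assumptions for illustration, not a real Purview/Graph schema.

BROAD_GRANTEES = {"Everyone", "Everyone except external users", "Anonymous link"}

SEVERITY = {"Highly Confidential": 0, "Confidential": 1, "Internal": 2, "Public": 3}

def find_oversharing(permissions):
    """permissions: list of dicts with 'site', 'grantee', 'sensitivity'.
    Returns broadly shared items, most sensitive first."""
    hotspots = [p for p in permissions if p["grantee"] in BROAD_GRANTEES]
    return sorted(hotspots, key=lambda p: SEVERITY.get(p["sensitivity"], 0))
```

Sorting unlabeled items to the top (the `0` default) is deliberate: unknown sensitivity is the riskiest bucket in a detox, not the safest.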

Tactical blueprint for AI‑ready data

AI doesn’t fail because of bad prompts; it fails because your data isn’t ready. For most organizations, the biggest threat to a safe Copilot rollout is years of unchecked oversharing buried inside SharePoint and OneDrive. If your permissions are a mess, Copilot will surface it instantly.

This session gives you a clear, tactical blueprint for building a secure data foundation before AI starts indexing anything. We’ll show how Microsoft Purview and Defender for Cloud Apps work together to expose sensitive data piles, validate whether “private” is actually private, and enforce labeling at scale. No theory. No governance poetry. Just the structural work your environment needs before Copilot or other AI-driven tools arrive.

If you want AI to respect your boundaries, you need to build boundaries worth respecting.

Stop the data chaos before AI amplifies it

Copilot doesn’t create new risks; it exposes the ones you’ve been ignoring. The moment it starts indexing, every overshared folder, every forgotten SharePoint site, and every unlabeled file becomes a spotlight moment. If your environment is full of sensitive “dark data,” Copilot will surface it faster than any attacker ever could.

This session is your Pre‑Copilot Clean‑up crash kit. We skip the theory and go straight to the work: using Microsoft Purview and Defender for Cloud Apps to pinpoint data hotspots, prioritize what actually matters, and shut down exposure paths before Copilot shines a light on them. Expect fast, surgical tactics you can deploy the second you get back to your tenant.

You’ll leave with a repeatable, high‑impact playbook for identifying, quarantining, and labeling sensitive information, turning your Copilot rollout from a liability into a controlled, predictable win.

Key takeaways:
- The Triage List: The data types that will blow up your Copilot rollout if you don’t fix them first.
- Search & Destroy: How to use Content Search to uncover and remediate overshared or high‑risk locations at speed.
- Automated Guardrails: Auto‑labeling and policy‑based protections that keep cleaning up your environment long after you’ve logged off.
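
Auto-labeling is, at its core, pattern matching that promotes content to a stronger label. The sketch below illustrates the idea only: real Purview auto-labeling uses built-in sensitive information types and trainable classifiers, and the regexes and label names here are assumptions for the example.

```python
import re

# Illustrative auto-labeling logic. The patterns are deliberately crude
# stand-ins for Purview's sensitive information types; do not use them
# as real detectors.

PATTERNS = [
    # 16-digit card-like number, optionally grouped by spaces or hyphens
    (re.compile(r"\b\d{4}[ -]?\d{4}[ -]?\d{4}[ -]?\d{4}\b"), "Highly Confidential"),
    # 11 consecutive digits (e.g. the length of a Norwegian national ID)
    (re.compile(r"\b\d{11}\b"), "Confidential"),
]

def suggest_label(text: str, default: str = "Internal") -> str:
    """Return the first (strongest) matching label, else the default."""
    for pattern, label in PATTERNS:
        if pattern.search(text):
            return label
    return default
```

Ordering patterns from strongest to weakest label means a document that matches several detectors gets the most restrictive label, which is the "guardrail" behavior you want from automation.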

Governance and Operational Control in the Era of AI Agents

The integration of automated AI agents and Microsoft Copilot represents a significant shift in operational efficiency. However, as AI tools increasingly read, interpret, and act upon data from Teams, Outlook, and SharePoint, organizations face new challenges regarding data control and decision-making authority.

This session addresses the governance implications of deploying intelligent assistants as "digital colleagues." We will examine the critical balance between efficiency and oversight, focusing on:

- Data Access Boundaries: Managing what Copilot can access and interpret.
- Automated Decision Making: Establishing guardrails for AI-driven actions.
- Information Lifecycle: Ensuring data hygiene in connected M365 workloads.

Join this session to learn frameworks for maintaining organizational control while maximizing the productivity benefits of Microsoft Copilot.

Data Readiness and Security Prerequisites for Microsoft Copilot

Assigning Copilot licenses is easy. Making sure Copilot doesn’t expose the wrong data is not. Before AI can safely operate in your tenant, you need a data estate that won’t betray you the moment Copilot starts indexing.

This session cuts through the noise and focuses on the architectural and compliance groundwork required for a secure Copilot deployment. We’ll walk through how to classify and protect sensitive data with Microsoft Purview, how to eliminate years of permission “technical debt,” and how to align AI usage with GDPR and internal security policies. We’ll also cover the human side — how to train users to interact with AI safely without slowing them down.

You’ll leave with a clear, actionable plan for preparing your data environment so Copilot delivers value without creating new risks.

Assumed Knowledge
- Familiarity with Microsoft Purview and sensitivity labeling concepts.
- Understanding of SharePoint/OneDrive permission models and common misconfigurations.
- Basic awareness of GDPR and internal compliance frameworks.
- Experience with organizational change or user‑training programs is helpful.

Practical Takeaways
- Data Classification: How to use Microsoft Purview to label and protect sensitive assets before Copilot touches them.
- Access Management: How to identify and remediate permission debt that leads to oversharing.
- Compliance Strategy: Aligning AI usage with GDPR and internal security policies.
- User Adoption: Training patterns that help users work safely with AI without slowing innovation.

Out of Scope
- Copilot licensing, setup, or feature demos.
- Sensitivity label basics or introductory governance theory.
- Non‑Microsoft compliance or DLP tooling.

How to Architect Zero Trust AI

You’ve deployed M365 Copilot. Licenses are assigned. Excitement is high, and so is the risk. Once the “toy phase” ends, the real question hits: how do you operationalize AI inside a verified Zero Trust architecture?

This session skips the basics and goes straight to the engineering work. We’ll show how to automate label application using trainable classifiers so Copilot only touches what it should. We’ll break down the Semantic Index and how to influence what it surfaces. And we’ll go deeper into the identity layer, examining Agent 365 and Microsoft Entra Agent ID to understand how AI agents authenticate, authorize, and request access in a Zero Trust world.

Finally, we’ll look at how to implement Just‑in‑Time access for AI agents so elevated permissions aren’t left lying around for Copilot (or anyone else) to misuse.

If you want Copilot to behave like a Zero Trust citizen, you need to architect it that way. This session shows you how.

Target Audience: Enterprise Architects & Security Leads who are done with the basics and ready to harden their AI environment.

Shadow AI: Find It. Control It.

Your users bypassed your Copilot restrictions before lunch on day one. They’re already using Shadow AI: unapproved LLMs, browser‑based PDF analyzers, and whatever tool TikTok recommended this week. Blocking them kills productivity; ignoring them creates a data‑leak buffet.

This session gives you the tactical patterns for Safe AI Adoption in the real world. We’ll show how to use Defender for Cloud Apps to expose Shadow AI usage, how Endpoint DLP can sanitize sensitive data before it leaves the device, and how to govern “Bring Your Own Model” scenarios in Azure without becoming the Department of No.

Shadow AI isn’t going away. Time to stop fighting the tide and start engineering the dam.

Solving the Security-Availability Paradox

In cybersecurity, airtight access controls often collide with real‑world emergencies. When key personnel are off‑grid, abroad, or simply unreachable during a critical incident, organizations face an impossible choice: weaken security for speed, or risk operational failure.

This session introduces a third path: context‑aware resilience.

Åsne Holtklimpen and Per‑Torben Sørensen will break down how Microsoft Entra and Microsoft Purview can work together to create a modern, data‑centric “Break Glass” model. Instead of relying on static perimeters or brittle emergency accounts, you’ll learn how to design access that adapts to user context, data sensitivity, and real‑time risk signals.

If you want emergency access that’s fast and secure, this is the architecture that makes it possible.

Workplace Ninjas Norway 2026 Sessionize Event Upcoming

May 2026 Oslo, Norway

NIC Rebel Edition Sessionize Event

October 2025 Oslo, Norway

MVP-Dagen 2025 Sessionize Event

October 2025 Oslo, Norway

NIC Empower 2024 Sessionize Event

November 2024 Oslo, Norway

MVP-Dagen(e) 2024 Sessionize Event

October 2024 Oslo, Norway
