Cynthia Akiotu
Microsoft MVP | Cybersecurity Architect | Global Speaker | Bridging AI Security for a Zero Trust Future | Accenture
Manchester, United Kingdom
Cynthia Akiotu is a Cybersecurity Architect and Identity Specialist with deep expertise in Identity & Access Management (IAM), data governance, and Zero Trust implementation. She is a Security Delivery Associate Manager at Accenture.
With experience across diverse sectors, Cynthia has led projects on secure cloud adoption, privileged access management, insider risk mitigation, and identity governance for AI-driven environments. She holds several industry certifications including Microsoft Certified Identity & Access Administrator and Microsoft Certified Information Security Administrator. Her work centres on securing environments where AI agents, autonomous models, and machine identities are rapidly expanding the attack surface.
Cynthia is a regular speaker at global conferences where she delivers insights on AI governance, identity observability, and emerging cybersecurity risks. Her thought leadership extends to academic and industry publications, contributing to works on AI-enabled regulatory compliance and digital skills development in security. Beyond her professional work, Cynthia volunteers her time as Cyberhero Coach Cynthia, delivering cybersecurity awareness to children and promoting safe digital habits in her community. Driven by a commitment to secure innovation, she champions identity governance as a foundation for trust in AI-native ecosystems.
Area of Expertise
Topics
AI Is Listening: Governing Microsoft Security Copilot with Zero Trust
Microsoft Security Copilot is powerful, but with great access comes great responsibility.
As AI becomes deeply embedded in security operations, organisations face a new challenge: how do we empower AI without expanding risk, privilege, or exposure?
In this session, I will explore how to secure and govern Microsoft Security Copilot the right way, using Zero Trust principles as the foundation. Drawing from real-world field experience, the talk breaks down the practical guardrails every organisation should implement before enabling Copilot at scale.
You’ll learn how to:
Secure identities and enforce least privilege for Copilot access
Control device trust and session risk
Reduce exposure from plugins, connectors, and third-party data sources
Translate Zero Trust from theory into Copilot-ready configurations
Avoid common permission and governance missteps
This session brings clarity to an emerging governance gap: AI that can reason over your security data must itself be secured, governed, and monitored.
Designed for security architects, analysts, cloud engineers, and AI practitioners, this talk equips you with actionable guidance to deploy Security Copilot confidently, responsibly, and at scale.
AI is listening.
Let’s make sure it’s only hearing what it should.
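The guardrails above can be sketched in code. Below is a minimal illustration of a least-privilege Conditional Access policy scoped to Security Copilot access: field names follow the Microsoft Graph conditionalAccessPolicy schema, but the application and group IDs are placeholders, not real identifiers.

```python
# Minimal sketch of a Conditional Access policy payload that scopes
# Security Copilot access to a vetted operator group, requires a
# compliant device, and enforces MFA. Field names follow the Microsoft
# Graph conditionalAccessPolicy schema; the IDs below are placeholders.

SECURITY_COPILOT_APP_ID = "00000000-0000-0000-0000-000000000000"  # placeholder
COPILOT_OPERATORS_GROUP = "11111111-1111-1111-1111-111111111111"  # placeholder

def build_copilot_access_policy() -> dict:
    """Return a least-privilege Conditional Access policy for Copilot access."""
    return {
        "displayName": "Require MFA and compliant device for Security Copilot",
        "state": "enabledForReportingButNotEnforced",  # start in report-only mode
        "conditions": {
            "applications": {"includeApplications": [SECURITY_COPILOT_APP_ID]},
            "users": {"includeGroups": [COPILOT_OPERATORS_GROUP]},
        },
        "grantControls": {
            "operator": "AND",
            "builtInControls": ["mfa", "compliantDevice"],
        },
    }
```

Starting in report-only mode is deliberate: it lets you observe the policy's impact on real sign-ins before enforcing it at scale.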
Securing Access, Smarter: How AI Is Rewriting Conditional Access Governance
Conditional Access policies are central to enforcing Zero Trust principles, yet they often become fragmented and difficult to maintain as organisations scale. Over time, policies may conflict, become redundant, or introduce unnecessary friction for users, all while creating blind spots for security teams.
In this session, I’ll introduce the Conditional Access Optimization Agent, a new capability within Microsoft Security Copilot that leverages AI to support and enhance human decision-making. Rather than replacing IT professionals, this tool acts as a governance ally — providing contextual recommendations, detecting policy overlaps, and simplifying access logic without compromising security.
Attendees will learn:
• The common challenges in scaling and maintaining Conditional Access policies
• How the Optimization Agent helps detect risky configurations and recommends improvements
• A practical demo using natural language queries to evaluate policy logic
• Real-world examples of implementation across healthcare, education, and financial services
• Best practices for integrating AI into your identity governance lifecycle
This session is ideal for security analysts, identity architects, and IT leaders seeking to modernize access governance with intelligent tooling that complements human oversight.
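The kind of policy-overlap detection described above can be illustrated with a simple check: two policies whose user and application scopes intersect while enforcing the same controls are candidates for consolidation. This is a toy sketch of the idea only, not the Optimization Agent's actual logic:

```python
# Toy sketch of Conditional Access policy-overlap detection: two policies
# are flagged as overlapping when their user scopes and application scopes
# both intersect and they enforce the same grant controls. Illustrative
# only; the Optimization Agent's real analysis is far richer.

def policies_overlap(a: dict, b: dict) -> bool:
    shared_users = set(a["users"]) & set(b["users"])
    shared_apps = set(a["apps"]) & set(b["apps"])
    same_controls = set(a["controls"]) == set(b["controls"])
    return bool(shared_users and shared_apps and same_controls)

def find_overlaps(policies: list[dict]) -> list[tuple[str, str]]:
    """Return pairs of policy names that likely duplicate each other."""
    hits = []
    for i, a in enumerate(policies):
        for b in policies[i + 1:]:
            if policies_overlap(a, b):
                hits.append((a["name"], b["name"]))
    return hits

demo = [
    {"name": "MFA-Finance", "users": {"finance"}, "apps": {"erp"}, "controls": {"mfa"}},
    {"name": "MFA-All", "users": {"finance", "hr"}, "apps": {"erp", "mail"}, "controls": {"mfa"}},
    {"name": "Block-Legacy", "users": {"hr"}, "apps": {"mail"}, "controls": {"block"}},
]
print(find_overlaps(demo))  # [('MFA-Finance', 'MFA-All')]
```

Even this naive pairwise comparison surfaces the governance problem: redundant policies accumulate silently, and tooling that detects them is what keeps access logic maintainable at scale.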
From Productivity to Protection: Governing Microsoft 365 Copilot the Right Way
Microsoft 365 Copilot is already changing the way we work — turning apps like Word, Excel, and Teams into AI assistants. But here’s the challenge: if Copilot isn’t governed with the right guardrails, it can just as easily overshare sensitive data or create compliance headaches.
This session looks at what it takes to keep Copilot both useful and secure. From identity and access policies to data protection and collaboration settings to decisions about public web access, discover how to avoid common pitfalls and build lasting trust in your deployment.
This isn’t theory or product marketing. It’s a practical conversation about how to unlock Copilot’s productivity while staying in control of your data.
Key Takeaways:
Why Copilot needs governance to deliver safe productivity gains.
Practical steps to prepare and manage your Microsoft 365 environment with Copilot.
How to balance AI innovation with security, compliance, and trust.
Defending at AI Speed: Lessons from Microsoft Security Copilot
Attackers are already using AI. The question is: are you?
AI has permanently changed the cyber battlefield. Security teams are drowning in alerts, risks, and compliance pressures, but AI offers a chance to flip the balance of power.
In this session, I will show how defenders can harness AI responsibly, using Microsoft Security Copilot as a real-world case study. We’ll explore:
• How AI is changing the role of the security analyst
• Practical lessons on using AI for faster investigations & threat detection
• The importance of governance and guardrails to keep control
• What leaders must know about trusting AI in defense
This talk blends strategy and practice, giving attendees a clear sense of what “defending at AI speed” means today, and what’s next.
⸻
Target Audience
• Cybersecurity professionals (SOC teams, security managers)
• IT leaders & decision makers exploring AI adoption
• Students, veterans, and career changers looking at the future of cyber defense
⸻
Why Listen
Because AI is already being weaponised by attackers, security teams can't afford to be left behind. I bring hands-on experience with Microsoft Security Copilot, Entra, and Purview, plus global speaking experience, to translate AI innovation into practical steps that organisations of all sizes can use.
Key Takeaways
• A strategic view of how AI is transforming cyber defense
• Real-world lessons from Microsoft Security Copilot
• Why governance is critical when AI assists in defense
• Clear insights for leaders, analysts, and new entrants to cybersecurity
Controlling the Controllers: Mastering Privileged Access in Azure
In any cloud environment, privileged accounts are among the most powerful — and most sought after by attackers. Whether exploited early or later in an attack chain, compromised elevated access can lead to devastating consequences.
This session walks you through how to regain control over privileged roles in Microsoft Azure by applying Zero Trust principles and leveraging tools like Microsoft Entra Privileged Identity Management (PIM), Just-in-Time access, and Role-Based Access Control (RBAC). You’ll learn how to design secure access strategies, monitor activity, enforce governance, and reduce risk without slowing down your teams.
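The Just-in-Time pattern can be illustrated with the request shape PIM uses for role activation: instead of standing assignments, an operator self-activates an eligible role for a bounded window with a recorded justification. A sketch modelled on Microsoft Graph's unifiedRoleAssignmentScheduleRequest resource; the IDs here are placeholders, and a real request is POSTed to Graph and subject to whatever approval and MFA requirements are configured on the role.

```python
from datetime import datetime, timezone

# Sketch of a Just-in-Time role activation request, modelled on the
# Microsoft Graph unifiedRoleAssignmentScheduleRequest resource used by
# PIM. IDs are placeholders; a real request goes through Graph and is
# gated by the approval and MFA settings configured on the role.

def build_jit_activation(principal_id: str, role_definition_id: str,
                         justification: str, hours: int = 8) -> dict:
    """Return a time-bounded self-activation request for an eligible role."""
    return {
        "action": "selfActivate",
        "principalId": principal_id,
        "roleDefinitionId": role_definition_id,
        "directoryScopeId": "/",  # tenant-wide scope
        "justification": justification,
        "scheduleInfo": {
            "startDateTime": datetime.now(timezone.utc).isoformat(),
            "expiration": {"type": "afterDuration", "duration": f"PT{hours}H"},
        },
    }

req = build_jit_activation(
    "placeholder-principal-id",
    "placeholder-role-definition-id",
    "Investigating incident INC-1234",
    hours=4,
)
print(req["scheduleInfo"]["expiration"]["duration"])  # PT4H
```

The point of the shape is auditability: every elevation carries who, which role, why, and for how long, which is exactly the evidence trail standing admin assignments never produce.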
Who’s Accessing What? Data Security & Identity Visibility in Santa’s AI Workshop
It’s the season of sharing, but when it comes to data, not everything should end up under the tree! As AI tools multiply faster than Santa’s elves on overtime, keeping track of who (or what) can access sensitive information has never been trickier.
Traditional access controls and DLP were built for humans, yet today’s AI assistants and “agentic” models can pull, remix, and spread data faster than a North Pole rumour.
In this festive session, I will unwrap how to stay in control by combining Data Security Posture Management (DSPM), modern DLP, and identity visibility to ensure your AI elves only touch the right toys. You will learn how to:
🎁 Discover where your critical data truly lives
🧊 Classify and label it effectively
🎅 Apply least privilege and Zero Trust principles that protect both people and AI agents
You will leave with a toolkit to keep your environment merry, bright, and securely auditable, empowering innovation while keeping sensitive assets off the naughty list.
AI Is Listening — But Who’s Watching? Securing Microsoft Security Copilot the Right Way
Microsoft Security Copilot is powerful, but with great access comes great responsibility.
In this session, I will walk through the critical guardrails that every organisation should apply before deploying Copilot. Drawing from Zero Trust principles, I will explore real-world steps to secure identities, limit privilege, protect devices, and reduce exposure to third-party plugins and data sources.
This session brings clarity to an emerging challenge: how do we empower AI without compromising security or compliance?
Whether you’re a security architect, analyst, or AI enthusiast, you’ll leave with practical takeaways:
• How to translate Zero Trust into Security Copilot-ready configurations
• Common missteps when assigning Security Copilot permissions
• Governance practices for responsible AI use
• A new mindset: design for trust, assume breach, verify always
This talk is grounded in field experience, built to inform and inspire a global audience of practitioners working at the intersection of cloud security, AI, and governance.
AI is listening. Let’s make sure it’s only hearing what it should.
This session will be in video format.
Agent Sprawl Is the New Shadow IT: Securing AI Agents with Microsoft Entra
As organisations rapidly adopt AI agents — from Copilot-style assistants to autonomous workflows — many are deploying them faster than they can govern them. The result is agent sprawl: over-privileged, poorly understood AI agents operating without clear ownership, lifecycle controls, or security oversight.
In this session, we explore why AI agents introduce security challenges that traditional application and user identity models don’t fully address. We’ll examine real enterprise scenarios involving assistive agents, autonomous agents, and agent users, and highlight risks such as permission creep, prompt injection, and lack of auditability.
The session then shows how Microsoft Entra extends identity, conditional access, governance, and Zero Trust principles to AI agents, helping organisations regain visibility and control before agent sprawl becomes the next shadow IT problem.
Attendees will leave with a practical understanding of how to think about AI agent security today — even as the technology continues to evolve.
Identity Is the New Attack Surface: Securing AI Agents, Models, and Data Pipelines in the AI-Native
In AI-native environments, identity isn’t just about people; it’s about AI agents, models, and automated systems accessing sensitive data, APIs, and cloud resources. Traditional human-centric IAM models don’t cover the dynamic, agent-driven interactions now happening inside AI pipelines.
This talk challenges security leaders, IAM architects, and DevSecOps practitioners to rethink identity and data security as AI shifts from tool to autonomous actor.
I will cover:
How AI agents, fine-tuned models, and autonomous workflows create new identity risks
Machine-to-machine identity management in AI-native environments
Preventing privilege creep, identity sprawl, and shadow access in AI-generated pipelines
Applying Zero Trust and Identity Governance to AI agents and data flows
How to embed Data Access Governance and JIT identity controls in your AI SDLC
Who Should Attend:
IAM & Identity Governance Professionals — to understand AI-driven identity risks
Data Security & Compliance Leaders — to learn how AI-native pipelines impact data exposure
DevSecOps Engineers & Architects — to embed identity and access controls in AI workflows
AI Practitioners & ML Engineers — to grasp the security implications of model access and agent permissions
Risk & Governance Teams — to align AI identity governance with regulatory compliance and Zero Trust strategies
Microsoft 365 UK (M365UK) User Group
CyberSIPs Doncaster 25.2 - The Future with AI
wedoAI 2025
Conf42 Observability 2025
I gave a keynote at the conference titled “Access is the New Breach: Why Observability Must Start with Identity”.
International webinar on Artificial Intelligence, Big Data and Machine Learning
I gave a keynote titled “The Use of AI and Cybersecurity: Where do we draw the line?”.