When AI Talks to Your Data: Security Risks and Hidden Access

AI agents that interact with enterprise environments often run using the permissions of the user or service account that invokes them. While this improves productivity, it can also expose risks that many teams underestimate.

AI makes it easy to run commands, explore configurations, and inspect resources far faster than a human normally would. That speed can surface credentials, connection strings, or access paths that were never intended to be visible.

In this session we explore how things can go wrong in environments built on Microsoft Azure using services such as Azure OpenAI Service, Azure Storage, Azure Key Vault, and Azure SQL Database.

This session is demo-heavy and focuses on real scenarios. We will show how AI can help identify common misconfigurations and security gaps, and how those discoveries can lead users to systems or data they did not even realize they could access.

Through live demonstrations we will explore:

- Discovering credentials or secrets stored in unexpected places

- Exploring storage and configuration files at scale

- Revealing hidden connections to databases or services

- Identifying access paths that expose higher-privileged accounts
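The first two demo themes rest on one idea: pattern matching at machine speed. As a minimal sketch (the patterns, names, and sample string below are illustrative, not the session's actual demo code; real secret scanners use far larger rule sets), a few lines of Python can sweep configuration text for strings that look like Azure storage keys or SQL passwords:

```python
import re

# Illustrative detection rules (assumed for this sketch, not exhaustive).
PATTERNS = {
    "storage_account_key": re.compile(r"AccountKey=[A-Za-z0-9+/=]{20,}"),
    "sql_password": re.compile(r"Password=[^;\s]+", re.IGNORECASE),
}

def scan(text: str) -> list[tuple[str, str]]:
    """Return (rule_name, matched_text) pairs for anything that looks like a secret."""
    hits = []
    for name, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group()))
    return hits

# Hypothetical connection-string fragment, as might appear in an app config file.
sample = (
    "DefaultEndpointsProtocol=https;AccountName=demo;"
    "AccountKey=abc123abc123abc123abc123==;"
    "Server=tcp:demo.database.windows.net;Password=S3cret!;"
)

for rule, value in scan(sample):
    print(rule, "->", value)
```

An AI agent running with a user's permissions can apply exactly this kind of sweep across thousands of files in storage accounts or repositories in minutes, which is why least privilege and proper secret management matter more, not less, once AI-driven automation is in play.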

The goal is to understand how these situations happen and how to design safer environments using least privilege, proper secret management, and stronger boundaries around AI-driven automation.

Danny Kruge

Data Platform MVP | Mission Critical Engineer at Schuberg Philis

Laren, The Netherlands
