AI's Achilles' Heel: Trusting Your Agents Starts with Securing Their NHIs

Non-human identities (NHIs) – APIs, service accounts, tokens, and, increasingly, agentic AI – create a vast, underestimated attack surface. This talk unveils the hidden dangers of these programmatic access points and equips defenders with concrete strategies for securing them, emphasizing how NHI security is foundational to trusting AI agents.

We'll explore the expanding landscape of NHIs across modern infrastructure (IaaS, SaaS, PaaS) and the attack vectors they create. Agentic AI relies heavily on NHIs for interactions, making their security inextricably linked to the AI agent's trustworthiness; they are the new frontier of trust boundaries.

Learn how attackers exploit compromised NHIs for privilege escalation, lateral movement, and supply chain attacks. This can directly hijack an AI agent's capabilities, turning its trusted access into a potent tool for adversarial actions, data exfiltration, or autonomous malicious operations, effectively weaponizing the AI.

A live hacking demonstration will showcase NHI compromise firsthand. We will leverage a combination of NHIs (e.g., AWS access keys, Slack tokens, API keys) to gain privileged access, steal sensitive code, and weaponize a victim's infrastructure, potentially leveraging or impersonating agentic AI processes.

We'll conclude with actionable mitigation strategies: best practices for securing NHIs, implementing robust access controls (like least privilege, just-in-time access), and minimizing compromise damage.
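To make "least privilege" concrete: a minimal, illustrative sketch of a scoped credential for an NHI is an AWS IAM policy that grants a single read action on a single resource, rather than broad wildcard access. (This example is not from the talk; the bucket name is hypothetical.)

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "LeastPrivilegeForReportReaderNHI",
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-reports-bucket/*"
    }
  ]
}
```

A policy scoped this tightly limits the blast radius if the NHI's credentials leak: an attacker can read objects in one bucket, but cannot list other buckets, write data, or pivot to other services.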

Michael Silva

Astrix Security - Director, Solution Engineering - Avid teacher/mentor - Marine Veteran

Raleigh, North Carolina, United States
