AI's Achilles' Heel: Why Trusting Your Agents Starts with Securing Their NHIs
Talk Abstract:
The proliferation of non-human identities (NHIs) such as API keys, service accounts, and tokens, accelerated by the rapid emergence of agentic AI, where autonomous systems operate with increasing authority, has introduced a vast and often underestimated attack surface. This talk unveils the hidden dangers inherent in these programmatic access points and equips defenders with the knowledge and strategies to combat them, highlighting how securing NHIs is foundational to trusting your AI agents.
Content:
We'll dive into the expanding landscape of NHIs, exploring how they are used across modern infrastructure (IaaS, SaaS, PaaS, etc.) and the attack vectors they inherently create. Crucially, the emerging paradigm of agentic AI, where autonomous agents perform complex tasks, relies almost entirely on these NHIs to interact with services, data, and other systems. The security of the AI agent is therefore inextricably linked to the security of its underlying NHIs, making them the new frontier of trust boundaries.
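To make that dependency concrete, here is a minimal Python sketch of an agent's tool layer; all names, tokens, and helper functions are hypothetical illustrations, not a prescribed architecture. Every "autonomous" action is executed through an NHI, so the agent's effective authority is exactly the authority of its credentials.

```python
import os

import boto3                     # AWS SDK for Python
from slack_sdk import WebClient  # official Slack SDK

# Hypothetical tool layer for an AI agent. Every "autonomous" action below
# is really an NHI acting on the agent's behalf, so the agent's effective
# authority (and blast radius) is exactly that of its credentials.
slack = WebClient(token=os.environ["SLACK_BOT_TOKEN"])  # NHI #1: Slack bot token
s3 = boto3.client("s3")                                 # NHI #2: AWS credentials from the default chain

def post_summary(channel: str, text: str) -> None:
    # The agent "sends a message", but the trust boundary is the bot token.
    slack.chat_postMessage(channel=channel, text=text)

def fetch_report(bucket: str, key: str) -> bytes:
    # The agent "reads data", but the trust boundary is the AWS credential.
    return s3.get_object(Bucket=bucket, Key=key)["Body"].read()
```

Compromise either token and you have compromised the agent: there is no separate "AI identity" to defend.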
Learn how attackers exploit compromised NHIs to escalate privileges, move laterally within environments, and orchestrate devastating supply chain attacks. This exploitation can directly hijack the capabilities of an AI agent, turning its trusted access into a potent tool for adversarial actions, data exfiltration, or even autonomous malicious operations, effectively weaponizing the AI itself.
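As a small illustration of why this works, the sketch below shows the first step of key-compromise triage with boto3; the credentials are placeholders, and it assumes the leaked key maps to an IAM user. An attacker, or an incident responder, asks what identity a leaked key belongs to and what policies it carries, which determines how far escalation and lateral movement can go.

```python
import boto3

# Placeholder credentials standing in for a leaked key; everything here is
# read-only reconnaissance, the same triage defenders run after an exposure.
session = boto3.Session(
    aws_access_key_id="AKIAXXXXXXXXEXAMPLE",
    aws_secret_access_key="EXAMPLE-SECRET",
)

# Step 1: who does this key belong to?
identity = session.client("sts").get_caller_identity()
print("Key maps to:", identity["Arn"])  # e.g. an over-privileged CI/CD user

# Step 2: what can it do? Over-broad policies on NHIs are what turn a
# single leaked key into privilege escalation and lateral movement.
iam = session.client("iam")
user_name = identity["Arn"].rsplit("/", 1)[-1]
for policy in iam.list_attached_user_policies(UserName=user_name)["AttachedPolicies"]:
    print("Attached policy:", policy["PolicyArn"])
```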
Live Hacking Demonstration: Witness the power of NHI compromise firsthand. This talk showcases a captivating live demonstration in which we chain a combination of NHIs (e.g., AWS access keys, Slack tokens, API keys) to gain privileged access, steal sensitive code, and weaponize a victim's own infrastructure, potentially leveraging or impersonating agentic AI processes along the way.
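For context on how such tokens are harvested from stolen code in the first place, here is a deliberately simplified sketch of pattern-based secret scanning. The patterns are illustrative approximations of common key formats; real scanners such as gitleaks or trufflehog use far larger rule sets plus entropy analysis.

```python
import re

# Illustrative approximations of common NHI formats -- not exhaustive.
NHI_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "slack_token":    re.compile(r"xox[abps]-[0-9A-Za-z-]{10,}"),
    "github_pat":     re.compile(r"ghp_[0-9A-Za-z]{36}"),
}

def scan(source: str) -> list[tuple[str, str]]:
    """Return (pattern_name, match) pairs for every suspected secret."""
    hits: list[tuple[str, str]] = []
    for name, pattern in NHI_PATTERNS.items():
        hits.extend((name, match) for match in pattern.findall(source))
    return hits

print(scan('aws_key = "AKIAABCDEFGHIJKLMNOP"'))
# -> [('aws_access_key', 'AKIAABCDEFGHIJKLMNOP')]
```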
Empowering the Defense: We'll conclude by offering actionable strategies for mitigating the NHI attack surface. Learn best practices for securing NHIs, implementing robust access controls, and minimizing the damage from a potential compromise. These foundational defense strategies are paramount not only for traditional systems but are especially critical for ensuring the trustworthiness and integrity of your burgeoning agentic AI deployments, preventing them from becoming unforeseen attack vectors.
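One concrete defensive pattern along these lines is replacing long-lived static keys with short-lived, narrowly scoped credentials. The sketch below uses AWS STS assume-role with an inline session policy; the role ARN, session name, and policy are hypothetical, and this is one possible mitigation rather than the talk's prescribed solution.

```python
import boto3

# Hypothetical role and policy: the agent receives credentials that can do
# one thing (read one S3 prefix) and expire in 15 minutes, so a stolen
# token is worth far less to an attacker.
creds = boto3.client("sts").assume_role(
    RoleArn="arn:aws:iam::123456789012:role/agent-readonly",
    RoleSessionName="ai-agent-task",
    DurationSeconds=900,
    Policy=(
        '{"Version":"2012-10-17","Statement":[{"Effect":"Allow",'
        '"Action":"s3:GetObject","Resource":"arn:aws:s3:::reports/*"}]}'
    ),
)["Credentials"]

# Build a client from the short-lived, narrowly scoped session.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```

Pairing this with centralized inventory and rotation of all NHIs shrinks both the window and the scope of any single compromise.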

Michael Silva
Astrix Security - Director, Solution Engineering - Avid teacher/mentor - Marine Veteran
Raleigh, North Carolina, United States