
Michael Silva
Astrix Security - Director, Solution Engineering - Avid teacher/mentor - Marine Veteran
Raleigh, North Carolina, United States
Michael Silva is a technology leader with 17+ years of experience. Presently, Michael is the Director of Solution Engineering at Astrix Security, the pioneers of non-human identity security. Drawing on experience in both technical and customer-facing roles, Michael relates to customers, understands their pain points, and helps define strategies that map to the successful execution of business requirements.
Before joining Astrix, Michael helped take multiple start-ups from infancy to acquisition. Most recently, he was the Technical Director for Lightspin, a CNAPP (Cloud Native Application Protection Platform) that was acquired by Cisco. At Lightspin, Michael designed the technical go-to-market strategy, developed strategic partnerships, and helped grow the business from its inception into the U.S. market. Michael has led a variety of teams, from customer-facing roles at Nutanix and Progress Software (formerly Chef) to technical teams at Cisco and various managed service providers. His knowledge is deeply rooted in public cloud security across all major cloud service providers, as well as Kubernetes security.
Michael also holds numerous professional and specialty certifications from AWS, GCP, SANS, and Nutanix, and is a veteran of the U.S. Marine Corps.
Topics
Optimizing Identity for Agentic AI: A CTO's Blueprint
The advent of agentic AI heralds a transformative era for enterprises, promising unprecedented automation and efficiency. This leap into autonomous systems, however, brings forth a profound and urgent mandate: the establishment, management, and secure provisioning of non-human intelligent agent identities at scale. This session will provide CTOs with the definitive strategic framework for architecting robust identity programs, purpose-built for the evolving agentic AI landscape.
We will delve into the critical realization that non-human identity (NHI) security is the foundational backbone of effective agentic AI identity and access management. As AI agents gain autonomy and interact across complex digital ecosystems, their identities become the primary control plane for ensuring trust, controlling access, and enforcing governance. This presentation will illuminate how common identity challenges historically associated with human users – such as credential sprawl, privilege escalation, and anomalous behavior detection – will now manifest uniquely as non-human identity issues, necessitating entirely new paradigms for their detection, rapid response, and comprehensive lifecycle management.
A core focus will be on the imperative of managing identities for their behavior, posture, and usage at a new, massive scale driven by these agents. We will dissect why traditional identity governance models are inherently insufficient for the dynamic, high-volume interactions characteristic of AI agents. Attendees will gain insight into advanced techniques for continuous behavioral analytics, real-time posture validation, and the implementation of fine-grained access controls that adapt dynamically to agents' evolving roles and functions. Critically, we will define what success truly entails for an identity program engineered to support agentic AI, outlining the key metrics, essential architectural principles, and strategic considerations required to ensure your autonomous systems operate securely, efficiently, and compliantly. CTOs will depart this session with a clear understanding of organizational ownership for this critical domain and an actionable blueprint to implement within their enterprise.
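To make the idea of continuous behavioral analytics concrete, here is a minimal, illustrative sketch of the kind of per-identity baseline check the abstract alludes to. The function name, the z-score approach, and the threshold are assumptions for illustration only, not a description of any particular product's detection logic:

```python
from statistics import mean, stdev

def flag_anomalous_calls(history, latest, z_threshold=3.0):
    """Flag whether an agent identity's latest API-call count deviates
    sharply from its own recent baseline (a simple z-score test)."""
    if len(history) < 2:
        return False  # not enough baseline to judge
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return latest != mu  # flat baseline: any deviation is anomalous
    return abs(latest - mu) / sigma > z_threshold

# An agent that normally makes ~100 calls per hour suddenly makes 900.
baseline = [96, 104, 99, 101, 98, 102]
print(flag_anomalous_calls(baseline, 900))  # True: investigate this identity
print(flag_anomalous_calls(baseline, 103))  # False: within normal behavior
```

Real deployments would track many signals per identity (resources touched, time of day, source networks) and at far higher volume, but the principle is the same: each non-human identity is judged against its own behavioral history, not a static rule.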
AI's Achilles' Heel: Trusting Your Agents Starts with Securing Their NHIs
Non-human identities (NHIs) – APIs, service accounts, tokens, and the rise of Agentic AI – create a vast, underestimated attack surface. This talk unveils the hidden dangers of these programmatic access points and equips defenders with strategies, emphasizing how NHI security is foundational to trusting AI agents.
We'll explore the expanding landscape of NHIs across modern infrastructure (IaaS, SaaS, PaaS) and the attack vectors they create. Agentic AI relies heavily on NHIs for interactions, making their security inextricably linked to the AI agent's trustworthiness; they are the new frontier of trust boundaries.
Learn how attackers exploit compromised NHIs for privilege escalation, lateral movement, and supply chain attacks. This can directly hijack an AI agent's capabilities, turning its trusted access into a potent tool for adversarial actions, data exfiltration, or autonomous malicious operations, effectively weaponizing the AI.
A live hacking demonstration will showcase NHI compromise firsthand. We will leverage a combination of NHIs (e.g., AWS access keys, Slack tokens, API keys) to gain privileged access, steal sensitive code, and weaponize a victim's infrastructure, potentially leveraging or impersonating agentic AI processes.
We'll conclude with actionable mitigation strategies: best practices for securing NHIs, implementing robust access controls (like least privilege, just-in-time access), and minimizing compromise damage.
The Untapped Power of Non-Human Identities: Your Gateway to Modern Mayhem
The explosion of Non-Human Identities (NHIs) – APIs, service accounts, tokens, and the burgeoning realm of Agentic AI – has created a sprawling, often unguarded landscape ripe for exploitation. Forget password sprays and phishing campaigns; the real juicy targets lie within these programmatic access points. This talk peels back the layers of NHI security (or lack thereof) and reveals how you, the offensive security expert, can leverage these overlooked identities to achieve deep access, lateral movement, and ultimately, complete compromise. We'll explore how the absence of a mature NHI security program is your greatest ally in modern red teaming and real-world attacks.
We'll dissect the inherent weaknesses stemming from the lack of focus on NHI security, directly mirroring the vulnerabilities highlighted in the OWASP Top 10 for Non-Human Identities (NHI:2025). Think about it: Improper Offboarding (NHI1) leaves dormant keys and tokens scattered like breadcrumbs. Secret Leakage (NHI2) in code, logs, and configurations is the low-hanging fruit you've been waiting for. Overprivileged NHIs (NHI5) grant immediate god-like access, and Insecure Cloud Deployments (NHI6) expose sensitive credentials. The emergence of Agentic AI amplifies these opportunities exponentially. These autonomous systems, operating with increasing authority, rely entirely on NHIs. Compromise the AI agent's underlying credentials, and you've effectively weaponized the AI itself.
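To illustrate how cheap Secret Leakage (NHI2) is to hunt for, here is a toy scanner for credential-shaped strings. The two regexes use well-known prefixes (AWS access key IDs begin with `AKIA`; Slack bot tokens begin with `xoxb-`), but real secret-scanning tools use much larger rule sets plus entropy checks; this sketch is illustrative only:

```python
import re

# Illustrative patterns only; production scanners cover hundreds of
# credential formats and add entropy and context checks.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "slack_bot_token": re.compile(r"\bxoxb-[0-9A-Za-z-]{10,}\b"),
}

def scan_for_leaked_secrets(text):
    """Return (label, match) pairs for credential-shaped strings in text."""
    hits = []
    for label, pattern in SECRET_PATTERNS.items():
        for match in pattern.findall(text):
            hits.append((label, match))
    return hits

config = 'aws_key = "AKIAABCDEFGHIJKLMNOP"\nslack = "xoxb-123456789012-abcDEF"'
print(scan_for_leaked_secrets(config))  # both secrets found
```

Run the same idea across repositories, CI logs, and config buckets and the "breadcrumbs" the paragraph above describes become a target list.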
Building a Future-Proof NHI Security Program for Agentic AI
The explosion of Non-Human Identities (NHIs) – from APIs to the autonomous Agentic AI systems – creates a critical, underestimated attack surface. This talk, tailored for OWASP, will show you how to build a robust NHI security program that not only tackles these risks but also securely supports the integration of Agentic AI.
We'll dive into the OWASP Top 10 for Non-Human Identities (NHI:2025), covering risks like improper offboarding, secret leakage, overprivileged NHIs, and insecure cloud configurations. As AI agents become your digital workforce, their security directly hinges on their underlying NHIs. We'll demonstrate how attackers exploit these OWASP NHI vulnerabilities to escalate privileges, move laterally, and orchestrate supply chain attacks, ultimately weaponizing your AI agents for malicious operations and data exfiltration.
We'll also illustrate how NHI compromises directly map to familiar MITRE ATT&CK framework tactics, drawing parallels to attacks on human identities. This includes Initial Access, Persistence, Privilege Escalation, Credential Access, Lateral Movement, and Exfiltration, helping you align your NHI defenses with existing threat intelligence.
The session will conclude with actionable strategies for building your NHI security program. Learn best practices for robust lifecycle management, least privilege enforcement, secure authentication, and continuous monitoring. These foundational defenses are crucial for securing both traditional systems and ensuring the trustworthiness of your burgeoning Agentic AI deployments, preventing them from becoming unforeseen attack vectors.
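As a concrete taste of the lifecycle-management practice above, here is a minimal sketch of an offboarding hygiene check: flag any NHI credential that has sat unused past an idle window. The inventory shape, names, and 90-day window are illustrative assumptions, not a prescribed implementation:

```python
from datetime import datetime, timedelta, timezone

def stale_credentials(inventory, max_idle_days=90, now=None):
    """Return credential IDs whose last recorded use exceeds the idle
    window -- candidates for rotation or revocation (improper-offboarding risk)."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=max_idle_days)
    return [cred_id for cred_id, last_used in inventory.items()
            if last_used < cutoff]

now = datetime(2025, 6, 1, tzinfo=timezone.utc)
inventory = {
    "svc-deploy-key": datetime(2025, 5, 20, tzinfo=timezone.utc),  # recently used
    "old-ci-token":   datetime(2024, 11, 3, tzinfo=timezone.utc),  # dormant
}
print(stale_credentials(inventory, now=now))  # ['old-ci-token']
```

A real program would pull last-used data from cloud provider APIs and ticket the findings, but even this simple sweep closes the dormant-credential gap that attackers look for first.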
AI's Achilles' Heel: Why Trusting Your Agents Starts with Securing Their NHIs
Talk Abstract:
The proliferation of non-human identities (NHIs) – APIs, service accounts, tokens, and the rapid emergence of Agentic AI, where autonomous systems operate with increasing authority – has introduced a vast and often underestimated attack surface. This talk unveils the hidden dangers inherent in these programmatic access points and equips defenders with the knowledge and strategies to combat them, highlighting how securing NHIs is foundational to trusting your AI agents.
Content:
We'll dive into the expanding landscape of NHIs, exploring how they are used across modern infrastructure (IaaS, SaaS, PaaS, etc.) and the attack vectors they inherently create. Crucially, the emerging paradigm of agentic AI, where autonomous agents perform complex tasks, relies almost entirely on these NHIs to interact with services, data, and other systems. The security of the AI agent is therefore inextricably linked to the security of its underlying NHIs, making them the new frontier of trust boundaries.
Learn how attackers exploit compromised NHIs to escalate privileges, move laterally within environments, and orchestrate devastating supply chain attacks. This exploitation can directly hijack the capabilities of an AI agent, turning its trusted access into a potent tool for adversarial actions, data exfiltration, or even autonomous malicious operations, effectively weaponizing the AI itself.
Live Hacking Demonstration: Witness the power of NHI compromise firsthand. This talk showcases a captivating live demonstration where we will leverage a combination of NHIs (e.g., AWS access keys, Slack tokens, API keys) to gain privileged access, steal sensitive code, and weaponize a victim's own infrastructure, potentially leveraging or impersonating agentic AI processes along the way.
Empowering the Defense: We'll conclude by offering actionable strategies for mitigating the NHI attack surface. Learn best practices for securing NHIs, implementing robust access controls, and minimizing the damage from a potential compromise. These foundational defense strategies are paramount not only for traditional systems but are especially critical for ensuring the trustworthiness and integrity of your burgeoning agentic AI deployments, preventing them from becoming unforeseen attack vectors.
Security BSides Cayman Islands 2025 (Sessionize Event, Upcoming)