
Securing AI Pipelines: Real-World Attacks on Kubernetes-Based AI Infrastructure

When an ML engineer deploys a Stable Diffusion model to Kubernetes, they unwittingly create an attack surface unlike anything traditional security teams have encountered. I discovered this firsthand after our "perfectly secured" AI cluster was compromised.
In this no-holds-barred session, I'll demonstrate live exploits against common AI deployment patterns, showing how attackers pivot from an innocent model-serving endpoint to exfiltrating proprietary models worth millions and compromising the underlying infrastructure. For each vulnerability exposed, I'll share concrete defensive measures developed in the trenches of enterprise AI deployments, including custom admission controllers, GPU isolation patterns, and monitoring strategies crafted specifically for AI workloads.
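
As a preview of the admission-controller approach, here is a minimal sketch of a validating webhook that refuses Pods requesting GPUs unless they carry an approval label. It is illustrative only: the `ai-security/gpu-approved` label, the `/validate` path, the certificate file names, and the trimmed-down AdmissionReview structs are assumptions for the sketch, not the controller demonstrated in the session.

```go
// Minimal sketch of a validating admission webhook that blocks Pods
// requesting GPUs unless they carry an approval label. The structs below
// mirror only the fields of the admission/v1 AdmissionReview JSON that
// this check needs; label and path names are hypothetical.
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

type admissionReview struct {
	APIVersion string             `json:"apiVersion"`
	Kind       string             `json:"kind"`
	Request    *admissionRequest  `json:"request,omitempty"`
	Response   *admissionResponse `json:"response,omitempty"`
}

type admissionRequest struct {
	UID    string          `json:"uid"`
	Object json.RawMessage `json:"object"` // the Pod being admitted
}

type admissionResponse struct {
	UID     string  `json:"uid"`
	Allowed bool    `json:"allowed"`
	Status  *status `json:"status,omitempty"`
}

type status struct {
	Message string `json:"message"`
}

// pod captures only the Pod fields this policy inspects.
type pod struct {
	Metadata struct {
		Labels map[string]string `json:"labels"`
	} `json:"metadata"`
	Spec struct {
		Containers []struct {
			Resources struct {
				Limits map[string]string `json:"limits"`
			} `json:"resources"`
		} `json:"containers"`
	} `json:"spec"`
}

func validate(w http.ResponseWriter, r *http.Request) {
	var review admissionReview
	if err := json.NewDecoder(r.Body).Decode(&review); err != nil || review.Request == nil {
		http.Error(w, "malformed AdmissionReview", http.StatusBadRequest)
		return
	}

	var p pod
	if err := json.Unmarshal(review.Request.Object, &p); err != nil {
		http.Error(w, "malformed Pod object", http.StatusBadRequest)
		return
	}

	allowed, msg := true, ""
	for _, c := range p.Spec.Containers {
		if _, wantsGPU := c.Resources.Limits["nvidia.com/gpu"]; wantsGPU {
			// Hypothetical label gating GPU access to reviewed workloads.
			if p.Metadata.Labels["ai-security/gpu-approved"] != "true" {
				allowed = false
				msg = "GPU workloads require the ai-security/gpu-approved=true label"
			}
		}
	}

	// Echo the request UID back; apiVersion and kind carry over from the decode.
	review.Response = &admissionResponse{UID: review.Request.UID, Allowed: allowed}
	if !allowed {
		review.Response.Status = &status{Message: msg}
	}
	review.Request = nil

	w.Header().Set("Content-Type", "application/json")
	if err := json.NewEncoder(w).Encode(review); err != nil {
		log.Printf("encode response: %v", err)
	}
}

func main() {
	http.HandleFunc("/validate", validate)
	// The API server only calls webhooks over TLS with a cert it trusts.
	log.Fatal(http.ListenAndServeTLS(":8443", "tls.crt", "tls.key", nil))
}
```

In a cluster, this would be registered through a ValidatingWebhookConfiguration pointing at the webhook's Service, so the API server consults it on every Pod CREATE before the workload is scheduled.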

Abhinav Sharma

Site Reliability Engineer at KodeKloud | Microsoft MVP | GSoC @OpenSUSE | GitHub Campus Expert

Jaipur, India
