Threat Modeling the AI Stack: Securing Data and Model Pipelines from Ingestion to Inference
As organizations accelerate AI adoption, the security of data and AI pipelines often lags behind the attention paid to model performance. From poisoned datasets and compromised feature stores to model exfiltration and prompt injection, each stage of the AI lifecycle introduces new and often misunderstood risks. This talk presents a practical framework for threat modeling across modern AI stacks, from raw data ingestion and training pipelines to model deployment and API exposure.
Using real-world examples and case studies, we’ll explore:
Common attack vectors in data and AI workflows
Misconfigurations in MLOps platforms and how attackers exploit them
How traditional threat modeling approaches (e.g., STRIDE) can be extended for AI (see the sketch after this list)
Tools and controls to secure pipelines at each stage
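To give a flavor of the STRIDE extension in practice, here is a minimal illustrative sketch in Python that pairs classic STRIDE categories with AI-pipeline-specific threats named above. The stage names, threat entries, and the `threats_for_stage` helper are illustrative assumptions, not the session's actual framework or an exhaustive taxonomy.

```python
# Illustrative sketch: extending STRIDE categories with AI-pipeline-specific
# threats. Stages and entries below are examples, not an authoritative catalog.
from dataclasses import dataclass

@dataclass
class Threat:
    stride: str   # classic STRIDE category
    stage: str    # AI lifecycle stage where the threat applies
    example: str  # concrete attack vector

THREATS = [
    Threat("Tampering", "data ingestion", "poisoned training datasets"),
    Threat("Tampering", "feature store", "compromised or back-doored features"),
    Threat("Information disclosure", "model serving", "model exfiltration via API scraping"),
    Threat("Elevation of privilege", "inference", "prompt injection to bypass guardrails"),
    Threat("Spoofing", "training pipeline", "unauthenticated pipeline triggers"),
    Threat("Denial of service", "inference", "resource-exhaustion queries against endpoints"),
]

def threats_for_stage(stage: str) -> list[Threat]:
    """Filter the catalog to a single pipeline stage during a review."""
    return [t for t in THREATS if t.stage == stage]

if __name__ == "__main__":
    for t in threats_for_stage("inference"):
        print(f"{t.stride}: {t.example}")
```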
Whether you're a security architect, data engineer, or ML practitioner, you'll leave with actionable strategies to harden AI systems end-to-end, plus a threat model template (sketched below) that you can apply immediately in your environment.
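As a rough sketch of what one entry in such a template might look like, the record below captures a component, its assets and entry points, a threat, and candidate mitigations. All field names and values here are hypothetical; the template distributed in the session may be structured differently.

```python
# Hypothetical shape of a single threat-model entry; field names are
# illustrative assumptions, not the session's actual template.
THREAT_MODEL_ENTRY = {
    "component": "training pipeline",          # system under review
    "assets": ["training data", "model weights"],
    "entry_points": ["data ingestion API", "CI/CD triggers"],
    "threat": "poisoned dataset introduced upstream",
    "stride_category": "Tampering",
    "likelihood": "medium",                    # qualitative rating
    "impact": "high",
    "mitigations": [
        "dataset provenance tracking and signing",
        "statistical drift / anomaly checks before training",
        "least-privilege access to feature stores",
    ],
    "status": "open",
}

if __name__ == "__main__":
    # A review meeting might walk the open entries and their mitigations.
    for m in THREAT_MODEL_ENTRY["mitigations"]:
        print(f"- {m}")
```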
Co-presented with Ike Ellis