Session
Commit Secure AI: Embedding Security in Code, Pipelines, and Production
As AI features become a standard part of modern applications—whether it’s a machine learning model embedded in a backend service or an LLM-powered assistant integrated into user workflows—security must be treated as a first-class concern by developers, not just platform teams.
This talk introduces Secure-by-Design (SbD) for developers working on AI-powered systems. It’s a practical, code-to-deployment strategy that weaves security directly into your AI development workflow—from threat modeling during architecture reviews to CI/CD pipeline enforcement, runtime monitoring, and hardened APIs. Whether you're deploying models via containers, integrating AI APIs, or building real-time systems, you'll learn how to secure what you commit—line by line, function by function.
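To make "secure what you commit" concrete, here is a minimal sketch of what a secure model wrapper might look like: it gates every call behind a role allow-list and sanitizes input before the underlying model ever sees it. Everything here (SecureModelWrapper, ALLOWED_ROLES, the EchoModel stand-in) is an illustrative assumption, not code from the talk.

import logging
import re

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("model-wrapper")

# Illustrative allow-list of caller roles permitted to invoke the model.
ALLOWED_ROLES = {"inference-service", "batch-scorer"}

MAX_PROMPT_LEN = 4096
CONTROL_CHARS = re.compile(r"[\x00-\x08\x0b\x0c\x0e-\x1f]")


class SecureModelWrapper:
    """Wraps any object with a predict(text) method, enforcing a
    permission check and input sanitization before every call."""

    def __init__(self, model):
        self._model = model

    def predict(self, prompt: str, caller_role: str) -> str:
        # Permissioned access: reject callers outside the allow-list.
        if caller_role not in ALLOWED_ROLES:
            log.warning("denied model call from role=%s", caller_role)
            raise PermissionError(f"role {caller_role!r} may not invoke the model")

        # Input sanitization: bound the length and strip control characters
        # that often ride along with injection payloads.
        if len(prompt) > MAX_PROMPT_LEN:
            raise ValueError("prompt exceeds maximum length")
        clean = CONTROL_CHARS.sub("", prompt)

        log.info("invoking model for role=%s (%d chars)", caller_role, len(clean))
        return self._model.predict(clean)


if __name__ == "__main__":
    class EchoModel:
        # Stand-in for a real model; uppercases its input.
        def predict(self, text: str) -> str:
            return text.upper()

    wrapper = SecureModelWrapper(EchoModel())
    print(wrapper.predict("ship it securely", caller_role="inference-service"))

Keeping the checks in a wrapper rather than in each call site means the policy travels with the model, whether it is served from a container or behind an API gateway.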
We’ll explore real-world incidents of AI vulnerabilities, such as adversarial exploits, data poisoning, and insecure inference endpoints, and show how teams that adopt SbD practices early reduce CVEs, accelerate remediation, and ship with confidence. This session highlights tools and techniques that developers can apply immediately: secure model wrappers, permissioned model APIs, input sanitization, adversarial testing, and observability wired into pipelines built on GitHub Actions, Terraform, and Kubernetes.
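As a taste of adversarial testing, the sketch below perturbs an input at the character level and asserts the model's decision does not flip; a test in this shape can run in any CI system, including GitHub Actions. The perturb helper and the placeholder classify function are hypothetical stand-ins for a real inference call.

import random
import string


def perturb(text: str, rate: float = 0.05, seed: int = 0) -> str:
    """Swap a small fraction of characters at random to simulate
    simple character-level adversarial noise."""
    rng = random.Random(seed)
    chars = list(text)
    for i in range(len(chars)):
        if rng.random() < rate:
            chars[i] = rng.choice(string.ascii_lowercase)
    return "".join(chars)


def classify(text: str) -> str:
    """Placeholder for a real inference call. Length-based on purpose,
    so it stays stable under character swaps and the test passes."""
    return "long" if len(text) > 20 else "short"


def test_prediction_stable_under_noise():
    sample = "the quick brown fox jumps over the lazy dog"
    baseline = classify(sample)
    # Re-run inference under several perturbation seeds; a robust model
    # should keep its decision when the input barely changes.
    for seed in range(25):
        assert classify(perturb(sample, seed=seed)) == baseline


if __name__ == "__main__":
    test_prediction_stable_under_noise()
    print("adversarial smoke test passed")

Seeding the perturbation keeps the test deterministic, so a failure in CI can be reproduced locally with the same seed.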
If you write code that touches AI, this session will help you own your security posture. The era of AI-enhanced development demands AI-secure code—and that starts with the people committing it.

Vasanth Mudavatu
Birla Institute of Technology and Science, Pilani, India
McKinney, Texas, United States