Securing the AI/ML Lifecycle With MLSecOps: Open Source Best Practices
AI/ML adoption is accelerating, but security remains an afterthought in many MLOps pipelines. Unlike traditional software, ML systems face unique threats such as data poisoning, adversarial manipulation, and model theft. This session introduces MLSecOps, a “secure-by-design” approach that embeds security across the AI/ML lifecycle, from data preparation and training through deployment and monitoring.
During the session, the presenters cover current OpenSSF work in this field, mapping OWASP AI/ML threats (e.g., data poisoning, adversarial manipulation, model theft) to concrete mitigations and open source tools (e.g., Sigstore, SLSA, CycloneDX, Syft) that practitioners can apply today. Attendees will learn how to operationalize MLSecOps in their organizations, improve trust in AI systems, and engage with the OpenSSF AI/ML Security Working Group. By leveraging open source, the community can reduce risk, increase resilience, and lead the way in securing AI innovation.