
Securing Open Source AI Frameworks and Workflows: Best Practices and Emerging Trends

In this presentation, we will explore the key security challenges facing open source AI and discuss best practices for mitigating these risks. We will cover topics such as:

Common security vulnerabilities in popular open source AI frameworks (e.g., TensorFlow, PyTorch, Scikit-learn)
Securing AI development workflows, including data pipelines, model training, and deployment
Implementing secure coding practices and conducting security audits for AI codebases
Managing dependencies and mitigating risks from third-party libraries and tools
Protecting sensitive data used in AI training and inference, including techniques like differential privacy and federated learning
Addressing privacy concerns and complying with relevant regulations (e.g., GDPR, CCPA) in AI contexts
Emerging trends and tools for enhancing AI security, such as confidential computing, trusted execution environments, and blockchain-based solutions

Attendees of this presentation will leave with a deeper understanding of the security landscape in open source AI, as well as practical insights and strategies for building and deploying AI systems that are both innovative and secure.


Vaibhav Malik, Partner Solutions Architect, Cloudflare

St. Louis, Missouri, United States


