
Invisible Infiltration of AI Supply Chains: Protective Measures against Adversarial Actors

Malicious human and AI actors can infiltrate AI supply chains, compromising the integrity and reliability of the resulting AI systems through training data tampering, software or model backdoors, model interference, or novel runtime attacks against the model or its hosting infrastructure.

This talk examines the importance of securing the data, models, and pipelines involved at each step of an AI supply chain. We evaluate the efficacy of emerging industry best practices and risk assessment strategies gathered from the FINOS AI Readiness Working Group, TAG Security Kubeflow joint assessment, and case studies with air-gapped and cloud-based AI/ML deployments for regulated and privacy-protecting workloads.

In this talk, we:
- threat model an AI system, from supply chain, through training and tuning, to production inference and integration
- implement quantified security controls and monitoring mechanisms for an AI enterprise architecture
- mitigate the risks associated with adversarial attacks on AI systems
- address compliance and regulation requirements with actionable remediations
- accelerate AI adoption while balancing delivery speed against minimum viable security measures

Vicente Herrera

Principal Consultant at Control Plane

Alcalá de Guadaira, Spain


