Session

What Does Security Look Like When Building AI?

Anyone working with AI, or considering it, should care about security. When building an AI-powered system or product, the traditional attack surfaces and mitigations still apply. However, AI introduces new attack surfaces depending on the techniques used, and its higher level of automation means small failures or compromises can be amplified quickly, increasing both the speed and scale of harm.

In this talk, we’ll discuss how AI systems share many attack vectors with traditional software, and what those attacks look like in practice. We’ll also examine AI-specific attacks such as data poisoning, prompt injection, model extraction, and inference-based data leakage, using real-world incidents across generative AI, machine learning, computer vision, and optimization systems. For each class of attack, we’ll focus on system-level mitigations and the tradeoffs involved, rather than one-size-fits-all solutions.

You don’t need prior experience with AI or security to benefit from this session. You’ll see practical techniques for building more secure AI-enabled software, develop a clearer mental model of AI-specific risks, and leave better equipped to continue learning as the AI security landscape evolves.

This talk was given at CodeMash 2024, StirTrek 2024, and Momentum 2024. However, the content has been refreshed to reflect recent changes in the landscape, including new breaches and mitigations.

Robert Herbig

AI Practice Lead at SEP

Indianapolis, Indiana, United States
