What Does Security Look Like When Building AI?
Anyone working with AI, or considering doing so, should care about security. When building an AI-powered system or product, the traditional attack surfaces and mitigations still apply. However, new attack surfaces can appear depending on the specific AI approaches used. And because AI systems are typically more highly automated, they can do more harm if they are compromised.
In this talk, we’ll discuss how AI has the same attack vectors as traditional software, and what those attacks look like. We’ll also discuss new attacks that are specific to generative AI (e.g. LLMs like ChatGPT), machine learning & computer vision systems, and optimization techniques. For each type of attack, we’ll point out how it can be thwarted, or at least mitigated.
Previous experience with AI or security is not required to benefit from the session. Attendees will see techniques to help them write more secure AI-enabled software, walk away with a better understanding of AI-specific attack vectors and their mitigations, and be equipped to find security education resources in the future. The goal is not to teach the intricacies of each technique, but rather to give attendees the lay of the land and the key terms to google when they leave.

Robert Herbig
AI Practice Lead at SEP
Indianapolis, Indiana, United States