Session

Outsmarting AI: Understanding, Preventing, and Defending Against Adversarial Attacks

Artificial Intelligence has revolutionized numerous fields, yet its vulnerability to manipulation poses a significant challenge. Deceptively simple alterations to a model's inputs can lead it to make glaringly incorrect predictions, a phenomenon known as an adversarial attack.
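
To make the phenomenon concrete, below is a minimal sketch of one well-known attack, the Fast Gradient Sign Method (FGSM), written in PyTorch. The model, data, and epsilon value are placeholders chosen purely for illustration and are not taken from the session itself.

    import torch
    import torch.nn as nn

    def fgsm_attack(model, x, y, epsilon=0.03):
        # Perturb x within an L-infinity ball of radius epsilon so that the
        # model's loss on the true labels y increases.
        x = x.clone().detach().requires_grad_(True)
        loss = nn.CrossEntropyLoss()(model(x), y)
        loss.backward()
        # Step in the direction of the sign of the input gradient.
        x_adv = x + epsilon * x.grad.sign()
        return x_adv.clamp(0, 1).detach()

    # Toy usage: a randomly initialized classifier on fake image data.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    x = torch.rand(8, 1, 28, 28)       # batch of "images" scaled to [0, 1]
    y = torch.randint(0, 10, (8,))     # arbitrary class labels
    x_adv = fgsm_attack(model, x, y)
    print((x_adv - x).abs().max())     # perturbation stays within epsilon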

In this session, we will dive deep into the world of adversarial attacks, exploring how they function and why AI systems fall victim to them. We'll scrutinize various forms of attack, unpacking their methodologies and implications. Understanding these techniques is key to fortifying our AI systems against potential threats.

Having examined the problem, we'll then turn our attention to solutions. We will introduce and explain two robust defense methods.
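
The session will detail the two methods itself; purely as a hedged illustration of what one common defense can look like in code, the sketch below shows adversarial training, which augments each training batch with adversarial examples so the model learns to classify perturbed inputs correctly. It reuses the hypothetical model, x, y, and fgsm_attack names from the sketch above, and whether adversarial training is one of the two methods covered is an assumption.

    import torch
    import torch.nn as nn

    def adversarial_training_step(model, optimizer, x, y, attack, epsilon=0.03):
        # One optimization step on a batch augmented with adversarial examples
        # crafted against the current model.
        x_adv = attack(model, x, y, epsilon=epsilon)
        optimizer.zero_grad()
        inputs = torch.cat([x, x_adv])    # mix clean and adversarial inputs
        targets = torch.cat([y, y])
        loss = nn.CrossEntropyLoss()(model(inputs), targets)
        loss.backward()
        optimizer.step()
        return loss.item()

    # Toy usage with the placeholder model and fgsm_attack defined earlier.
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    print(adversarial_training_step(model, optimizer, x, y, fgsm_attack))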

Finally, we will demonstrate how to evaluate the robustness of AI models against adversarial attacks. By assessing model performance under adversarial conditions, we can gauge the effectiveness of our defense strategies and fine-tune them for improved protection.
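
As a rough sketch of what such an evaluation can look like, the snippet below compares accuracy on clean inputs with accuracy on adversarially perturbed inputs, again reusing the hypothetical model, x, y, and fgsm_attack names from the earlier sketches. A real evaluation would use a trained model, a held-out test set, and typically stronger attacks than FGSM.

    import torch

    def accuracy(model, x, y):
        # Fraction of inputs the model classifies correctly.
        with torch.no_grad():
            return (model(x).argmax(dim=1) == y).float().mean().item()

    def robust_accuracy(model, x, y, attack, **attack_kwargs):
        # Accuracy after the inputs have been adversarially perturbed.
        x_adv = attack(model, x, y, **attack_kwargs)
        return accuracy(model, x_adv, y)

    clean_acc = accuracy(model, x, y)
    adv_acc = robust_accuracy(model, x, y, fgsm_attack, epsilon=0.03)
    print(f"clean accuracy: {clean_acc:.2%}  adversarial accuracy: {adv_acc:.2%}")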

By the end of this session, participants will have gained a comprehensive understanding of adversarial attacks, learned effective defense strategies, and been equipped with techniques to evaluate model robustness.

Serg Masis

Lead Data Scientist, Syngenta ● Bestselling Author of ML/AI books

Raleigh, North Carolina, United States
