Session
Weaponizing AI: Adversarial Attacks, Hallucinations, and the Offensive Security Frontier
As artificial intelligence becomes more integral to critical systems, it increasingly serves as both a target and a vector for offensive security attacks. Our session, "Weaponizing AI," delves into the murky intersection of AI hallucinations, adversarial attacks, and the offensive techniques built to exploit these vulnerabilities. We will explore how attackers use adversarial techniques to induce AI hallucinations that deceive systems into producing false outputs and disrupt machine perception, opening new avenues for attacks in sectors such as autonomous vehicles, healthcare, and financial systems.
We'll dissect the mechanics of adversarial examples, backdoor attacks, and model poisoning, demonstrating how these tactics compromise the integrity of AI systems in mission-critical environments. The session will also cover effective defense strategies, such as red-teaming AI models, deploying anomaly detection, and applying adversarial training to bolster system defenses.
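To make the attack surface concrete, below is a minimal sketch of the Fast Gradient Sign Method (FGSM), one of the simplest adversarial-example techniques of the kind mentioned above. It assumes a PyTorch image classifier; `model`, `image`, `label`, and the `epsilon` budget are illustrative placeholders, not material from the session itself.

```python
# Hypothetical FGSM sketch: perturb an input in the direction that
# maximizes the classifier's loss. Assumes a PyTorch model with inputs
# normalized to [0, 1]; all names here are illustrative placeholders.
import torch
import torch.nn.functional as F

def fgsm_attack(model: torch.nn.Module,
                image: torch.Tensor,
                label: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    """Return an adversarially perturbed copy of `image`."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel by epsilon in the sign of the loss gradient: a
    # perturbation that is tiny per pixel yet can flip the prediction.
    adv_image = image + epsilon * image.grad.sign()
    return adv_image.clamp(0.0, 1.0).detach()
```

The sign of the gradient, rather than the gradient itself, is used so that every pixel moves by the same bounded amount, keeping the perturbation imperceptible while still steering the model's loss upward.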
Key Takeaways:
1) An understanding of how adversarial attack vectors exploit AI hallucinations and failure modes.
2) Insights into offensive techniques like adversarial examples, model manipulation, and backdoor attacks.
3) Defense strategies, including adversarial training and anomaly detection, to mitigate risks from AI manipulation (see the sketch after this list).
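As a hedged illustration of takeaway 3, the following sketch folds FGSM perturbations back into the training loop, a standard form of adversarial training. `model`, `loader`, `optimizer`, and `epsilon` are assumed placeholders; this is not code from the speaker.

```python
# A minimal adversarial-training sketch: each clean batch is augmented
# with FGSM-perturbed copies so the model learns to resist them.
# `model`, `loader`, and `optimizer` are assumed placeholders.
import torch
import torch.nn.functional as F

def adversarial_training_epoch(model, loader, optimizer, epsilon=0.03):
    model.train()
    for images, labels in loader:
        # Craft FGSM counterparts of the clean batch.
        images_adv = images.clone().detach().requires_grad_(True)
        F.cross_entropy(model(images_adv), labels).backward()
        images_adv = (images_adv
                      + epsilon * images_adv.grad.sign()).clamp(0, 1).detach()

        # Train on a 50/50 mix of clean and adversarial examples so
        # robustness gains do not erase clean-data accuracy.
        optimizer.zero_grad()
        loss = 0.5 * F.cross_entropy(model(images), labels) \
             + 0.5 * F.cross_entropy(model(images_adv), labels)
        loss.backward()
        optimizer.step()
```

Mixing clean and adversarial batches is a common design choice: training only on perturbed inputs tends to degrade accuracy on unmodified data.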
Aviral Srivastava
Offensive security for the age of machine intelligence
Sunnyvale, California, United States