Session
Behind the Binaries: Cracking Compiled AI for Vulnerabilities
This presentation explores the risks and techniques involved in reverse engineering AI models, focusing on how attackers can extract compiled models and exploit them to mount adversarial attacks. We’ll cover vulnerabilities in popular model formats such as ONNX and TFLite, as well as the challenges of reversing more complex models compiled with frameworks like TVM and Glow, emphasizing the need for stronger AI security practices.
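To illustrate why serialized formats such as ONNX are easy to inspect, here is a minimal sketch (not material from the talk) using the official onnx Python package; the file name model.onnx is a placeholder. Because an ONNX file is plain protobuf, the full graph and trained weights can be read offline without running the model.

    import onnx
    from onnx import numpy_helper

    # Load the serialized model; the graph and all weights are stored
    # directly in the protobuf file. "model.onnx" is a placeholder path.
    model = onnx.load("model.onnx")

    # Enumerate the operators that make up the network architecture.
    for node in model.graph.node:
        print(node.op_type, node.input, node.output)

    # Recover the trained weights as NumPy arrays.
    for init in model.graph.initializer:
        weights = numpy_helper.to_array(init)
        print(init.name, weights.shape)

Models compiled with TVM or Glow do not expose their structure this directly, which is why reversing them is considerably harder.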

Jason Kramer
ObjectSecurity, Senior Software Engineering Researcher