Confuse, Obfuscate, Disrupt: Using Adversarial Techniques for Better AI and True Anonymity
In a world where algorithms dictate what we see, buy, and believe, understanding how to disrupt and manipulate them is as powerful as knowing how to build them. This session dives into adversarial techniques that challenge the assumptions of AI/ML models. By introducing noise, obfuscating data, and exploiting algorithmic learning cycles, we can uncover hidden biases and vulnerabilities in AI systems. These techniques push the boundaries of ML training, offering developers a toolkit for crafting more resilient models.
Beyond improving AI, these techniques offer a unique avenue for achieving digital anonymity. Whether you want to obfuscate your online footprint or prevent data collectors from profiling you, adversarial inputs provide a practical path forward. In this session, attendees will learn how adversarial methods can disrupt vision and NLP models and empower individuals to take control of their privacy, with live demonstrations of these disruptive strategies in action.
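To make the idea of "introducing noise" concrete, here is a minimal sketch of one well-known adversarial technique, the Fast Gradient Sign Method (FGSM), applied to a hand-built logistic regression classifier. The weights, input, and epsilon values are illustrative assumptions, not material from the session itself:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x, w, b):
    """Confidence that input x belongs to the positive class."""
    return sigmoid(np.dot(w, x) + b)

def fgsm_perturb(x, w, b, y_true, eps):
    """FGSM: nudge the input in the direction that increases the
    loss, with each feature shifted by exactly +/- eps."""
    p = predict(x, w, b)
    # Gradient of binary cross-entropy w.r.t. the input x is (p - y) * w.
    grad_x = (p - y_true) * w
    return x + eps * np.sign(grad_x)

# Illustrative (assumed) model and input.
w = np.array([2.0, -1.5, 0.5])
b = 0.1
x = np.array([1.0, -1.0, 0.5])   # confidently classified as positive
y = 1.0

x_adv = fgsm_perturb(x, w, b, y, eps=0.9)
print(predict(x, w, b))      # high confidence on the clean input
print(predict(x_adv, w, b))  # confidence drops on the perturbed input
```

The same gradient-sign idea scales up to vision and NLP models: a small, targeted perturbation that is nearly invisible to a human can sharply change a model's output, which is exactly the property that makes adversarial inputs useful both for stress-testing models and for frustrating automated profiling.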

David vonThenen
AI/ML Engineer | Keynote Speaker | Building Scalable AI Architectures & ML Solutions | Python, Go, C++
Long Beach, California, United States