API Security for the AI Era: Detecting and Preventing Adversarial Manipulation

In a digital landscape dominated by APIs and AI, adversarial manipulation has become a critical security risk. This session explores the intersection of APIs, AI security, and adversarial attacks. We'll dissect how adversaries manipulate the APIs that feed data to machine learning models, injecting noise, crafting misleading inputs, and exploiting data obfuscation techniques to compromise model integrity and security. Attendees will gain insight into real-world adversarial scenarios, learn practical defensive techniques, and understand the implications for privacy, model fairness, and data reliability.
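To make one of these attack classes concrete, here is a minimal, self-contained sketch of crafting a misleading input via a gradient-based (FGSM-style) perturbation against a toy logistic-regression model standing in for one served behind an API. The model, weights, and payload below are illustrative assumptions for this page, not material from the session itself.

import numpy as np

rng = np.random.default_rng(0)

# Toy model: y = sigmoid(w . x + b), a stand-in for a model an API feeds.
w = rng.normal(size=8)
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    return sigmoid(w @ x + b)

x = rng.normal(size=8)   # a legitimate-looking API payload
y_true = 1.0             # the label the attacker wants the model to miss

# For logistic loss, the gradient of the loss w.r.t. the input is
# (p - y) * w, so an attacker with gradient access (or estimates of it)
# can craft a small perturbation that maximally increases the loss.
p = predict(x)
grad_x = (p - y_true) * w

epsilon = 0.25           # perturbation budget: small enough to evade naive checks
x_adv = x + epsilon * np.sign(grad_x)

print(f"clean score: {predict(x):.3f}  adversarial score: {predict(x_adv):.3f}")
print(f"max per-feature change: {np.abs(x_adv - x).max():.3f}")

The point of the sketch: the adversarial payload differs from the clean one by at most epsilon per feature, yet can move the model's score substantially, which is why payload-level anomaly checks alone are rarely sufficient.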

The session will provide practical examples and live demonstrations showcasing how adversarial strategies can exploit API vulnerabilities to undermine AI models. We'll examine defensive frameworks and best practices for securing APIs against adversarial attacks, ensuring data integrity, maintaining privacy compliance, and reinforcing ethical AI usage. By the end, attendees will be equipped with strategies for hardening their AI-driven APIs, proactively identifying vulnerabilities, and deploying robust security measures to mitigate adversarial threats.
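As a hedged illustration of the defensive side, the sketch below shows a pre-inference guard that rejects API payloads whose features drift far from training-time statistics, one simple way to catch the injected-noise payloads described above. The InputGuard class, its threshold, and the data are hypothetical examples, not the session's framework.

import numpy as np

class InputGuard:
    """Flags API payloads whose features are statistical outliers
    relative to the training distribution (hypothetical guard)."""

    def __init__(self, train_features: np.ndarray, z_threshold: float = 4.0):
        self.mean = train_features.mean(axis=0)
        self.std = train_features.std(axis=0) + 1e-8  # avoid divide-by-zero
        self.z_threshold = z_threshold

    def check(self, x: np.ndarray) -> bool:
        """Return True if the payload looks in-distribution."""
        z = np.abs((x - self.mean) / self.std)
        return bool(z.max() <= self.z_threshold)

rng = np.random.default_rng(1)
train = rng.normal(size=(1000, 8))         # stand-in for training features
guard = InputGuard(train)

clean = rng.normal(size=8)
noisy = clean + 10.0 * rng.normal(size=8)  # crude injected-noise payload

print("clean payload accepted:", guard.check(clean))   # likely True
print("noisy payload accepted:", guard.check(noisy))   # likely False

In practice a guard like this would sit alongside schema validation, authentication, and rate limiting rather than replace them, since low-magnitude adversarial perturbations (like the FGSM example above) can pass purely statistical checks.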

David vonThenen

AI/ML Engineer | Keynote Speaker | Building Scalable AI Architectures & ML Solutions | Python, Go, C++

Long Beach, California, United States
