LLM Lockdown: Defending Against OWASP's Top 10 AI Nightmares

In the rapidly evolving world of AI, Large Language Models (LLMs) power innovative applications but also introduce unique security vulnerabilities that can lead to data breaches, ethical lapses, and operational chaos. This session dives into the security issues identified in the OWASP Top 10 for LLM Applications (2025 edition), providing a fast-paced, actionable overview of the most critical risks facing AI developers today.

We'll explore each of the top 10 vulnerabilities, with real-world examples and mitigation strategies. The list covers how crafted prompt injection inputs can hijack LLMs and expose sensitive information; how supply chain vulnerabilities in models and dependencies, and data poisoning from tampered datasets, can undermine security; and how improper output handling and misinformation attacks can lead to data leakage, unbounded resource consumption, and system corruption.
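To make one of those risks concrete, here is a minimal Kotlin sketch of the improper output handling problem. The LlmClient interface and all names are hypothetical illustrations, not part of the session material; the idea it shows is simply to treat model output as untrusted input and encode it before embedding it anywhere executable.

// Hypothetical stand-in for a real LLM client; interface and names are illustrative only.
fun interface LlmClient {
    fun complete(prompt: String): String
}

// Minimal HTML encoder; "&" must be escaped first so later entities aren't double-escaped.
fun htmlEscape(s: String): String = s
    .replace("&", "&amp;")
    .replace("<", "&lt;")
    .replace(">", "&gt;")
    .replace("\"", "&quot;")

// Treat the model's answer as untrusted: prompt injection can steer it,
// so encode it before embedding it in a page.
fun renderAnswer(llm: LlmClient, userQuestion: String): String {
    val answer = llm.complete(userQuestion) // may carry attacker-injected markup
    return "<div>${htmlEscape(answer)}</div>"
}

fun main() {
    // Simulate a hijacked model that returns a script payload.
    val hijacked = LlmClient { "<script>alert('pwned')</script>" }
    println(renderAnswer(hijacked, "What is prompt injection?"))
    // Prints the payload inert: <div>&lt;script&gt;alert('pwned')&lt;/script&gt;</div>
}

The same principle extends to the other sinks the session covers: parameterize SQL rather than interpolating model output, never shell out with raw completions, and cap token and request budgets to bound resource consumption.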

This session is ideal for developers, security professionals, and AI enthusiasts looking to stay ahead in the AI security landscape.
Attendees will leave with a clear roadmap to identify, assess, and mitigate these risks in their own projects, empowering them to build resilient, secure LLM applications that stand up to emerging threats.

David Lucas

Kotlin Server-Side Enthusiast

Columbus, Ohio, United States
