Session
Mind Games - Exploiting and Defending GenAI Applications
As organizations eagerly adopt generative AI capabilities into their applications, new attack surfaces and vulnerabilities are emerging that traditional app security approaches fail to address. In this presentation, we will examine the unique security challenges posed by Large Language Model (LLM) applications through the lens of the 2025 OWASP Top 10 for LLMs. Through live demonstrations and practical examples, we'll explore critical vulnerabilities including prompt injection, sensitive information disclosure, and system prompt leakage. Attendees will learn how attackers can manipulate LLMs to bypass security controls, access unauthorized information, and exploit excessive agency in GenAI applications. The session will also provide mitigation strategies for developers and security professionals working in this space. Whether you're developing GenAI applications or securing them, this presentation offers essential insights into this rapidly expanding area of application security.
Ken Smith
Director of Offensive Security Learning & Development at Praetorian
Cleveland, Ohio, United States
Links
Please note that Sessionize is not responsible for the accuracy or validity of the data provided by speakers. If you suspect this profile to be fake or spam, please let us know.
Jump to top