Prompt Injection, Poisoning & More: The Dark Side of LLMs
This talk provides an in-depth look at the key security risks in LLM-based systems and applications. I will explore threats such as prompt injection, data poisoning, model leakage, and other vulnerabilities, explaining their impact, demonstrating real-world examples, and discussing effective mitigation strategies.
Attendees will gain practical insights into securing LLM implementations across different use cases, from customer-facing applications like chatbots and AI assistants to internal tools and deeply integrated AI-driven processes. This session is designed for technical and leadership roles alike, equipping attendees with the knowledge to address security risks proactively, before they become critical threats.
Keno Dreßel
Principal Consultant & Head of AI @ SQUER
Munich, Germany