Session
Breaking RAG Systems: Exploiting Vulnerabilities & Hardening Your GenAI Applications
Retrieval Augmented Generation (RAG) systems are quickly becoming the backbone of enterprise GenAI applications, but they introduce unique security risks that most teams overlook. In this hands-on session, I'll demonstrate real vulnerabilities I've discovered in production RAG systems and show you exactly how to fix them. We'll start by breaking things - I'll perform live attacks including:
1. Hallucination injection that makes models confidently return false information
2. Prompt manipulation that bypasses business logic restrictions
3. Vector database poisoning that compromises RAG results
Then, we'll fix each vulnerability step-by-step:
1. Securing your vector database against poisoning attacks
2. Building multi-stage guardrails that catch manipulated inputs
3. Implementing robust retrieval validation techniques
You'll leave with practical code patterns and configurations you can immediately apply to your own RAG applications.

Abhinav Sharma
Site Reliability Engineer at KodeKloud | Microsoft MVP | GSOC @OpenSUSE | GitHub Campus Expert
Jaipur, India
Links
Please note that Sessionize is not responsible for the accuracy or validity of the data provided by speakers. If you suspect this profile to be fake or spam, please let us know.
Jump to top