RAG in High-Stakes Systems: Why Retrieval Quality Determines Trust, Safety, and Adoption
Retrieval-Augmented Generation (RAG) is often treated as a simple architectural add-on: retrieve documents, pass them to a model, and generate an answer. In high-stakes and regulated environments, this assumption breaks down quickly. Systems fail not because the language model is weak, but because retrieval quality, relevance, and grounding are poorly understood and insufficiently evaluated.
This session focuses on production-level RAG systems, drawing lessons from deploying AI in environments where incorrect outputs have real consequences. Rather than covering basic RAG patterns or tooling, the talk examines why retrieval is the dominant failure point, how semantic drift and confidence miscalibration emerge over time, and why “more context” often degrades trust instead of improving it.
Attendees will learn how to reason about retrieval quality, design evaluation strategies that go beyond answer accuracy, and build RAG systems that can safely abstain, explain their grounding, and earn user trust. The principles discussed are domain-agnostic and apply to any high-risk or complex data system, including regulated industries, enterprise platforms, and decision-support tooling.
Guru Lakshmi Priyanka Bodagala
AI & Health Informatics Product Engineer | Applied AI Systems & Data Interoperability
San Francisco, California, United States