RAG-atouille

Retrieval Augmented Generation (RAG), much like the artistry of Remy's Ratatouille, combines the brilliance of Large Language Models (LLMs) with the precision of information retrieval. Just as Remy layers flavors in his dish, RAG-fusion seamlessly blends results retrieved from vectorized documents, images, audio, and video to craft nuanced responses in AI-powered applications. Multi-index RAG, like Gusteau's secret spice blend, optimizes how data is organized, allowing LLMs to access a rich pantry of knowledge. And much like Anton Ego's discerning palate, RAG search ranking ensures that the most relevant insights rise to the top. The vector database, our culinary laboratory, refines this recipe into a delectable user experience.
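To give a taste of how RAG-fusion blends results and how ranking lets the best ones rise to the top, here is a minimal sketch of Reciprocal Rank Fusion (RRF), the merging technique commonly used in RAG-fusion; the document IDs and the k constant below are illustrative stand-ins, not material from the session itself.

```python
from collections import defaultdict

def reciprocal_rank_fusion(result_lists, k=60):
    """Merge several ranked lists of document IDs into one ranking.

    Each document scores 1 / (k + rank) per list it appears in,
    so documents retrieved by many queries float to the top.
    """
    scores = defaultdict(float)
    for results in result_lists:
        for rank, doc_id in enumerate(results, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Illustrative rankings returned for three reformulated queries.
fused = reciprocal_rank_fusion([
    ["doc_a", "doc_b", "doc_c"],
    ["doc_b", "doc_a", "doc_d"],
    ["doc_b", "doc_c", "doc_a"],
])
print(fused)  # doc_b and doc_a rise to the top
```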

This presentation focuses on the latest architectural pattern called Retrieval Augmented Generation (RAG). I'll start with a beginner-friendly introduction to why RAG is essential. Then, we'll dive into practical implementation and design considerations, leveraging various data stores, both vectorized and non-vectorized. Finally, I'll explore different RAG variations, including RAG-fusion, multi-index, and search ranking. Throughout, I'll share real-world examples from my own experience working with diverse customers and applications in this field, all with a pinch of Ratatouille (the movie).
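As an appetizer for the practical part, the sketch below shows the basic retrieve-augment-generate loop in plain Python; the bag-of-words "embedding", the sample documents, and the prompt template are simplified placeholders for the embedding model, vector database, and LLM call that a real application would use.

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding'; a real system would call an
    embedding model here instead."""
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Stand-in knowledge base; normally these live in a vector database.
documents = [
    "RAG retrieves relevant documents and feeds them to the LLM.",
    "Ratatouille is a layered Provençal vegetable dish.",
    "Vector databases store embeddings for similarity search.",
]

def build_prompt(question, top_k=2):
    # 1. Retrieve: rank stored documents against the question.
    q = embed(question)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    context = "\n".join(ranked[:top_k])
    # 2. Augment: ground the prompt in the retrieved context.
    # 3. Generate: hand this prompt to the LLM of your choice (call omitted).
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("How does RAG use a vector database?"))
```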

Soham Dasgupta

Cloud Solution Architect @ Microsoft

Utrecht, The Netherlands
