Session
Retrieval Augmented Generation (RAG) powered by Azure AI Search
Retrieval Augmented Generation (RAG) is an architecture that augments the capabilities of a Large Language Model (LLM), such as ChatGPT, by adding an information retrieval system that supplies grounding data. This integration gives you precise control over the grounding data the LLM uses when formulating responses.
In an enterprise setting, the RAG architecture constrains generative AI to your own enterprise content. That content can come from vectorized documents, images, and other data formats, provided embedding models are available for them.
In this session, we will explore the concept of RAG and learn how to implement a RAG architecture. The architecture comprises an application UX (a web application for user interaction), an application server or orchestrator (the integration and coordination layer), Azure AI Search as the information retrieval system, and Azure OpenAI as the LLM for generative AI. Attendees will leave with a solid understanding of RAG and how to put it into practice.
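As a rough preview of the pattern the session covers, the sketch below wires the retrieval and generation steps together: the orchestrator queries Azure AI Search for grounding passages and passes them to an Azure OpenAI chat deployment. It is a minimal sketch under stated assumptions, not the session's reference implementation; the index name, deployment name, environment variables, and the "content" field are illustrative placeholders.

```python
# Minimal RAG sketch (assumptions): endpoint/key env vars, the index name,
# the deployment name, and the "content" field are hypothetical.
import os

from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from openai import AzureOpenAI

# Information retrieval system: Azure AI Search over an existing index.
search_client = SearchClient(
    endpoint=os.environ["AZURE_SEARCH_ENDPOINT"],
    index_name="enterprise-docs",          # hypothetical index name
    credential=AzureKeyCredential(os.environ["AZURE_SEARCH_KEY"]),
)

# Generative AI: an Azure OpenAI chat deployment.
openai_client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_KEY"],
    api_version="2024-02-01",
)

def answer(question: str) -> str:
    # 1. Retrieve grounding data from the search index.
    results = search_client.search(search_text=question, top=3)
    sources = "\n".join(doc["content"] for doc in results)

    # 2. Ask the LLM to answer using only the retrieved sources.
    response = openai_client.chat.completions.create(
        model="gpt-4o",                    # hypothetical deployment name
        messages=[
            {"role": "system",
             "content": "Answer using only the sources provided."},
            {"role": "user",
             "content": f"Sources:\n{sources}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

print(answer("What is our travel reimbursement policy?"))
```

In the session's full architecture, the web app UX would call this orchestration layer rather than the clients directly, and hybrid or vector queries could replace the plain keyword search shown here.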
Juan Pablo Garcia Gonzalez
Solution Architect @ AWS Startups
Boston, Massachusetts, United States