Session
Beyond ChatGPT: RAG and Fine-Tuning
In most real-world applications, ChatGPT alone is not enough: businesses want to use their own private documents to get factually accurate answers. Over the past year, two techniques have emerged to address this need.
Retrieval Augmented Generation (RAG) uses text embeddings to find relevant snippets and inject them into the prompt of a Large Language Model (LLM), which then elaborates on them. Fine-tuning, by contrast, updates the weights of the LLM through additional training on the target documents. Since training LLMs is notoriously costly, fine-tuning usually relies on cost-saving techniques such as low-rank adaptation (LoRA) and quantization.
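To make the two techniques concrete, here is a minimal RAG sketch in Python. It assumes the sentence-transformers library for embeddings; the toy documents, the question, and the ask_llm helper are hypothetical placeholders for your own corpus and LLM client.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Hypothetical private corpus and user question.
documents = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support is available Monday to Friday, 9am to 6pm CET.",
    "Premium plans include priority support and a dedicated manager.",
]
question = "How long do customers have to return a product?"

# Embed the corpus once and the question at query time.
model = SentenceTransformer("all-MiniLM-L6-v2")
doc_vectors = model.encode(documents, normalize_embeddings=True)
query_vector = model.encode([question], normalize_embeddings=True)[0]

# On normalized vectors, cosine similarity is a plain dot product;
# keep the top-k most relevant snippets.
scores = doc_vectors @ query_vector
top_k = np.argsort(scores)[::-1][:2]
context = "\n".join(documents[i] for i in top_k)

# Inject the retrieved snippets into the prompt for the LLM.
prompt = (
    "Answer the question using only the context below.\n\n"
    f"Context:\n{context}\n\nQuestion: {question}"
)
# answer = ask_llm(prompt)  # hypothetical call to your LLM of choice
```

And here is a sketch of parameter-efficient fine-tuning with low-rank adaptation, assuming the Hugging Face transformers and peft libraries; the base model and hyperparameters are illustrative, not the settings discussed in the lecture.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("gpt2")

# LoRA trains small rank-r update matrices instead of the full weights.
config = LoraConfig(
    r=8,                        # rank of the update matrices
    lora_alpha=16,              # scaling factor for the adapters
    target_modules=["c_attn"],  # attention projection in GPT-2
    fan_in_fan_out=True,        # GPT-2 stores weights as Conv1D layers
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # typically well under 1% of the base model
# `model` can now be trained on the target documents with a standard
# training loop or the transformers Trainer.
```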
This lecture explores both RAG and fine-tuning, covering the latest methods for getting the best results from each. We will weigh the pros and cons of the two approaches and review real-world applications of both.
Attendees will leave with a thorough understanding of the primary techniques used to enhance and ground LLM knowledge, along with insights into their main industry applications.
First delivered as a guest lecture at the University of Pavia (UniPV), Pavia, Italy, in 2024
Emanuele Fabbiani
Head of AI at xtream, Professor at Catholic University of Milan
Milan, Italy