
Using ReACT + RAG to augment your LLM-based applications

Large Language Models (LLMs) have some limitations, such as being unable to answer questions about data they weren't trained on and hallucinating fake or misleading information. RAG (Retrieval-Augmented Generation) is the concept of retrieving relevant data to augment your prompt to the LLM, allowing it to generate more accurate responses and reducing hallucinations. ReACT (Reason and Act) is a prompting technique that guides an LLM to verbally express its reasoning and adapt its plan based on data from external sources. In this talk, we'll learn about RAG and ReACT and how using them together can extend your LLM-based applications and improve their accuracy, illustrated with a sample app.
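As a rough illustration of how the two ideas compose (not the talk's actual sample app), the Python sketch below assumes a hypothetical call_llm helper in place of a real model API and a toy keyword retriever in place of a vector store: the ReACT loop lets the model state a Thought and an Action, and the RAG step feeds the retrieved text back as an Observation before the model answers.

# Hedged sketch of ReACT + RAG; call_llm is a scripted placeholder so it runs offline.

def call_llm(prompt: str) -> str:
    # Placeholder for a real chat/completion API call.
    if "Observation:" in prompt:
        return "Thought: the retrieved text answers the question.\nFinal Answer: London"
    return "Thought: I need external data.\nAction: search[Cymbal Bank new branch]"

def retrieve(query: str, documents: list[str]) -> list[str]:
    # RAG step: naive keyword-overlap retrieval; real apps typically use embeddings and a vector store.
    terms = set(query.lower().split())
    return sorted(documents, key=lambda d: -len(terms & set(d.lower().split())))[:2]

def react_rag(question: str, documents: list[str], max_steps: int = 3) -> str:
    # ReACT step: the model "thinks out loud", picks an action, and the observation
    # from that action (retrieved text) is appended to the prompt for the next turn.
    prompt = f"Question: {question}\n"
    for _ in range(max_steps):
        reply = call_llm(prompt)
        prompt += reply + "\n"
        if "Final Answer:" in reply:
            return reply.split("Final Answer:", 1)[1].strip()
        if "Action: search[" in reply:
            query = reply.split("Action: search[", 1)[1].split("]", 1)[0]
            observation = " ".join(retrieve(query, documents))
            prompt += f"Observation: {observation}\n"
    return "No answer within step limit."

docs = ["Cymbal Bank opened a new branch in 2024.", "The new branch is in London."]
print(react_rag("Where is Cymbal Bank's new branch?", docs))  # prints "London"

The document names (Cymbal Bank) and the search[...] action format are illustrative assumptions; the point is only that reasoning, acting, and retrieval interleave in one loop.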

Mete Atamel

Software Engineer and Developer Advocate at Google

London, United Kingdom
