Session
Harnessing Large Language Models with Databricks: Advanced RAG and Multi-Stage Reasoning
Dive into the world of Large Language Models (LLMs) and learn how to leverage Databricks for advanced AI applications. This session will explore the principles of Retrieval-Augmented Generation (RAG) and multi-stage reasoning workflows with LLMs. Participants will gain insights into vector search strategies, vector databases, and best practices for improving search-retrieval performance. We will also cover the integration of LLMs into complex workflows using tools like LangChain, enabling tasks that require multi-stage reasoning and agent-based interactions.
Learning Objectives:
By the end of this session, participants will be able to:
- Explain vector search strategies and evaluate search results.
- Describe the utility and applications of vector databases.
- Implement Retrieval-Augmented Generation workflows using vector search and LLMs (illustrated in the sketch below).
- Apply LangChain to build complex logic flows with agents for multi-stage reasoning.
- Utilize best practices for enhancing search-retrieval performance and integrating LLMs into larger workflows.
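As a rough illustration of the kind of retrieval-augmented workflow the session covers, the sketch below ranks a handful of documents by cosine similarity and assembles a context-augmented prompt. The embedding function and the final generation step are hypothetical placeholders, not Databricks or LangChain APIs; a production workflow would swap in a real embedding model, a vector database, and an LLM endpoint.

```python
# Minimal RAG sketch: vector search over an in-memory index plus prompt assembly.
# embed_text and the final LLM call are hypothetical placeholders.
from math import sqrt

def embed_text(text: str) -> list[float]:
    # Placeholder embedding: normalized character-frequency vector.
    # A real workflow would call an embedding model instead.
    alphabet = "abcdefghijklmnopqrstuvwxyz"
    counts = [text.lower().count(ch) for ch in alphabet]
    norm = sqrt(sum(c * c for c in counts)) or 1.0
    return [c / norm for c in counts]

def cosine_similarity(a: list[float], b: list[float]) -> float:
    # Vectors are already unit-normalized, so the dot product is the cosine.
    return sum(x * y for x, y in zip(a, b))

# A toy in-memory "vector database": documents stored with their embeddings.
documents = [
    "Vector databases index embeddings for fast similarity search.",
    "LangChain agents can chain multiple reasoning steps together.",
    "Databricks provides managed infrastructure for LLM workloads.",
]
index = [(doc, embed_text(doc)) for doc in documents]

def retrieve(query: str, k: int = 2) -> list[str]:
    # Rank stored documents by similarity to the query embedding.
    query_vec = embed_text(query)
    ranked = sorted(index, key=lambda item: cosine_similarity(query_vec, item[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

def answer(query: str) -> str:
    # Augment the prompt with retrieved context before generation.
    context = "\n".join(retrieve(query))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return prompt  # A real workflow would send this prompt to an LLM endpoint.

print(answer("How do vector databases support retrieval?"))
```

The same retrieve-then-generate pattern underlies the LangChain-based, multi-stage workflows discussed in the session; only the components behind each step change.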
Rajaniesh Kaushikk
Director Technology | Microsoft MVP | Databricks MVP | Databricks Champion | MCT | Author | Blogger
Dunellen, New Jersey, United States