Session
Gemini, Claude, GPT-4o and on-prem Deepseek walk into a bar – how to build a multi-LLM AI agent
This focused technical session covers the core principles of building an AI agent that uses multiple Large Language Models (LLMs) with Langchain. We'll look at a conceptual approach in which the agent routes each task to Gemini, Claude, GPT-4o, or an on-premise Deepseek model, depending on what the task requires. Along the way we'll highlight the key integration points and practical strategies you can absorb within the session's timeframe.
The session will highlight:
+ Key architectural approaches to multi-LLM systems and the reasons for adopting them, including the flexibility of swapping models (adopting newer models, changing providers, or replacing models that are no longer available).
+ Core Langchain elements for making different models work together behind a common interface (a short sketch follows this list).
+ Strategies for designing how your agent chooses the right LLM, including an introduction to adaptive selection techniques.
+ A walkthrough of key code snippets illustrating the Langchain integration structure.
+ Straightforward methods for including on-premise models like Deepseek and a look at why pure on-prem solutions can be a good choice.
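To make the integration concrete, here is a minimal sketch of the kind of setup the session builds on: every provider sits behind Langchain's common chat-model interface, and an on-premise Deepseek is reached through an OpenAI-compatible endpoint. The model IDs, the local URL, and the serving stack named below are illustrative assumptions, not part of the session materials.

```python
# Illustrative sketch: the provider packages are the standard Langchain
# integrations; model IDs and the local endpoint are assumptions.
# Hosted providers read their API keys from the usual environment variables.
from langchain_openai import ChatOpenAI
from langchain_anthropic import ChatAnthropic
from langchain_google_genai import ChatGoogleGenerativeAI

# Hosted models, all exposed through the same chat-model interface.
gemini = ChatGoogleGenerativeAI(model="gemini-1.5-pro")
claude = ChatAnthropic(model="claude-3-5-sonnet-latest")
gpt4o = ChatOpenAI(model="gpt-4o")

# An on-premise Deepseek served behind an OpenAI-compatible API
# (for example via vLLM or Ollama) can reuse the same wrapper.
deepseek = ChatOpenAI(
    model="deepseek-r1",                  # whatever name your local server exposes
    base_url="http://localhost:8000/v1",  # assumed local endpoint
    api_key="unused",                     # local servers typically ignore the key
)

MODELS = {"gemini": gemini, "claude": claude, "gpt4o": gpt4o, "deepseek": deepseek}
```

Because every handle implements the same interface, swapping or retiring a model only touches this registry; the rest of the agent is unchanged.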
This session provides the foundational knowledge and a practical starting point for building effective multi-LLM AI agents with Langchain. We'll concentrate on the key model-selection and integration strategies that can realistically be covered in a short session.
Key Learnings for Attendees:
+ Understand common designs and uses for multi-LLM AI agents.
+ Learn about core Langchain techniques for connecting to and managing different LLM providers (like Gemini, Claude, GPT-4o, and a local Deepseek instance).
+ Understand practical strategies for setting up logic to pick the best LLM for a given task or context (see the routing sketch after this list).
+ Think about how to dynamically select LLMs to keep answers high-quality and relevant, especially with fast-changing topics or sensitive information.
+ See a conceptual code walkthrough of a multi-LLM agent using Langchain, focusing on the main integration points and overall structure, not every tiny detail.
+ Get insights into adding on-premise LLMs (like Deepseek) to a multi-LLM setup and understand when fully on-prem solutions make sense.
+ Explore approaches for building adaptable AI agents that use the strengths of different LLMs and can handle new information, all within a focused session.
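To illustrate the selection-logic learnings above, here is one hedged sketch of task-based routing that reuses the model handles from the earlier snippet. The rules and field names (task, prompt, sensitive) are placeholders for whatever policy (a classifier, request metadata, or cost and privacy rules) the session explores in more depth.

```python
from langchain_core.runnables import RunnableLambda

def pick_model(request: dict):
    """Return one of the chat models defined earlier, based on the request.

    The rules below are illustrative placeholders, not a recommended policy.
    """
    if request.get("sensitive"):
        return deepseek   # keep confidential data on the on-prem model
    if request["task"] == "long-context-review":
        return claude     # example: favour a large-context model
    if request["task"] == "multimodal":
        return gemini     # example: tasks that may include images
    return gpt4o          # general-purpose default

# Wrapping the selection in a Runnable lets it compose with the rest of a chain.
router = RunnableLambda(lambda req: pick_model(req).invoke(req["prompt"]))

# Example call (illustrative):
# reply = router.invoke({"task": "multimodal", "prompt": "Describe this chart...", "sensitive": False})
```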
Target Audience & Prerequisites: This session is an introduction to the fundamentals of multi-LLM agent development with Langchain, aimed primarily at AI developers, AI professionals, and AI architects. A foundational understanding of Large Language Models and basic Python programming is essential for all attendees. We won't cover AI or LLM basics in depth; the focus is on the multi-LLM agent architecture and its Langchain implementation. Attendees with extensive prior experience in Langchain or advanced AI agent development should note this foundational scope, though they are of course welcome.