Agentic Fabric - Create your own AI agent within Fabric without F64
While Fabric introduces advanced AI capabilities like Data Agents, access often requires premium capacities (F64+). This session demonstrates practical techniques for building your own custom AI agents within Fabric using standard capacities (without F64). We focus on leveraging core Fabric components like Notebooks and Pipelines to achieve useful agentic outcomes, such as automating tasks and interacting with Fabric artifacts.
Explore how to combine these tools effectively to create cost-effective, task-specific agents. We will discuss the benefits of this approach, but also highlight potential pitfalls to avoid. Grounded in years of practical experience managing and developing data warehouses, this session focuses on tangible benefits and avoids marketing buzzwords. We will explain the difference between simpler agentic workflows and fully developed AI agents, helping you choose the right approach based on real-world scenarios. We will also illustrate specific patterns and discuss potential architectures for extending these techniques to handle more complex automation scenarios within Fabric.
Key Takeaways:
+ Understand how to build simple AI agents and implement agentic workflows using standard Fabric components (Notebooks, Pipelines).
+ Identify scenarios suitable for custom AI agents vs. simpler workflows or standard automation.
+ Recognize potential pitfalls when building custom agents in Fabric (e.g., complexity, error handling, capacity usage).
+ Articulate the benefits of custom agents for specific automation needs within Fabric.
+ Learn techniques for orchestrating tasks across services effectively.
+ Explore patterns for building cost-effective automation solutions within Fabric.
+ Gain insights into integrating custom solutions with existing workflows and alerts.
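As a taste of what the session builds, here is a minimal, library-free sketch of the plan-act-check loop a notebook-based agent can run. `call_llm` and `run_action` are hypothetical stubs standing in for a real model endpoint and a real Fabric action (such as a pipeline trigger); they are not Fabric APIs.

```python
import json

def call_llm(prompt: str) -> str:
    """Stub for a chat-completion call; replace with a real model client."""
    # A real implementation would send `prompt` to a model endpoint and
    # return the model's JSON plan. Here the plan is hard-wired.
    return json.dumps({"action": "refresh_table", "target": "sales_daily"})

def run_action(action: str, target: str) -> bool:
    """Stub tool: in Fabric this could trigger a pipeline or notebook activity."""
    print(f"running {action} on {target}")
    return True

def agent_step(task: str) -> bool:
    """One plan-act-check iteration of a simple agentic workflow."""
    plan = json.loads(call_llm(f"Plan the next action for: {task}"))
    return run_action(plan["action"], plan["target"])

agent_step("keep the sales mart fresh")
```

The point of the pattern is that the notebook, not a premium Data Agent, owns the loop: the model only proposes the next step, and plain Python decides whether and how to execute it.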
Beginner's guide to building your first VS Code AI agent extension
This session is a beginner-friendly introduction to building your first AI agent as a VS Code extension.
We'll start with the basics of setting up an extension project and explore some VS Code extension fundamentals relevant to creating a simple AI agent.
You'll see a code-level walkthrough of a sample AI agent extension that helps improve our company's application.
We'll focus on:
+ the blueprint of this agent
+ how it interacts with VS Code
+ how it can be designed to enable features like interacting with a Retrieval-Augmented Generation (RAG) system or facilitating a "chat with your database" experience directly within the editor
+ how to leverage Anthropic's Model Context Protocol (MCP) for managing context with AI models
Key aspects of the development process will be discussed, and a GitHub repository with the example code will be shared. This session aims to give you a clear starting point and the confidence to begin experimenting with your own VS Code AI agent ideas, incorporating these advanced AI interaction patterns, even if you're new to extension development.
Key Learnings for Attendees:
+ Get an introduction to VS Code extension development concepts.
+ Learn the initial steps to set up a new VS Code extension project for an AI agent.
+ Understand the basic structure of an AI agent extension that can interact with RAG systems and databases, contextualized by a real-world example.
+ See a demonstration of an AI agent extension operating within VS Code, showcasing potential "chat with your data" functionalities.
+ Learn how to leverage Anthropic's Model Context Protocol (MCP) for effective context management in AI agents.
+ Discover helpful tips for starting your journey in building VS Code AI agent extensions with these capabilities.
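The extension itself is TypeScript, but the core "chat with your database" tool it wraps can be sketched language-independently. Below is a minimal Python sketch using an in-memory SQLite database; `question_to_sql` is a hypothetical stub for the LLM step that translates natural language into SQL.

```python
import sqlite3

def question_to_sql(question: str) -> str:
    """Stub for the LLM step that turns a user question into SQL."""
    # A real agent would ask the model for this; here it is hard-wired.
    return "SELECT name, amount FROM orders WHERE amount > 100"

def chat_with_database(question: str, conn: sqlite3.Connection) -> list:
    """Run the generated SQL and return rows the agent can summarise."""
    sql = question_to_sql(question)
    return conn.execute(sql).fetchall()

# Demo with an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (name TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("widget", 250.0), ("gadget", 40.0)])
rows = chat_with_database("Which orders are above 100?", conn)
print(rows)  # [('widget', 250.0)]
```

In the extension, the same two steps (translate, then execute) sit behind a chat participant or an MCP tool, so the editor never has to know SQL itself.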
Gemini, Claude, GPT-4o and on-prem Deepseek walk into a bar – how to build a multi-LLM AI agent
This focused technical session shows you the core principles for building an AI agent that uses multiple Large Language Models (LLMs) with Langchain. We'll look at a conceptual way to have the agent send tasks to Gemini, Claude, GPT-4o, or an on-premise Deepseek model, depending on the task. We'll highlight key integration points and practical strategies you can grasp within the session's timeframe.
The session will highlight:
+ Key architectural approaches and reasons for using multi-LLM systems, including the flexibility of swapping models (adopting newer models, changing providers, or replacing models that are no longer available).
+ Core Langchain elements for making different models work together.
+ Strategies for designing how your agent chooses the right LLM, including an introduction to adaptive selection techniques.
+ A walkthrough of key code snippets illustrating the Langchain integration structure.
+ Straightforward methods for including on-premise models like Deepseek and a look at why pure on-prem solutions can be a good choice.
This session provides the foundational knowledge and a practical starting point for building effective multi-LLM AI agents with Langchain. We'll concentrate on key strategies for model selection and integration that fit into a short session.
Key Learnings for Attendees:
+ Understand common designs and uses for multi-LLM AI agents.
+ Learn about core Langchain techniques for connecting to and managing different LLM providers (like Gemini, Claude, GPT-4o, and a local Deepseek instance).
+ Understand practical strategies for setting up logic to pick the best LLM for a given task or context.
+ Think about how to dynamically select LLMs to keep answers high-quality and relevant, especially with fast-changing topics or sensitive information.
+ See a conceptual code walkthrough of a multi-LLM agent using Langchain, focusing on the main integration points and overall structure, not every tiny detail.
+ Get insights into adding on-premise LLMs (like Deepseek) to a multi-LLM setup and understand when fully on-prem solutions make sense.
+ Explore approaches for building adaptable AI agents that use the strengths of different LLMs and can handle new information, all within a focused session.
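The routing idea behind such an agent can be sketched without any framework; in the session it is implemented with Langchain's chat-model abstractions. The model names and routing rules below are illustrative assumptions, not a recommendation.

```python
# Library-free sketch of task-based LLM routing. In practice each entry
# would map to a Langchain chat-model instance instead of a string.
ROUTES = {
    "code": "claude",                 # code-heavy tasks
    "vision": "gpt-4o",               # multimodal input
    "long-context": "gemini",         # very large documents
    "sensitive": "deepseek-on-prem",  # data that must stay on-prem
}

def pick_model(task_type: str, default: str = "gpt-4o") -> str:
    """Choose an LLM for a task; unknown task types fall back to the default."""
    return ROUTES.get(task_type, default)

def dispatch(task_type: str, prompt: str) -> str:
    """Stub dispatcher: a real agent would call the chosen provider's client here."""
    model = pick_model(task_type)
    return f"[{model}] would answer: {prompt!r}"

print(dispatch("sensitive", "Summarise this internal report"))
```

Keeping the routing table separate from the dispatch logic is what makes model swaps cheap: retiring a provider or adding a newer model is a one-line change.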
Target Audience & Prerequisites: This session introduces the fundamentals of multi-LLM agent creation using Langchain. It is primarily aimed at AI developers, AI professionals, and AI architects. A foundational understanding of Large Language Models and basic Python programming is essential. We won't cover AI or LLM basics in depth; the primary focus is the multi-LLM agent architecture and its Langchain implementation. Attendees with extensive prior experience in Langchain or advanced AI agent development should note this foundational focus, though they are of course welcome.
Practical AI Agent Enhancement: A Low-Effort, High-Impact Guide
This session is a practical, hands-on guide for developers who want to significantly upgrade their AI coding agents without reinventing the wheel. We'll focus on smart integrations of readily available tools, APIs, and open-source libraries. Attendees will learn how to add and lightly modify these components.
Attendees will learn to integrate features such as:
+ deep research capabilities (e.g. open deep research)
+ leveraging an open-source, high-performance coding agent as part of a multi-agent setup (OpenHands)
+ Model Context Protocol (MCP) for standardized tool and data access
+ a simple alternative to MCP (e.g. LangChain)
+ internet search (e.g. Perplexity)
+ flexible management and switching between different LLMs (e.g. by using LangChain)
+ a simple alternative to RAG
+ basic evaluation to track and improve agent effectiveness
The GitHub repository will be shared.
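As one illustration of a "simple alternative to RAG" from the list above, here is a sketch that scores documents by keyword overlap with the question and stuffs the best match into the prompt. It is a deliberately naive stand-in for vector retrieval, not the repository's actual code.

```python
# Naive retrieval: score each document by how many question words it shares,
# then build the prompt from the best match. No embeddings, no vector store.
def score(question: str, doc: str) -> int:
    """Count the words the question and document have in common."""
    return len(set(question.lower().split()) & set(doc.lower().split()))

def best_context(question: str, docs: list) -> str:
    """Return the document with the highest keyword overlap."""
    return max(docs, key=lambda d: score(question, d))

docs = [
    "Invoices are archived after 90 days in the billing system.",
    "The deployment pipeline runs nightly at 02:00 UTC.",
]
context = best_context("When does the deployment pipeline run?", docs)
prompt = f"Answer using this context:\n{context}\n\nQuestion: ..."
print(context)
```

For small, stable document sets this is often good enough to ground an agent's answers, and it keeps the whole stack dependency-free.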
Agentic BI - make your lakehouse and data warehouse AI-agent-ready and boost them with AI agents
AI agents need well-structured and reliable data to function effectively within Microsoft Fabric. Preparing your data assets correctly opens the door for AI agents to perform tasks like automated data quality monitoring, intelligent alerting on anomalies, or metadata enrichment. This session focuses on the practical steps required to prepare existing Lakehouses and Data Warehouses for interaction with custom AI agents, diving into tangible configuration and data structuring techniques.
Learn how to identify key information for an agent, how to expose it correctly within Fabric, and how to ensure the underlying data quality meets the requirements for automated processing. We will cover practical considerations based on real-world consulting experience, focusing on enabling agents to reliably understand and utilize your data assets.
Explore practical topics including:
+ Essential Metadata: Identifying, structuring, and exposing key metadata (like clear descriptions, tags, lineage hints) that agents can use for discovery and understanding within Fabric.
+ Data Quality for Agents: Implementing practical data quality checks and monitoring within Fabric pipelines or notebooks to ensure data consistency agents can trust.
+ Agent-Friendly Structures: Examples of organizing data within Lakehouse zones or designing Data Warehouse tables for easier parsing and access by automated processes.
+ Access & Permissions: Configuring appropriate security and access patterns within Fabric specifically for service principals or identities used by agents.
+ Observability: Key logging and monitoring practices to track agent activity and troubleshoot interactions with your data platforms.
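To make the "essential metadata" point concrete, here is a sketch of a metadata record an agent could read for discovery, rendered as compact text for the model's context. The field names are illustrative assumptions, not a Fabric API.

```python
# Hypothetical metadata record for a warehouse table; the shape is a
# pattern sketch, not a Fabric schema.
table_meta = {
    "name": "dbo.sales_daily",
    "description": "One row per store per day; loaded nightly from the POS feed.",
    "tags": ["gold", "certified"],
    "columns": {
        "store_id": "Business key of the store (joins to dim_store).",
        "sales_amount": "Net sales in EUR, tax excluded.",
    },
    "freshness_sla_hours": 24,
}

def describe_for_agent(meta: dict) -> str:
    """Render the metadata as compact text an LLM agent can put in its context."""
    cols = "; ".join(f"{c}: {d}" for c, d in meta["columns"].items())
    return f"{meta['name']} - {meta['description']} Columns: {cols}"

print(describe_for_agent(table_meta))
```

The clearer and more uniform these descriptions are, the less the agent has to guess about join keys, units, or freshness before acting on a table.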
Key Takeaways:
+ Identify critical information and metadata needed by AI agents interacting with Fabric data.
+ Learn practical methods for improving data quality and consistency for automated use.
+ Understand how data structure choices impact AI agent accessibility and performance.
+ Gain insights into configuring secure and appropriate access for agents in Fabric.
+ Discover practical techniques for monitoring agent interactions with Lakehouses and Data Warehouses.
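A data-quality gate of the kind described above can be as small as a few explicit checks an agent runs before acting on a table. The checks and field names below are illustrative assumptions, not a prescribed rule set.

```python
def quality_gate(rows: list) -> dict:
    """Run simple trust checks on a batch of rows before an agent uses them (sketch)."""
    issues = []
    if not rows:
        issues.append("table is empty")
    if any(r.get("sales_amount") is None for r in rows):
        issues.append("null sales_amount")
    if any((r.get("sales_amount") or 0) < 0 for r in rows):
        issues.append("negative sales_amount")
    return {"passed": not issues, "issues": issues}

sample = [{"store_id": 1, "sales_amount": 120.0},
          {"store_id": 2, "sales_amount": -5.0}]
print(quality_gate(sample))  # {'passed': False, 'issues': ['negative sales_amount']}
```

Wired into a Fabric pipeline or notebook, a failed gate can stop the agent and raise an alert instead of letting it act on bad data.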