Local Reasoning Model Deployment with AutoGen Between Ollama and LM Studio

In this session, we will explore how to deploy local reasoning models using AutoGen for .NET, orchestrating interactions between models served by Ollama and LM Studio. We'll cover the fundamentals of setting up and configuring both tools, optimizing local inference for reasoning tasks, and using AutoGen to coordinate multi-agent workflows.

Through live demonstrations, attendees will learn how to integrate locally hosted LLMs into efficient, private, and cost-effective AI applications.
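As background for the session: both Ollama and LM Studio expose OpenAI-compatible HTTP endpoints on localhost by default, which is what lets a framework like AutoGen treat them interchangeably. The sketch below (in Python for brevity, though the session uses AutoGen for .NET) builds minimal client configuration blocks for each tool; the model names are placeholders, and the ports shown are the tools' documented defaults.

```python
# Default OpenAI-compatible endpoints exposed by the two local servers.
OLLAMA_BASE_URL = "http://localhost:11434/v1"   # Ollama's default port
LMSTUDIO_BASE_URL = "http://localhost:1234/v1"  # LM Studio's default local server port

def local_llm_config(base_url: str, model: str) -> dict:
    """Build a provider-agnostic config block for a locally hosted model.

    Local servers typically ignore the API key, but most OpenAI-style
    clients require a non-empty string, hence the placeholder value.
    """
    return {
        "base_url": base_url,
        "api_key": "not-needed",  # local servers accept any value
        "model": model,
    }

# Hypothetical model names -- substitute whatever you have pulled locally.
ollama_cfg = local_llm_config(OLLAMA_BASE_URL, "deepseek-r1:8b")
lmstudio_cfg = local_llm_config(LMSTUDIO_BASE_URL, "qwen2.5-7b-instruct")
```

Because both configs differ only in `base_url` and `model`, the same agent code can switch between the two backends without other changes.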

Charunthon Limseelo

Microsoft Learn Student Ambassadors
