Session

Use an LLM 'off the grid' with local models

We've all got used to using LLMs in our developer workflow - from asking ChatGPT what tools and libraries to use, to getting GitHub Copilot to generate code for us. Great when you are online, but not so useful when you are offline, like on the London Underground, on a plane with no Wi-Fi, or in the middle of nowhere. But what if there was another way?

In this session, Jim will introduce offline LLMs using SLMs - small language models. We'll look at how you can run models such as Microsoft's Phi-3.5 locally and add them to your developer workflow. We'll compare offline and online performance, both speed and quality, and touch on privacy and other considerations. We'll also look at hardware requirements, since we don't all have the latest GPUs to hand, showing how these models run not only on very powerful laptops but also on small devices like a Raspberry Pi, giving you an LLM on your home network.
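As a taste of what local inference can look like, here is a minimal sketch in Python using the Hugging Face transformers library; it is illustrative rather than part of the session materials, and assumes the microsoft/Phi-3.5-mini-instruct checkpoint plus enough memory (or a GPU) to hold it.

    # Minimal local-inference sketch (assumes transformers, torch and accelerate
    # are installed and the Phi-3.5 mini weights have been downloaded/cached).
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "microsoft/Phi-3.5-mini-instruct"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    # Ask the local model a typical developer-workflow question; once the
    # weights are cached this runs entirely offline.
    prompt = "Suggest a Python library for parsing YAML and show a short example."
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=200)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Tools like Ollama or llama.cpp offer a similar experience with quantized models, which is what makes smaller devices such as a Raspberry Pi feasible hosts.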

By the end of this session you will understand the technical differences between small language models and large language models, see how you can use them, and know their advantages and limitations.

Jim Bennett

Principal Developer Advocate at Galileo

Redmond, Washington, United States

