Session

Deploy your local LLM without leaving your comfort zone

The promise of running powerful LLMs like Llama 2 or Mistral locally is compelling, offering unparalleled data privacy and cost control. However, the reality often involves navigating a maze of environment setup, CUDA drivers, and platform-specific quirks that can derail development.

In this session, we present a solution: the Docker Model Runner. We will explore how containerization solves the core challenges of local LLM deployment by providing isolated, reproducible, and portable environments. You will learn how to use this tool to pull, configure, and run state-of-the-art models with a simple CLI, abstracting away the underlying complexity.
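By way of illustration, here is a minimal sketch of what that workflow can look like when driven from Python. The `docker model pull` and `docker model run` subcommands and the `ai/mistral` model name are assumptions standing in for whatever the Model Runner CLI and catalogue offer in your environment:

```python
import subprocess

# Pull a model image through the Docker Model Runner CLI
# (subcommand and model name are illustrative assumptions).
subprocess.run(["docker", "model", "pull", "ai/mistral"], check=True)

# Run a one-shot prompt against the pulled model and capture the reply.
result = subprocess.run(
    ["docker", "model", "run", "ai/mistral", "Summarise the benefits of running LLMs locally."],
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout)
```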

We will cover practical use cases, from rapid prototyping and development to building scalable, containerized inference services. Attendees will leave with a clear, practical framework for integrating private LLMs into their applications, finally making local deployment as accessible as it is powerful.
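For the application-integration side, a hedged sketch of what calling such a locally served model might look like, assuming the Model Runner exposes an OpenAI-compatible endpoint on the host; the port, path, and model name below are assumptions to be checked against your own setup:

```python
from openai import OpenAI

# Point the standard OpenAI client at the local, OpenAI-compatible endpoint
# (base URL and port are assumptions; no real API key is needed locally).
client = OpenAI(base_url="http://localhost:12434/engines/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="ai/mistral",  # illustrative model name
    messages=[{"role": "user", "content": "Explain how local inference improves data privacy."}],
)
print(response.choices[0].message.content)
```

Because the endpoint speaks the same protocol as hosted APIs, an application can switch between a private local model and a cloud provider by changing only the base URL and model name.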

Thierry Njike

Research Engineer @ CETIC

Brussels, Belgium
