
Distributed Fine Tuning of Open LLMs on Kubernetes

Open LLMs are a family of ML models that can be fine-tuned on your own custom dataset to perform a variety of tasks, such as text generation, translation, and summarization. Combined with Kubernetes, they let you unlock open source AI innovation with scalability, reliability, and ease of management.
In this session, we will take a deep dive into how you can fine-tune open LLMs on a Kubernetes cluster. We will also explore options for serving LLMs on Kubernetes with accelerators and open source tools.

Abdel Sghiouar

Cloud Developer Advocate


