Working with Gemma and Open LLMs on Google Kubernetes Engine

The Gemma family of open models can be fine-tuned on your own dataset to perform a variety of tasks, such as text generation, translation, and summarization. Combined with Kubernetes, you can unlock open-source AI innovation with scalability, reliability, and ease of management.

In this workshop, you will learn through a guided hands-on exercise how to work with Gemma and fine-tune it on a Kubernetes cluster. We will also explore options for serving Gemma on Kubernetes with accelerators and open-source tools.

Abdel Sghiouar

Cloud Developer Advocate
