Session
LLM on a budget - finetune our own LLM Model
As more open-source LLMs are released, we want to test these models on our own use cases. The hardware required to run and finetune such large models is often the limiting factor.
We will see how to build prompts for training, and explore options like distillation, quantization, and low-rank adaptation to reduce memory usage during training and prediction.
These techniques are the key to loading such models on consumer hardware. Even finetuning on free Google Colab notebooks is possible.
This session will explain how these techniques work and how you can finetune your own LLM.
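As a rough back-of-the-envelope sketch of why quantization and low-rank adaptation matter for consumer hardware (illustrative numbers only, not figures from the session):

```python
def model_memory_gb(n_params: float, bits: int) -> float:
    """Approximate weight memory in GB for n_params parameters stored at the given precision."""
    return n_params * bits / 8 / 1e9

# A hypothetical 7B-parameter model:
fp16_gb = model_memory_gb(7e9, 16)  # ~14 GB in fp16 -- beyond most consumer GPUs
int4_gb = model_memory_gb(7e9, 4)   # ~3.5 GB in 4-bit -- fits on a free Colab GPU

def lora_trainable_params(d: int, k: int, r: int) -> int:
    """LoRA freezes a d x k weight matrix and trains two low-rank factors (d x r and r x k)."""
    return r * (d + k)

# Full finetuning of one 4096 x 4096 attention matrix vs. a rank-8 LoRA adapter:
full = 4096 * 4096
lora = lora_trainable_params(4096, 4096, r=8)
print(f"LoRA trains {lora / full:.2%} of the weights")  # well under 1%
```

The two effects compound: quantization shrinks the frozen base weights, while LoRA shrinks the trainable (and optimizer-state) footprint, which together is what makes finetuning feasible on a single consumer GPU.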
Erik Bamberg
Java expert, vector database & machine learning enthusiast, experienced public speaker
Glasgow, United Kingdom