Session
Everything You Need to Know About Fine-Tuning LLMs
In this session, we will explore the process of fine-tuning large language models (LLMs) to transform a general-purpose base model into a specialized tool tailored for specific applications.
We will then survey the diverse ecosystem of fine-tuning libraries, such as Hugging Face's TRL, Unsloth, Axolotl, and LLaMA Factory.
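To give a feel for what these libraries look like in practice, here is a minimal supervised fine-tuning sketch using TRL's SFTTrainer. The model name and dataset are placeholders chosen for illustration, and argument names can vary slightly between TRL releases.

```python
# Minimal supervised fine-tuning sketch with Hugging Face TRL.
# The model and dataset names are illustrative placeholders.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Load a small chat-style dataset (assumption: any conversational
# dataset in a format TRL understands would work here).
dataset = load_dataset("trl-lib/Capybara", split="train")

config = SFTConfig(output_dir="./sft-demo")

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-0.5B",  # placeholder base model
    args=config,
    train_dataset=dataset,
)
trainer.train()
```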
We will compare fine-tuning techniques ranging from full fine-tuning to parameter-efficient approaches such as Low-Rank Adaptation (LoRA) and quantized fine-tuning.
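To make that contrast concrete, the sketch below shows one common way parameter-efficient fine-tuning is set up with the PEFT library: the base model is loaded in 4-bit precision (the idea behind quantized fine-tuning in the QLoRA style), and small trainable LoRA adapter matrices are injected into the attention projections. The model name, target modules, and rank are assumptions for illustration, not values from this session.

```python
# Sketch: LoRA adapters on top of a 4-bit quantized base model
# (QLoRA-style). Model name and target modules are assumptions.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Load the base model in 4-bit precision to cut memory use
# (the "quantized" part of quantized fine-tuning).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-0.5B",  # placeholder base model
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# Inject low-rank adapters; only these small matrices are trained,
# while the quantized base weights stay frozen.
lora_config = LoraConfig(
    r=16,                # rank of the low-rank update matrices
    lora_alpha=32,       # scaling factor applied to the adapter output
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # assumed attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all weights
```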
We will also discuss key hyperparameters, such as learning rate, number of epochs, sequence length, and batch size, to understand their impact on training stability and efficiency.
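For concreteness, these knobs map directly onto the training configuration of most fine-tuning libraries. The sketch below expresses them as a TRL SFTConfig; the values are common starting points chosen for illustration, not recommendations from this session.

```python
# Sketch: the key hyperparameters expressed as a TRL SFTConfig.
# Values are illustrative starting points, not prescriptions; the
# sequence-length argument is named max_seq_length in many TRL
# versions but may differ in newer releases.
from trl import SFTConfig

config = SFTConfig(
    output_dir="./sft-demo",
    learning_rate=2e-4,              # too high destabilizes training; too low stalls it
    num_train_epochs=3,              # more passes risk overfitting on small datasets
    max_seq_length=1024,             # longer sequences raise memory and compute cost
    per_device_train_batch_size=4,   # the effective batch size also depends on...
    gradient_accumulation_steps=4,   # ...accumulation: here 4 x 4 = 16 per device
)
```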