Session
Fine-Tuning Large Language Models with Declarative ML Orchestration
Large Language Models used in tools like ChatGPT are everywhere; however, only a few organisations with massive computing resources are capable of training such large models. While eager to fine-tune these models for specific applications, the broader ML community often grapples with significant infrastructure challenges.
In this session, the audience will learn how open-source ML tooling like Flyte (a Linux Foundation open-source orchestration platform) can provide a declarative specification of the infrastructure required for a wide array of ML workloads, including the fine-tuning of LLMs, even with limited resources. Attendees will see how to leverage Flyte's capabilities to streamline their ML workflows, overcome infrastructure constraints, reduce cost, and unlock the full potential of LLMs for their specific use cases, making it easier for a broader audience to fine-tune and train LLMs.
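As a rough sketch of what such a declarative specification might look like with Flyte's Python SDK (the task body, resource figures, model name, and bucket paths below are illustrative assumptions, not part of the session material):

```python
from flytekit import Resources, task, workflow

# Declarative infrastructure request: Flyte provisions the CPU, memory,
# and GPU for this task; the fine-tuning logic itself is a placeholder.
@task(requests=Resources(cpu="4", mem="32Gi", gpu="1"))
def fine_tune(base_model: str, dataset_uri: str) -> str:
    # ... load the base model, run fine-tuning on the dataset,
    # and return a URI to the saved checkpoint (placeholder) ...
    return f"s3://my-bucket/checkpoints/{base_model}-finetuned"

# The workflow wires tasks together; Flyte handles scheduling and execution.
@workflow
def fine_tuning_pipeline(base_model: str = "gpt2",
                         dataset_uri: str = "s3://my-bucket/data") -> str:
    return fine_tune(base_model=base_model, dataset_uri=dataset_uri)
```

The point of the declarative style is that compute requirements live alongside the workflow code, so the same pipeline can run locally for testing or on a cluster without rewriting infrastructure glue.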