Session
Navigating the lifecycle of LLMs: A focus on Efficiency, Scalability, Observability and AI Ops
In the rapidly evolving field of AI, particularly Gen-AI, establishing best practices for Large Language Models (LLMs) that apply across varied use cases is crucial. As the number of tools and frameworks targeting different parts of the lifecycle grows, integrating them cohesively becomes essential for any production-ready setup. This session explores cloud-native tools and frameworks that enable a smooth transition from proof-of-concept to production. We’ll showcase how to efficiently fine-tune LLMs with accelerators and deploy optimized models using scalable, observable tooling. We’ll also walk through the lifecycle of an LLM in a Kubernetes-powered multi-node environment, from data preprocessing to model deployment and AI observability. The session aims to provide a comprehensive view of the current landscape, emphasizing the importance of a seamless LLM lifecycle and how it can be implemented in your ML infrastructure.

Prem Pradeep Motgi
Senior Systems Development Engineer at Dell Technologies
Austin, Texas, United States