LLMOps-driven fine-tuning, evaluation, and inference with NVIDIA NIM & NeMo Microservices

As the adoption of LLMs continues to grow, the complexity of fine-tuning and deploying these models has become a significant bottleneck. Manual processes and fragmented workflows can lead to errors, inconsistencies, and delays that hinder innovation and progress. This hands-on workshop introduces LLMOps, an approach that automates the entire LLM fine-tuning, evaluation, and inference lifecycle using a GitOps-based methodology.

Participants will learn how to build an end-to-end automated pipeline leveraging NVIDIA NIM and NeMo Microservices for fine-tuning, evaluation, and deployment of LLMs. Through practical demonstrations, we will explore how to ensure seamless integration, validation, and deployment of updates, leading to faster development cycles, improved accuracy, and increased reliability.

Key Topics:
- Kubernetes-based LLM Pipelines
- Argo CD for Continuous Delivery
- Argo Workflows for LLM Workflow Automation
- Cloud-Agnostic Deployment

Anshul Jindal

Sr. Solution Architect
