Accelerating Training with Multi-GPU: Using PyTorch Lightning for PaLI-Gemma VLM
Dive into high-performance model training with multi-GPU setups using PyTorch Lightning, featuring the PaLI-Gemma Vision-Language Model (VLM). This session covers the essential tools and techniques for building efficient distributed training pipelines, letting you fully utilize multiple GPUs to speed up and scale model training. We'll explore PyTorch Lightning's seamless handling of multi-GPU workloads, which makes distributed training accessible even to newcomers. Attendees will get a step-by-step guide to configuring PaLI-Gemma for accelerated training, from data handling to model synchronization and optimization strategies. By the end, you'll have the skills to harness multi-GPU training and scale multimodal AI models effectively.
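To give a flavor of the approach the session describes, here is a minimal sketch of a multi-GPU training setup in PyTorch Lightning. The model, dataset, and hyperparameters are illustrative placeholders (not the actual PaLI-Gemma fine-tuning code); the point is that switching from single-GPU to distributed data-parallel training is largely a matter of the `Trainer` configuration:

```python
# Hedged sketch, assuming pytorch_lightning and a machine with 4 GPUs.
# A toy stand-in model is used in place of the PaLI-Gemma backbone.
import torch
import torch.nn as nn
import pytorch_lightning as pl
from torch.utils.data import DataLoader, TensorDataset

class LitModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(32, 2)  # placeholder for the VLM backbone

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = nn.functional.cross_entropy(self.net(x), y)
        # sync_dist=True aggregates the logged metric across all GPU ranks
        self.log("train_loss", loss, sync_dist=True)
        return loss

    def configure_optimizers(self):
        return torch.optim.AdamW(self.parameters(), lr=1e-4)

# Toy dataset; in practice this would be the multimodal training data
train_loader = DataLoader(
    TensorDataset(torch.randn(256, 32), torch.randint(0, 2, (256,))),
    batch_size=32,
)

# DistributedDataParallel across 4 GPUs: Lightning launches one process
# per device, shards the sampler across ranks, and synchronizes gradients.
trainer = pl.Trainer(accelerator="gpu", devices=4, strategy="ddp", max_epochs=1)
trainer.fit(LitModel(), train_loader)
```

Note that with `strategy="ddp"` each GPU sees `batch_size` samples per step, so the effective global batch size is `batch_size * devices`; learning rate and other hyperparameters may need to be adjusted accordingly.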
Shubham Agnihotri
Senior Manager - Generative AI - IDFC Bank
Mumbai, India