Session

Approaching Distributed Training of ML Models

In today's era of large-scale machine learning models, training on a single machine is often impractical due to resource and time constraints. Distributed training addresses this by spreading the work across multiple computing resources, accelerating training and making larger datasets tractable. This talk explores the main approaches to distributed training, including data and model parallelism and synchronous and asynchronous update strategies, and shows how to apply them with frameworks such as TensorFlow and PyTorch.
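As a concrete illustration of the data-parallel, synchronous approach mentioned above, here is a minimal sketch using PyTorch's DistributedDataParallel. The toy model, random tensors, port number, and hyperparameters are placeholder assumptions for illustration, not material from the talk itself.

```python
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

def train(rank, world_size):
    # Each process joins the group; the "gloo" backend works on CPU-only machines.
    os.environ["MASTER_ADDR"] = "127.0.0.1"
    os.environ["MASTER_PORT"] = "29500"  # arbitrary free port, chosen for this sketch
    dist.init_process_group("gloo", rank=rank, world_size=world_size)

    # Toy model; DDP averages gradients across processes during backward().
    model = DDP(nn.Linear(10, 1))
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.MSELoss()

    for step in range(5):
        # In practice a DistributedSampler would shard a real dataset;
        # random tensors stand in for a per-rank data shard here.
        inputs = torch.randn(32, 10)
        targets = torch.randn(32, 1)

        optimizer.zero_grad()
        loss = loss_fn(model(inputs), targets)
        loss.backward()  # gradients are all-reduced across ranks here
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    world_size = 2  # two worker processes on one machine
    mp.spawn(train, args=(world_size,), nprocs=world_size)
```

DistributedDataParallel implements the synchronous strategy: every rank averages gradients for the same step before the optimizer runs, so all replicas stay in lockstep. Asynchronous schemes (parameter servers, for example) trade that consistency for looser coupling between workers.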

Mahak Shah

Splunk, Software Engineer P3

Seattle, Washington, United States

