Accelerate Model Training with PyTorch 2.X. Build more accurate models by boosting the model training process - Helion
Original title: Accelerate Model Training with PyTorch 2.X. Build more accurate models by boosting the model training process
ISBN: 9781805121916
Pages: 230, Format: ebook
Publication date: 2024-04-30
Bookstore: Helion
Price: 116,10 zł (previously: 129,00 zł)
You save: 10% (-12,90 zł)
This book, written by an HPC expert with over 25 years of experience, guides you through enhancing model training performance with PyTorch. You'll learn how model complexity affects training time, discover the levels at which performance can be tuned, and use PyTorch features, specialized libraries, and efficient data pipelines to optimize training on CPUs and accelerators. You'll also reduce model complexity, adopt mixed precision, and harness the power of multicore systems and multi-GPU environments for distributed training. By the end, you'll be equipped with techniques and strategies to speed up training so you can focus on building stunning models.
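Two of the techniques the description mentions, compiling the model (Chapter 3) and mixed precision (Chapter 7), can be sketched in a few lines of PyTorch 2.x. This is a minimal illustration, not from the book itself; the toy model, synthetic data, and the portable `backend="eager"` choice are assumptions for the sketch:

```python
import torch
from torch import nn

# Hypothetical toy model and synthetic data, just to show the API shape.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

# torch.compile (PyTorch 2.x) wraps the model for JIT compilation.
# The "eager" backend is used here only so the sketch runs without a
# C++ toolchain; in practice the default "inductor" backend does the work.
compiled = torch.compile(model, backend="eager")

x = torch.randn(8, 16)
y = torch.randn(8, 1)

# Mixed precision: autocast runs eligible ops in a lower-precision dtype
# (bfloat16 on CPU, float16/bfloat16 on CUDA), cutting memory and compute.
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    loss = loss_fn(compiled(x), y)

opt.zero_grad()
loss.backward()
opt.step()
```

On CUDA devices, float16 autocast is typically paired with `torch.amp.GradScaler` to avoid gradient underflow; bfloat16 generally does not need it.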
Customers who bought "Accelerate Model Training with PyTorch 2.X. Build more accurate models by boosting the model training process" also chose:
- Windows Media Center. Domowe centrum rozrywki 66,67 zł, (8,00 zł -88%)
- Ruby on Rails. Ćwiczenia 18,75 zł, (3,00 zł -84%)
- Przywództwo w świecie VUCA. Jak być skutecznym liderem w niepewnym środowisku 58,64 zł, (12,90 zł -78%)
- Scrum. O zwinnym zarządzaniu projektami. Wydanie II rozszerzone 58,64 zł, (12,90 zł -78%)
- Od hierarchii do turkusu, czyli jak zarządzać w XXI wieku 58,64 zł, (12,90 zł -78%)
Table of contents
- 1. Deconstructing the Training Process
- 2. Training Models Faster
- 3. Compiling the Model
- 4. Using Specialized Libraries
- 5. Building an Efficient Data Pipeline
- 6. Simplifying the Model
- 7. Adopting Mixed Precision
- 8. Distributed Training at a Glance
- 9. Training with Multiple CPUs
- 10. Training with Multiple GPUs
- 11. Training with Multiple Machines