AI and ML for Coders in PyTorch

ISBN: 9781098199142
Pages: 444, Format: ebook
Publication date: 2025-06-30
Bookstore: Helion
Price: 220,15 zł (previously: 255,99 zł)
You save: 14% (35,84 zł)
Eager to learn AI and machine learning but unsure where to start? Laurence Moroney's hands-on, code-first guide demystifies complex AI concepts without relying on advanced mathematics. Designed for programmers, it focuses on practical applications using PyTorch, helping you build real-world models without feeling overwhelmed.
From computer vision and natural language processing (NLP) to generative AI with Hugging Face Transformers, this book equips you with the skills most in demand for AI development today. You'll also learn how to confidently deploy your models to the web and cloud.
- Gain the confidence to apply AI without needing advanced math or theory expertise
- Discover how to build AI models for computer vision, NLP, and sequence modeling with PyTorch
- Learn generative AI techniques with Hugging Face Diffusers and Transformers
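To give a sense of the code-first approach the blurb describes, here is a minimal illustrative sketch, not taken from the book: a small PyTorch classifier for 28x28 grayscale images of the kind the early chapters build around Fashion MNIST, assuming only that torch is installed.

```python
# A minimal, hypothetical sketch (not from the book) of a code-first
# PyTorch workflow: define a small classifier for 28x28 grayscale
# images, then run one training step.
import torch
from torch import nn

class SimpleClassifier(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),                # 1x28x28 image -> 784-dim vector
            nn.Linear(28 * 28, 128),
            nn.ReLU(),
            nn.Linear(128, num_classes)  # raw logits, one per class
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = SimpleClassifier()
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# A random batch stands in for a real DataLoader over Fashion MNIST.
images = torch.randn(32, 1, 28, 28)
labels = torch.randint(0, 10, (32,))

optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
```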
Customers who bought "AI and ML for Coders in PyTorch" also chose:
- Jak zhakowa: 10,00 zł (previously 125,00 zł, -92%)
- Biologika Sukcesji Pokoleniowej. Sezon 3. Konflikty na terytorium: 12,90 zł (previously 117,27 zł, -89%)
- Windows Media Center. Domowe centrum rozrywki: 8,00 zł (previously 66,67 zł, -88%)
- Podręcznik startupu. Budowa wielkiej firmy krok po kroku: 12,90 zł (previously 92,14 zł, -86%)
- Ruby on Rails. Ćwiczenia: 3,00 zł (previously 18,75 zł, -84%)
Table of Contents
- Foreword
- Preface
- Who Should Read This Book
- Why I Wrote This Book
- Navigating This Book
- Technology You Need to Understand
- Online Resources
- Conventions Used in This Book
- Using Code Examples
- O'Reilly Online Learning
- How to Contact Us
- Acknowledgments
- 1. Introduction to PyTorch
- What Is Machine Learning?
- Limitations of Traditional Programming
- From Programming to Learning
- What Is PyTorch?
- Using PyTorch
- Installing PyTorch in Python
- Using PyTorch in PyCharm
- Using PyTorch in Google Colab
- Getting Started with Machine Learning
- Seeing What the Network Learned
- Summary
- 2. Introduction to Computer Vision
- How Computer Vision Works
- The Fashion MNIST Database
- Neurons for Vision
- Designing the Neural Network
- The Complete Code
- Training the Neural Network
- Exploring the Model Output
- Overfitting
- Early Stopping
- Summary
- 3. Going Beyond the Basics: Detecting Features in Images
- Convolutions
- Pooling
- Implementing Convolutional Neural Networks
- Exploring the Convolutional Network
- Building a CNN to Distinguish Between Horses and Humans
- The Horses or Humans Dataset
- Handling the Data
- CNN Architecture for Horses or Humans
- Adding Validation to the Horses or Humans Dataset
- Testing Horses or Humans Images
- Image Augmentation
- Transfer Learning
- Multiclass Classification
- Dropout Regularization
- Summary
- 4. Using Data with PyTorch
- Getting Started with Datasets
- Exploring the FashionMNIST Class
- Generic Dataset Classes
- ImageFolder
- DatasetFolder
- FakeData
- Using Custom Splits
- The ETL Process for Managing Data in Machine Learning
- Optimizing the Load Phase
- Using the DataLoader Class
- Batching
- Shuffling
- Parallel Data Loading
- Custom Data Sampling
- Parallelizing ETL to Improve Training Performance
- Summary
- 5. Introduction to Natural Language Processing
- Encoding Language into Numbers
- Getting Started with Tokenization
- Using a custom tokenizer
- Using a pretrained tokenizer from Hugging Face
- Turning Sentences into Sequences
- Using out-of-vocabulary tokens
- Understanding padding
- Removing Stopwords and Cleaning Text
- Stripping Out HTML Tags
- Stripping Out Stopwords
- Stripping Out Punctuation
- Working with Real Data Sources
- Getting Text Datasets
- Getting Text from CSV Files
- Creating training and test subsets
- Getting Text from JSON Files
- Reading JSON files
- Summary
- 6. Making Sentiment Programmable by Using Embeddings
- Establishing Meaning from Words
- A Simple Example: Positives and Negatives
- Going a Little Deeper: Vectors
- Embeddings in PyTorch
- Building a Sarcasm Detector by Using Embeddings
- Reducing Overfitting in Language Models
- Adjusting the learning rate
- Exploring vocabulary size
- Exploring embedding dimensions
- Exploring the model architecture
- Using dropout
- Using regularization
- Other optimization considerations
- Putting It All Together
- Using the Model to Classify a Sentence
- Visualizing the Embeddings
- Using Pretrained Embeddings
- Summary
- 7. Recurrent Neural Networks for Natural Language Processing
- The Basis of Recurrence
- Extending Recurrence for Language
- Creating a Text Classifier with RNNs
- Stacking LSTMs
- Optimizing stacked LSTMs
- Using dropout
- Using Pretrained Embeddings with RNNs
- Summary
- 8. Using ML to Create Text
- Turning Sequences into Input Sequences
- Creating the Model
- Generating Text
- Predicting the Next Word
- Compounding Predictions to Generate Text
- Extending the Dataset
- Improving the Model Architecture
- Embedding Dimensions
- Initializing the LSTMs
- Embedding layers
- LSTM layers
- Final linear layer
- Variable Learning Rate
- Improving the Data
- Character-Based Encoding
- Summary
- 9. Understanding Sequence and Time Series Data
- Common Attributes of Time Series
- Trend
- Seasonality
- Autocorrelation
- Noise
- Techniques for Predicting Time Series
- Naive Prediction to Create a Baseline
- Measuring Prediction Accuracy
- Less Naive Predictions: Using a Moving Average for Prediction
- Improving the Moving-Average Analysis
- Summary
- 10. Creating ML Models to Predict Sequences
- Creating a Windowed Dataset
- Creating a Windowed Version of the Time Series Dataset
- Creating and Training a DNN to Fit the Sequence Data
- Evaluating the Results of the DNN
- Tuning the Learning Rate
- Summary
- 11. Using Convolutional and Recurrent Methods for Sequence Models
- Convolutions for Sequence Data
- Coding Convolutions
- Experimenting with the Conv1D Hyperparameters
- Using NASA Weather Data
- Reading GISS Data in Python
- Using RNNs for Sequence Modeling
- Exploring a Larger Dataset
- Using Other Recurrent Methods
- Using Dropout
- Using Bidirectional RNNs
- Summary
- 12. Concepts of Inference
- Tensors
- Image Data
- Text Data
- Tensors Out of a Model
- Summary
- 13. Hosting PyTorch Models for Serving
- Introducing TorchServe
- Setting Up TorchServe
- Preparing Your Environment
- Setting Up Your config.properties File
- Defining Your Model
- Creating the Handler File
- Creating the Model Archive
- Starting the Server
- Testing Inference
- Going Further
- Serving with Flask
- Creating an Environment for Flask
- Creating a Flask Server in Python
- Summary
- 14. Using Third-Party Models and Hubs
- The Hugging Face Hub
- Using Hugging Face Hub
- Getting a Hugging Face token
- Getting permission to use models
- Configuring Colab for a Hugging Face token
- Using the Hugging Face token in code
- Using a Model from Hugging Face Hub
- PyTorch Hub
- Using PyTorch Vision Models
- Natural Language Processing
- Other Models
- Summary
- 15. Transformers and transformers
- Understanding Transformers
- Encoder Architectures
- The self-attention layer
- The feedforward network layer
- Layer normalization
- Repeated encoder layers
- The Decoder Architecture
- Understanding token and positional encoding
- Understanding multihead masked attention
- Adding and normalizing
- The feedforward layer
- The linear and Softmax layers
- The Encoder-Decoder Architecture
- The transformers API
- Getting Started with transformers
- Core Concepts
- Pipelines
- Tokenizers
- The WordPiece tokenizer
- Byte-pair encoding
- SentencePiece
- Summary
- 16. Using LLMs with Custom Data
- Fine-Tuning an LLM
- Setup and Dependencies
- Loading and Examining the Data
- Initializing the Model and Tokenizer
- Preprocessing the Data
- Collating the Data
- Defining Metrics
- Configuring Training
- Initializing the Trainer
- Training and Evaluation
- Saving and Testing the Model
- Prompt-Tuning an LLM
- Preparing the Data
- Creating the Data Loaders
- Defining the Model
- Training the Model
- Managing data batches
- Handling the loss
- Optimizing for loss
- Evaluation During Training
- Reporting Training Metrics
- Saving the Prompt Embeddings
- Performing Inference with the Model
- The predict function
- Usage example
- Summary
- 17. Serving LLMs with Ollama
- Getting Started with Ollama
- Running Ollama as a Server
- Building an App that Uses an Ollama LLM
- The Scenario
- Building a Python Proof-of-Concept
- Creating a Web App for Ollama
- The app.js File
- The index.html File
- Summary
- 18. Introduction to RAG
- What Is RAG?
- Getting Started with RAG
- Understanding Similarity
- Creating the Database
- Performing a Similarity Search
- Putting It All Together
- Using RAG Content with an LLM
- Extending to Hosted Models
- Summary
- 19. Using Generative Models with Hugging Face Diffusers
- What Are Diffusion Models?
- Using Hugging Face Diffusers
- Image-to-Image with Diffusers
- Inpainting with Diffusers
- Summary
- 20. Tuning Generative Image Models with LoRA and Diffusers
- Training a LoRA with Diffusers
- Getting Diffusers
- Getting Data for Fine-Tuning a LoRA
- Fine-Tuning a Model with Diffusers
- Publishing Your Model
- Generating an Image with the Custom LoRA
- Summary
- Index