Deep Learning from Scratch. Building with Python from First Principles
ISBN: 978-1-492-04136-8
Pages: 252, Format: ebook
Publication date: 2019-09-09
Bookstore: Helion
Price: 194.65 zł (previously: 226.34 zł)
You save: 14% (-31.69 zł)
With the resurgence of neural networks in the 2010s, deep learning has become essential for machine learning practitioners and even many software engineers. This book provides a comprehensive introduction for data scientists and software engineers with machine learning experience. You’ll start with deep learning basics and move quickly to the details of important advanced architectures, implementing everything from scratch along the way.
Author Seth Weidman shows you how neural networks work using a first principles approach. You’ll learn how to apply multilayer neural networks, convolutional neural networks, and recurrent neural networks from the ground up. With a thorough understanding of how neural networks work mathematically, computationally, and conceptually, you’ll be set up for success on all future deep learning projects.
This book provides:
- Extremely clear and thorough mental models—accompanied by working code examples and mathematical explanations—for understanding neural networks
- Methods for implementing multilayer neural networks from scratch, using an easy-to-understand object-oriented framework (see the illustrative sketch after this list)
- Working implementations and clear-cut explanations of convolutional and recurrent neural networks
- Implementation of these neural network concepts using the popular PyTorch framework
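As a taste of what such an object-oriented, from-scratch framework can look like, here is a minimal NumPy sketch. It is illustrative only, not the book's actual code: the class names `Operation` and `Sigmoid` and their method names are assumptions for this example. Each building block computes its output in a forward pass and routes gradients back to its input in a backward pass.

```python
import numpy as np


class Operation:
    """Base building block: caches its input on the forward pass
    so the backward pass can compute the input gradient."""

    def forward(self, input_: np.ndarray) -> np.ndarray:
        self.input_ = input_
        self.output = self._output()
        return self.output

    def backward(self, output_grad: np.ndarray) -> np.ndarray:
        # Chain rule: combine the upstream gradient with this
        # operation's local derivative.
        return self._input_grad(output_grad)

    def _output(self) -> np.ndarray:
        raise NotImplementedError

    def _input_grad(self, output_grad: np.ndarray) -> np.ndarray:
        raise NotImplementedError


class Sigmoid(Operation):
    """Elementwise sigmoid; its derivative is sigmoid(x) * (1 - sigmoid(x))."""

    def _output(self) -> np.ndarray:
        return 1.0 / (1.0 + np.exp(-self.input_))

    def _input_grad(self, output_grad: np.ndarray) -> np.ndarray:
        return self.output * (1.0 - self.output) * output_grad


# Usage: one forward/backward round trip through a single operation.
op = Sigmoid()
x = np.array([[-1.0, 0.0, 2.0]])
y = op.forward(x)                   # forward pass
dx = op.backward(np.ones_like(y))   # backward pass, upstream gradient of 1
```

Layers and whole networks then compose by chaining such operations' forward calls and running their backward calls in reverse order, which is the pattern the book builds up chapter by chapter.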
Customers who bought "Deep Learning from Scratch. Building with Python from First Principles" also chose:
- GraphQL. Kurs video. Buduj nowoczesne API w Pythonie: 169.00 zł (50.70 zł, -70%)
- Receptura na Python. Kurs video. 54 praktyczne porady dla programistów: 199.00 zł (59.70 zł, -70%)
- Podstawy Pythona z Minecraftem. Kurs video. Piszemy pierwsze skrypty: 149.00 zł (44.70 zł, -70%)
- Twórz gry w Pythonie. Kurs video. Poznaj bibliotekę PyGame: 249.00 zł (74.70 zł, -70%)
- Data Science w Pythonie. Kurs video. Algorytmy uczenia maszynowego: 199.00 zł (59.70 zł, -70%)
Table of Contents
- Preface
- Understanding neural networks requires multiple mental models
- Chapter outlines
- Conventions Used in This Book
- Using Code Examples
- O'Reilly Online Learning
- How to Contact Us
- Acknowledgments
- 1. Foundations
- Functions
- Math
- Diagram
- Code
- Code caveat #1: NumPy
- Code caveat #2: Type-checked functions
- Basic functions in NumPy
- Derivatives
- Math
- Diagram(s)
- Code
- Nested Functions
- Diagram
- Math
- Code
- Another diagram
- The Chain Rule
- Math
- Diagram
- Code
- Math
- A slightly longer example
- Math
- Diagram
- Code
- Functions With Multiple Inputs
- Math
- Diagram
- Code
- Derivatives of Functions with Multiple Inputs
- Diagram
- Math
- Code
- Functions with multiple vector inputs
- Math
- Creating new features from existing features
- Math
- Diagram
- Code
- Derivatives of functions with multiple vector inputs
- Diagram
- Math
- Code
- Vector functions and their derivatives - one step further
- Diagram
- Math
- Code
- Vector functions and their derivatives: the backward pass
- Math
- Diagram
- Code
- Is this right?
- Computational Graph with Two 2D Matrix Inputs
- Math
- Diagram
- Code
- The fun part: the backward pass
- Diagram
- Math
- The ?
- The Answer
- Code
- Describing these gradients visually
- Conclusion
- 2. Fundamentals
- Supervised Learning Overview
- Supervised Learning models
- Linear regression
- Linear regression: a diagram
- Training this model
- Linear regression: a more helpful diagram (and the math)
- Adding in the intercept
- Linear regression: the code
- Training the model
- Calculating the gradients: a diagram
- Calculating the gradients: the math (and some code)
- Calculating the gradients: the (full) code
- Using these gradients to train the model
- Assessing our model: training set vs. testing set
- Assessing our model: the code
- Analyzing the most important feature
- Neural networks from scratch
- Step 1: a bunch of linear regressions
- Step 2: a non-linear function
- Step 3: another linear regression
- Diagram(s)
- Another diagram?
- Code
- Neural networks: the backward pass
- Diagram
- Math (and code)
- The overall loss gradient
- Training and assessing our first neural network
- Two reasons why this is happening
- Conclusion
- 3. Deep Learning From Scratch
- Deep Learning definition: a first pass
- The building blocks of neural networks: Operations
- Diagram
- Code
- The building blocks of neural networks: Layers
- Diagrams
- Connection to the brain
- Diagrams
- Building blocks on building blocks
- The Layer blueprint
- The Dense Layer
- The NeuralNetwork class, and maybe others
- Diagram
- Code
- Loss class
- Deep Learning From Scratch
- Implementing batch training
- NeuralNetwork: code
- Trainer and Optimizer
- Optimizer
- Description and code
- Trainer
- Trainer code
- Putting everything together
- Our first Deep Learning model (from scratch)
- Conclusion and next steps
- 4. Extensions
- Some Intuition about Neural Networks
- The softmax cross entropy loss function
- Component #1: the softmax function
- Math
- Intuition
- Component #2: the cross entropy loss
- Math
- Intuition
- Code
- A note on activation functions
- The other extreme: the Rectified Linear Unit
- A happy medium: Tanh
- Experiments
- Data preprocessing
- Model
- Experiment: softmax cross entropy loss
- Momentum
- Intuition for momentum
- Implementing momentum in the Optimizer class
- Math
- Code
- Experiment: stochastic gradient descent with momentum
- Learning rate decay
- Types of learning rate decay
- Experiments: learning rate decay
- Weight initialization
- Math and code
- Experiments: weight initialization
- Dropout
- Dropout: definition
- Dropout: implementation
- Adjusting the rest of our framework to accommodate Dropout
- Experiments: dropout
- Conclusion
- 5. Convolutional Neural Networks
- Neural networks and representation learning
- A different architecture for image data
- The convolution operation
- The multi-channel convolution operation
- Convolutional Layers
- Implementation implications
- The differences between convolutional and fully connected layers
- Making predictions with convolutional layers: the Flatten layer
- Pooling layers
- Applying CNNs beyond images
- Implementing the multi-channel convolution operation
- The forward pass
- Diagrams and math
- Padding
- Code
- A note on stride
- Convolutions: the backward pass
- What should the gradient be?
- Computing the gradient of a 1D convolution
- What's the general pattern?
- Computing the parameter gradient
- Coding this up
- Batches, 2D Convolutions, and Multiple Channels
- 1D convolutions with batches: forward pass
- 1D convolution with batches: backward pass
- 2D convolutions
- 2D convolutions: coding the forward pass
- 2D convolutions: coding the backward pass
- The last element: adding channels
- Forward pass
- Backward pass
- Using this Operation to Train a CNN
- The Flatten Operation
- The full Conv2D Layer
- A note on speed, and an alternative implementation
- Experiments
- Conclusion
- 6. Recurrent Neural Networks
- The Key Limitation: handling branching
- Automatic differentiation
- Coding up gradient accumulation
- Automatic differentiation illustration
- Explaining what happened
- Motivation for recurrent neural networks
- Introduction to Recurrent Neural Networks
- The first class for RNNs: RNNLayer
- The second class for RNNs: RNNNode
- Putting these two classes together
- The backward pass
- Accumulating gradients for the weights in an RNN
- RNNs: the code
- The RNNLayer class
- Initialization
- The forward method
- The backward method
- The essential elements of RNNNodes
- Vanilla RNNNodes
- RNNNode: the code
- RNNNodes: the backward pass
- Limitations of vanilla RNNNodes
- One solution: GRUNodes
- GRUNodes: a diagram
- GRUNodes: the code
- LSTMNodes
- LSTMNode: diagram
- LSTMs: the code
- Data representation for a character-level RNN-based language model
- Other language modeling tasks
- Combining RNNLayer variants
- Putting this all together
- Conclusion
- 7. PyTorch
- PyTorch Tensors
- Deep Learning with PyTorch
- PyTorch elements: Model, Layer, Optimizer, and Loss
- The inference flag
- Implementing neural network building blocks using PyTorch: DenseLayer
- Example: Boston Housing Prices Model in PyTorch
- PyTorch elements: Optimizer and Loss
- PyTorch elements: Trainer
- Tricks to optimize learning in PyTorch
- Convolutional neural networks in PyTorch
- DataLoader and transforms
- LSTMs in PyTorch
- Postscript: Unsupervised Learning via Autoencoders
- Representation Learning
- An approach for situations with no labels whatsoever
- Diagram
- Implementing an autoencoder in PyTorch
- A stronger test for unsupervised learning, and a solution
- Conclusion
- A. Appendix
- Matrix chain rule
- Gradient of the loss with respect to the bias terms
- Convolutions via matrix multiplication