
Deep Learning for Coders with fastai and PyTorch
ebook
Authors: Jeremy Howard, Sylvain Gugger
ISBN: 9781492045472
Pages: 624, Format: ebook
Publication date: 2020-06-29
Bookstore: Helion

Price: 211.65 zł (previously: 246.10 zł)
You save: 14% (-34.45 zł)


Deep learning is often viewed as the exclusive domain of math PhDs and big tech companies. But as this hands-on guide demonstrates, programmers comfortable with Python can achieve impressive results in deep learning with little math background, small amounts of data, and minimal code. How? With fastai, the first library to provide a consistent interface to the most frequently used deep learning applications.

Authors Jeremy Howard and Sylvain Gugger, the creators of fastai, show you how to train a model on a wide range of tasks using fastai and PyTorch. You’ll also dive progressively further into deep learning theory to gain a complete understanding of the algorithms behind the scenes.
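
As a taste of how little code this takes, here is a minimal sketch in the spirit of fastai's quick-start examples: fine-tuning a pretrained network to tell cats from dogs on the Oxford-IIIT Pet dataset. It assumes fastai v2 is installed; the is_cat labeling rule relies on a filename convention of that particular dataset.

    from fastai.vision.all import *

    # Download and extract the Oxford-IIIT Pet dataset.
    path = untar_data(URLs.PETS)/'images'

    # In this dataset, cat breeds have capitalized filenames.
    def is_cat(x): return x[0].isupper()

    # Build DataLoaders: 20% validation split, images resized to 224 px.
    dls = ImageDataLoaders.from_name_func(
        path, get_image_files(path), valid_pct=0.2, seed=42,
        label_func=is_cat, item_tfms=Resize(224))

    # Fine-tune a ResNet-34 pretrained on ImageNet for one epoch.
    learn = cnn_learner(dls, resnet34, metrics=error_rate)
    learn.fine_tune(1)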

  • Train models in computer vision, natural language processing, tabular data, and collaborative filtering (see the sketch after this list)
  • Learn the latest deep learning techniques that matter most in practice
  • Improve accuracy, speed, and reliability by understanding how deep learning models work
  • Discover how to turn your models into web applications
  • Implement deep learning algorithms from scratch
  • Consider the ethical implications of your work
  • Gain insight from the foreword by PyTorch cofounder Soumith Chintala
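
The "consistent interface" claim means the same few-line pattern (build DataLoaders, create a Learner, call fine_tune) carries across applications. For instance, a hedged sketch of a text classifier on IMDb movie reviews, again assuming fastai v2, with illustrative rather than tuned hyperparameters:

    from fastai.text.all import *

    # DataLoaders from the IMDb reviews folder, using the 'test' split for validation.
    dls = TextDataLoaders.from_folder(untar_data(URLs.IMDB), valid='test')

    # An AWD-LSTM sentiment classifier; drop_mult scales all dropout rates at once.
    learn = text_classifier_learner(dls, AWD_LSTM, drop_mult=0.5, metrics=accuracy)
    learn.fine_tune(4, 1e-2)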


Customers who bought "Deep Learning for Coders with fastai and PyTorch" also chose:

  • Windows Media Center. Domowe centrum rozrywki
  • Ruby on Rails. Ćwiczenia
  • DevOps w praktyce. Kurs video. Jenkins, Ansible, Terraform i Docker
  • Przywództwo w świecie VUCA. Jak być skutecznym liderem w niepewnym środowisku
  • Scrum. O zwinnym zarządzaniu projektami. Wydanie II rozszerzone


Table of Contents

  • Preface
    • Who This Book Is For
    • What You Need to Know
    • What You Will Learn
    • O'Reilly Online Learning
    • How to Contact Us
  • Foreword
  • I. Deep Learning in Practice
  • 1. Your Deep Learning Journey
    • Deep Learning Is for Everyone
    • Neural Networks: A Brief History
    • Who We Are
    • How to Learn Deep Learning
      • Your Projects and Your Mindset
    • The Software: PyTorch, fastai, and Jupyter (And Why It Doesn't Matter)
    • Your First Model
      • Getting a GPU Deep Learning Server
      • Running Your First Notebook
      • What Is Machine Learning?
      • What Is a Neural Network?
      • A Bit of Deep Learning Jargon
      • Limitations Inherent to Machine Learning
      • How Our Image Recognizer Works
      • What Our Image Recognizer Learned
      • Image Recognizers Can Tackle Non-Image Tasks
      • Jargon Recap
    • Deep Learning Is Not Just for Image Classification
    • Validation Sets and Test Sets
      • Use Judgment in Defining Test Sets
    • A Choose Your Own Adventure Moment
    • Questionnaire
      • Further Research
  • 2. From Model to Production
    • The Practice of Deep Learning
      • Starting Your Project
      • The State of Deep Learning
        • Computer vision
        • Text (natural language processing)
        • Combining text and images
        • Tabular data
        • Recommendation systems
        • Other data types
      • The Drivetrain Approach
    • Gathering Data
    • From Data to DataLoaders
      • Data Augmentation
    • Training Your Model, and Using It to Clean Your Data
    • Turning Your Model into an Online Application
      • Using the Model for Inference
      • Creating a Notebook App from the Model
      • Turning Your Notebook into a Real App
      • Deploying Your App
    • How to Avoid Disaster
      • Unforeseen Consequences and Feedback Loops
    • Get Writing!
    • Questionnaire
      • Further Research
  • 3. Data Ethics
    • Key Examples for Data Ethics
      • Bugs and Recourse: Buggy Algorithm Used for Healthcare Benefits
      • Feedback Loops: YouTube's Recommendation System
      • Bias: Professor Latanya Sweeney "Arrested"
      • Why Does This Matter?
    • Integrating Machine Learning with Product Design
    • Topics in Data Ethics
      • Recourse and Accountability
      • Feedback Loops
      • Bias
        • Historical bias
        • Measurement bias
        • Aggregation bias
        • Representation bias
        • Addressing different types of bias
      • Disinformation
    • Identifying and Addressing Ethical Issues
      • Analyze a Project You Are Working On
      • Processes to Implement
        • Ethical lenses
      • The Power of Diversity
      • Fairness, Accountability, and Transparency
    • Role of Policy
      • The Effectiveness of Regulation
      • Rights and Policy
      • Cars: A Historical Precedent
    • Conclusion
    • Questionnaire
      • Further Research
    • Deep Learning in Practice: That's a Wrap!
  • II. Understanding fastai's Applications
  • 4. Under the Hood: Training a Digit Classifier
    • Pixels: The Foundations of Computer Vision
    • First Try: Pixel Similarity
      • NumPy Arrays and PyTorch Tensors
    • Computing Metrics Using Broadcasting
    • Stochastic Gradient Descent
      • Calculating Gradients
      • Stepping with a Learning Rate
      • An End-to-End SGD Example
        • Step 1: Initialize the parameters
        • Step 2: Calculate the predictions
        • Step 3: Calculate the loss
        • Step 4: Calculate the gradients
        • Step 5: Step the weights
        • Step 6: Repeat the process
        • Step 7: Stop
      • Summarizing Gradient Descent
    • The MNIST Loss Function
      • Sigmoid
      • SGD and Mini-Batches
    • Putting It All Together
      • Creating an Optimizer
    • Adding a Nonlinearity
      • Going Deeper
    • Jargon Recap
    • Questionnaire
      • Further Research
  • 5. Image Classification
    • From Dogs and Cats to Pet Breeds
    • Presizing
      • Checking and Debugging a DataBlock
    • Cross-Entropy Loss
      • Viewing Activations and Labels
      • Softmax
      • Log Likelihood
      • Taking the log
    • Model Interpretation
    • Improving Our Model
      • The Learning Rate Finder
      • Unfreezing and Transfer Learning
      • Discriminative Learning Rates
      • Selecting the Number of Epochs
      • Deeper Architectures
    • Conclusion
    • Questionnaire
      • Further Research
  • 6. Other Computer Vision Problems
    • Multi-Label Classification
      • The Data
      • Constructing a DataBlock
      • Binary Cross Entropy
    • Regression
      • Assembling the Data
      • Training a Model
    • Conclusion
    • Questionnaire
      • Further Research
  • 7. Training a State-of-the-Art Model
    • Imagenette
    • Normalization
    • Progressive Resizing
    • Test Time Augmentation
    • Mixup
    • Label Smoothing
    • Conclusion
    • Questionnaire
      • Further Research
  • 8. Collaborative Filtering Deep Dive
    • A First Look at the Data
    • Learning the Latent Factors
    • Creating the DataLoaders
    • Collaborative Filtering from Scratch
      • Weight Decay
      • Creating Our Own Embedding Module
    • Interpreting Embeddings and Biases
      • Using fastai.collab
      • Embedding Distance
    • Bootstrapping a Collaborative Filtering Model
    • Deep Learning for Collaborative Filtering
    • Conclusion
    • Questionnaire
      • Further Research
  • 9. Tabular Modeling Deep Dive
    • Categorical Embeddings
    • Beyond Deep Learning
    • The Dataset
      • Kaggle Competitions
      • Look at the Data
    • Decision Trees
      • Handling Dates
      • Using TabularPandas and TabularProc
      • Creating the Decision Tree
      • Categorical Variables
    • Random Forests
      • Creating a Random Forest
      • Out-of-Bag Error
    • Model Interpretation
      • Tree Variance for Prediction Confidence
      • Feature Importance
      • Removing Low-Importance Variables
      • Removing Redundant Features
      • Partial Dependence
      • Data Leakage
      • Tree Interpreter
    • Extrapolation and Neural Networks
      • The Extrapolation Problem
      • Finding Out-of-Domain Data
      • Using a Neural Network
    • Ensembling
      • Boosting
      • Combining Embeddings with Other Methods
    • Conclusion
    • Questionnaire
      • Further Research
  • 10. NLP Deep Dive: RNNs
    • Text Preprocessing
      • Tokenization
      • Word Tokenization with fastai
      • Subword Tokenization
      • Numericalization with fastai
      • Putting Our Texts into Batches for a Language Model
    • Training a Text Classifier
      • Language Model Using DataBlock
      • Fine-Tuning the Language Model
      • Saving and Loading Models
      • Text Generation
      • Creating the Classifier DataLoaders
      • Fine-Tuning the Classifier
    • Disinformation and Language Models
    • Conclusion
    • Questionnaire
      • Further Research
  • 11. Data Munging with fastai's Mid-Level API
    • Going Deeper into fastai's Layered API
      • Transforms
      • Writing Your Own Transform
      • Pipeline
    • TfmdLists and Datasets: Transformed Collections
      • TfmdLists
      • Datasets
    • Applying the Mid-Level Data API: SiamesePair
    • Conclusion
    • Questionnaire
      • Further Research
    • Understanding fastai's Applications: Wrap Up
  • III. Foundations of Deep Learning
  • 12. A Language Model from Scratch
    • The Data
    • Our First Language Model from Scratch
      • Our Language Model in PyTorch
      • Our First Recurrent Neural Network
    • Improving the RNN
      • Maintaining the State of an RNN
      • Creating More Signal
    • Multilayer RNNs
      • The Model
      • Exploding or Disappearing Activations
    • LSTM
      • Building an LSTM from Scratch
      • Training a Language Model Using LSTMs
    • Regularizing an LSTM
      • Dropout
      • Activation Regularization and Temporal Activation Regularization
      • Training a Weight-Tied Regularized LSTM
    • Conclusion
    • Questionnaire
      • Further Research
  • 13. Convolutional Neural Networks
    • The Magic of Convolutions
      • Mapping a Convolutional Kernel
      • Convolutions in PyTorch
      • Strides and Padding
      • Understanding the Convolution Equations
    • Our First Convolutional Neural Network
      • Creating the CNN
      • Understanding Convolution Arithmetic
      • Receptive Fields
      • A Note About Twitter
    • Color Images
    • Improving Training Stability
      • A Simple Baseline
      • Increase Batch Size
      • 1cycle Training
      • Batch Normalization
    • Conclusion
    • Questionnaire
      • Further Research
  • 14. ResNets
    • Going Back to Imagenette
    • Building a Modern CNN: ResNet
      • Skip Connections
      • A State-of-the-Art ResNet
      • Bottleneck Layers
    • Conclusion
    • Questionnaire
      • Further Research
  • 15. Application Architectures Deep Dive
    • Computer Vision
      • cnn_learner
      • unet_learner
      • A Siamese Network
    • Natural Language Processing
    • Tabular
    • Conclusion
    • Questionnaire
      • Further Research
  • 16. The Training Process
    • Establishing a Baseline
    • A Generic Optimizer
    • Momentum
    • RMSProp
    • Adam
    • Decoupled Weight Decay
    • Callbacks
      • Creating a Callback
      • Callback Ordering and Exceptions
    • Conclusion
    • Questionnaire
      • Further Research
    • Foundations of Deep Learning: Wrap Up
  • IV. Deep Learning from Scratch
  • 17. A Neural Net from the Foundations
    • Building a Neural Net Layer from Scratch
      • Modeling a Neuron
      • Matrix Multiplication from Scratch
      • Elementwise Arithmetic
      • Broadcasting
        • Broadcasting with a scalar
        • Broadcasting a vector to a matrix
        • Broadcasting rules
      • Einstein Summation
    • The Forward and Backward Passes
      • Defining and Initializing a Layer
      • Gradients and the Backward Pass
      • Refactoring the Model
      • Going to PyTorch
    • Conclusion
    • Questionnaire
      • Further Research
  • 18. CNN Interpretation with CAM
    • CAM and Hooks
    • Gradient CAM
    • Conclusion
    • Questionnaire
      • Further Research
  • 19. A fastai Learner from Scratch
    • Data
      • Dataset
    • Module and Parameter
      • Simple CNN
    • Loss
    • Learner
      • Callbacks
      • Scheduling the Learning Rate
    • Conclusion
    • Questionnaire
      • Further Research
  • 20. Concluding Thoughts
  • A. Creating a Blog
    • Blogging with GitHub Pages
      • Creating the Repository
      • Setting Up Your Home Page
      • Creating Posts
      • Synchronizing GitHub and Your Computer
    • Jupyter for Blogging
  • B. Data Project Checklist
    • Data Scientists
    • Strategy
    • Data
    • Analytics
    • Implementation
    • Maintenance
    • Constraints
  • Index

