
Fundamentals of Deep Learning. 2nd Edition
ebook
Authors: Nithin Buduma, Nikhil Buduma, Joe Papa
ISBN: 9781492082132
Pages: 390, Format: ebook
Publication date: 2022-05-16
Bookstore: Helion

Price: 211,65 zł (previously: 246,10 zł)
You save: 14% (-34,45 zł)


We're in the midst of an AI research explosion. Deep learning has unlocked superhuman perception to power our push toward creating self-driving vehicles, defeating human experts at a variety of difficult games including Go, and even generating essays with shockingly coherent prose. But deciphering these breakthroughs often takes a PhD in machine learning and mathematics.

The updated second edition of this book describes the intuition behind these innovations without jargon or complexity. Python-proficient programmers, software engineering professionals, and computer science majors will be able to re-implement these breakthroughs on their own and reason about them with a level of sophistication that rivals some of the best developers in the field.

  • Learn the mathematics behind machine learning jargon
  • Examine the foundations of machine learning and neural networks
  • Manage problems that arise as you begin to make networks deeper
  • Build neural networks that analyze complex images
  • Perform effective dimensionality reduction using autoencoders
  • Dive deep into sequence analysis to examine language
  • Explore methods in interpreting complex machine learning models
  • Gain theoretical and practical knowledge on generative modeling
  • Understand the fundamentals of reinforcement learning


Customers who bought "Fundamentals of Deep Learning. 2nd Edition" also chose:

  • Windows Media Center. Domowe centrum rozrywki
  • Ruby on Rails. Ćwiczenia
  • DevOps w praktyce. Kurs video. Jenkins, Ansible, Terraform i Docker
  • Przywództwo w świecie VUCA. Jak być skutecznym liderem w niepewnym środowisku
  • Scrum. O zwinnym zarządzaniu projektami. Wydanie II rozszerzone


Table of contents

Fundamentals of Deep Learning. 2nd Edition eBook -- table of contents

  • Preface
    • Prerequisites and Objectives
    • How Is This Book Organized?
    • Conventions Used in This Book
    • Using Code Examples
    • O'Reilly Online Learning
    • How to Contact Us
    • Acknowledgements
      • Nithin and Nikhil
      • Joe
  • 1. Fundamentals of Linear Algebra for Deep Learning
    • Data Structures and Operations
      • Matrix Operations
      • Vector Operations
      • Matrix-Vector Multiplication
    • The Fundamental Spaces
      • The Column Space
      • The Null Space
    • Eigenvectors and Eigenvalues
    • Summary
  • 2. Fundamentals of Probability
    • Events and Probability
    • Conditional Probability
    • Random Variables
    • Expectation
    • Variance
    • Bayes' Theorem
    • Entropy, Cross Entropy, and KL Divergence
    • Continuous Probability Distributions
    • Summary
  • 3. The Neural Network
    • Building Intelligent Machines
    • The Limits of Traditional Computer Programs
    • The Mechanics of Machine Learning
    • The Neuron
    • Expressing Linear Perceptrons as Neurons
    • Feed-Forward Neural Networks
    • Linear Neurons and Their Limitations
    • Sigmoid, Tanh, and ReLU Neurons
    • Softmax Output Layers
    • Summary
  • 4. Training Feed-Forward Neural Networks
    • The Fast-Food Problem
    • Gradient Descent
    • The Delta Rule and Learning Rates
    • Gradient Descent with Sigmoidal Neurons
    • The Backpropagation Algorithm
    • Stochastic and Minibatch Gradient Descent
    • Test Sets, Validation Sets, and Overfitting
    • Preventing Overfitting in Deep Neural Networks
    • Summary
  • 5. Implementing Neural Networks in PyTorch
    • Introduction to PyTorch
    • Installing PyTorch
    • PyTorch Tensors
      • Tensor Init
      • Tensor Attributes
      • Tensor Operations
    • Gradients in PyTorch
    • The PyTorch nn Module
    • PyTorch Datasets and Dataloaders
    • Building the MNIST Classifier in PyTorch
    • Summary
  • 6. Beyond Gradient Descent
    • The Challenges with Gradient Descent
    • Local Minima in the Error Surfaces of Deep Networks
    • Model Identifiability
    • How Pesky Are Spurious Local Minima in Deep Networks?
    • Flat Regions in the Error Surface
    • When the Gradient Points in the Wrong Direction
    • Momentum-Based Optimization
    • A Brief View of Second-Order Methods
    • Learning Rate Adaptation
      • AdaGrad: Accumulating Historical Gradients
      • RMSProp: Exponentially Weighted Moving Average of Gradients
      • Adam: Combining Momentum and RMSProp
    • The Philosophy Behind Optimizer Selection
    • Summary
  • 7. Convolutional Neural Networks
    • Neurons in Human Vision
    • The Shortcomings of Feature Selection
    • Vanilla Deep Neural Networks Don't Scale
    • Filters and Feature Maps
    • Full Description of the Convolutional Layer
    • Max Pooling
    • Full Architectural Description of Convolution Networks
    • Closing the Loop on MNIST with Convolutional Networks
    • Image Preprocessing Pipelines Enable More Robust Models
    • Accelerating Training with Batch Normalization
    • Group Normalization for Memory-Constrained Learning Tasks
    • Building a Convolutional Network for CIFAR-10
    • Visualizing Learning in Convolutional Networks
    • Residual Learning and Skip Connections for Very Deep Networks
    • Building a Residual Network with Superhuman Vision
    • Leveraging Convolutional Filters to Replicate Artistic Styles
    • Learning Convolutional Filters for Other Problem Domains
    • Summary
  • 8. Embedding and Representation Learning
    • Learning Lower-Dimensional Representations
    • Principal Component Analysis
    • Motivating the Autoencoder Architecture
    • Implementing an Autoencoder in PyTorch
    • Denoising to Force Robust Representations
    • Sparsity in Autoencoders
    • When Context Is More Informative than the Input Vector
    • The Word2Vec Framework
    • Implementing the Skip-Gram Architecture
    • Summary
  • 9. Models for Sequence Analysis
    • Analyzing Variable-Length Inputs
    • Tackling seq2seq with Neural N-Grams
    • Implementing a Part-of-Speech Tagger
    • Dependency Parsing and SyntaxNet
    • Beam Search and Global Normalization
    • A Case for Stateful Deep Learning Models
    • Recurrent Neural Networks
    • The Challenges with Vanishing Gradients
    • Long Short-Term Memory Units
    • PyTorch Primitives for RNN Models
    • Implementing a Sentiment Analysis Model
    • Solving seq2seq Tasks with Recurrent Neural Networks
    • Augmenting Recurrent Networks with Attention
    • Dissecting a Neural Translation Network
    • Self-Attention and Transformers
    • Summary
  • 10. Generative Models
    • Generative Adversarial Networks
    • Variational Autoencoders
    • Implementing a VAE
    • Score-Based Generative Models
    • Denoising Autoencoders and Score Matching
    • Summary
  • 11. Methods in Interpretability
    • Overview
    • Decision Trees and Tree-Based Algorithms
    • Linear Regression
    • Methods for Evaluating Feature Importance
      • Permutation Feature Importance
      • Partial Dependence Plots
    • Extractive Rationalization
    • LIME
    • SHAP
    • Summary
  • 12. Memory Augmented Neural Networks
    • Neural Turing Machines
    • Attention-Based Memory Access
    • NTM Memory Addressing Mechanisms
    • Differentiable Neural Computers
    • Interference-Free Writing in DNCs
    • DNC Memory Reuse
    • Temporal Linking of DNC Writes
    • Understanding the DNC Read Head
    • The DNC Controller Network
    • Visualizing the DNC in Action
    • Implementing the DNC in PyTorch
    • Teaching a DNC to Read and Comprehend
    • Summary
  • 13. Deep Reinforcement Learning
    • Deep Reinforcement Learning Masters Atari Games
    • What Is Reinforcement Learning?
    • Markov Decision Processes
      • Policy
      • Future Return
      • Discounted Future Return
    • Explore Versus Exploit
      • ε-Greedy
      • Annealed ε-Greedy
    • Policy Versus Value Learning
    • Pole-Cart with Policy Gradients
      • OpenAI Gym
      • Creating an Agent
      • Building the Model and Optimizer
      • Sampling Actions
      • Keeping Track of History
      • Policy Gradient Main Function
      • PGAgent Performance on Pole-Cart
    • Trust-Region Policy Optimization
    • Proximal Policy Optimization
    • Q-Learning and Deep Q-Networks
      • The Bellman Equation
      • Issues with Value Iteration
      • Approximating the Q-Function
      • Deep Q-Network
      • Training DQN
      • Learning Stability
      • Target Q-Network
      • Experience Replay
      • From Q-Function to Policy
      • DQN and the Markov Assumption
      • DQN's Solution to the Markov Assumption
      • Playing Breakout with DQN
      • Building Our Architecture
      • Stacking Frames
      • Setting Up Training Operations
      • Updating Our Target Q-Network
      • Implementing Experience Replay
      • DQN Main Loop
      • DQNAgent Results on Breakout
    • Improving and Moving Beyond DQN
      • Deep Recurrent Q-Networks
      • Asynchronous Advantage Actor-Critic Agent
      • UNsupervised REinforcement and Auxiliary Learning
    • Summary
  • Index

