TensorFlow for Deep Learning. From Linear Regression to Reinforcement Learning
ebook
Author: Bharath Ramsundar, Reza Bosagh Zadeh
ISBN: 978-14-919-8040-8
Pages: 256, Format: ebook
Publication date: 2018-03-01
Bookstore: Helion

Book price: 228,65 zł (previously: 265,87 zł)
You save: 14% (-37,22 zł)

Add to cart: TensorFlow for Deep Learning. From Linear Regression to Reinforcement Learning

Tags: Machine learning

Learn how to solve challenging machine learning problems with TensorFlow, Google’s revolutionary new software library for deep learning. If you have some background in basic linear algebra and calculus, this practical book introduces machine-learning fundamentals by showing you how to design systems capable of detecting objects in images, understanding text, analyzing video, and predicting the properties of potential medicines.

TensorFlow for Deep Learning teaches concepts through practical examples and helps you build knowledge of deep learning foundations from the ground up. It’s ideal for practicing developers with experience designing software systems, and useful for scientists and other professionals familiar with scripting but not necessarily with designing learning algorithms.

  • Learn TensorFlow fundamentals, including how to perform basic computation (see the short sketch after this list)
  • Build simple learning systems to understand their mathematical foundations
  • Dive into fully connected deep networks used in thousands of applications
  • Turn prototypes into high-quality models with hyperparameter optimization
  • Process images with convolutional neural networks
  • Handle natural language datasets with recurrent neural networks
  • Use reinforcement learning to solve games such as tic-tac-toe
  • Train deep networks with hardware including GPUs and tensor processing units
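
As a taste of the first bullet above, here is a minimal sketch (not taken from the book) of the kind of basic tensor computation the book starts with, written in the TensorFlow 1.x graph-and-session style that the table of contents reflects; the values and names are illustrative assumptions only.

    # Minimal sketch of TF 1.x-style basic computation (illustrative only).
    # On a current TensorFlow install the 1.x graph/session API lives under
    # tf.compat.v1; on original TensorFlow 1.x you would call tf directly.
    import tensorflow as tf

    tf1 = tf.compat.v1
    tf1.disable_eager_execution()

    # Build a small graph: add two constant tensors and multiply two matrices.
    a = tf1.constant([[1.0, 2.0], [3.0, 4.0]])
    b = tf1.constant([[5.0, 6.0], [7.0, 8.0]])
    sum_ab = tf1.add(a, b)      # elementwise addition
    prod_ab = tf1.matmul(a, b)  # matrix multiplication

    # Run the graph in a session and fetch both results.
    with tf1.Session() as sess:
        print(sess.run(sum_ab))
        print(sess.run(prod_ab))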

People who bought "TensorFlow for Deep Learning. From Linear Regression to Reinforcement Learning" also chose:

  • Data Science w Pythonie. Kurs video. Przetwarzanie i analiza danych
  • Matematyka w deep learningu. Co musisz wiedzieć
  • Dylemat sztucznej inteligencji. 7 zasad odpowiedzialnego tworzenia technologii
  • Eksploracja danych za pomocą
  • Podr

Table of Contents

TensorFlow for Deep Learning. From Linear Regression to Reinforcement Learning eBook -- Table of Contents

  • Preface
    • Conventions Used in This Book
    • Using Code Examples
    • O'Reilly Safari
    • How to Contact Us
    • Acknowledgments
  • 1. Introduction to Deep Learning
    • Machine Learning Eats Computer Science
    • Deep Learning Primitives
      • Fully Connected Layer
      • Convolutional Layer
      • Recurrent Neural Network Layers
      • Long Short-Term Memory Cells
    • Deep Learning Architectures
      • LeNet
      • AlexNet
      • ResNet
      • Neural Captioning Model
      • Google Neural Machine Translation
      • One-Shot Models
      • AlphaGo
      • Generative Adversarial Networks
      • Neural Turing Machines
    • Deep Learning Frameworks
      • Limitations of TensorFlow
    • Review
  • 2. Introduction to TensorFlow Primitives
    • Introducing Tensors
      • Scalars, Vectors, and Matrices
      • Matrix Mathematics
      • Tensors
      • Tensors in Physics
      • Mathematical Asides
    • Basic Computations in TensorFlow
      • Installing TensorFlow and Getting Started
      • Initializing Constant Tensors
      • Sampling Random Tensors
      • Tensor Addition and Scaling
      • Matrix Operations
      • Tensor Types
      • Tensor Shape Manipulations
      • Introduction to Broadcasting
    • Imperative and Declarative Programming
      • TensorFlow Graphs
      • TensorFlow Sessions
      • TensorFlow Variables
    • Review
  • 3. Linear and Logistic Regression with TensorFlow
    • Mathematical Review
      • Functions and Differentiability
      • Loss Functions
        • Classification and regression
        • L2 Loss
        • Probability distributions
        • Cross-entropy loss
      • Gradient Descent
      • Automatic Differentiation Systems
    • Learning with TensorFlow
      • Creating Toy Datasets
        • An (extremely) brief introduction to NumPy
        • Why are toy datasets important?
        • Adding noise with Gaussians
        • Toy regression datasets
        • Toy classification datasets
      • New TensorFlow Concepts
        • Placeholders
        • Feed dictionaries and Fetches
        • Name scopes
        • Optimizers
        • Taking gradients with TensorFlow
        • Summaries and file writers for TensorBoard
        • Training models with TensorFlow
    • Training Linear and Logistic Models in TensorFlow
      • Linear Regression in TensorFlow
        • Defining and training linear regression in TensorFlow
        • Visualizing linear regression models with TensorBoard
        • Metrics for evaluating regression models
      • Logistic Regression in TensorFlow
        • Visualizing logistic regression models with TensorBoard
        • Metrics for evaluating classification models
    • Review
  • 4. Fully Connected Deep Networks
    • What Is a Fully Connected Deep Network?
    • Neurons in Fully Connected Networks
      • Learning Fully Connected Networks with Backpropagation
      • Universal Convergence Theorem
      • Why Deep Networks?
    • Training Fully Connected Neural Networks
      • Learnable Representations
      • Activations
      • Fully Connected Networks Memorize
      • Regularization
        • Dropout
        • Early stopping
        • Weight regularization
      • Training Fully Connected Networks
        • Minibatching
        • Learning rates
    • Implementation in TensorFlow
      • Installing DeepChem
      • Tox21 Dataset
      • Accepting Minibatches of Placeholders
      • Implementing a Hidden Layer
      • Adding Dropout to a Hidden Layer
      • Implementing Minibatching
      • Evaluating Model Accuracy
      • Using TensorBoard to Track Model Convergence
    • Review
  • 5. Hyperparameter Optimization
    • Model Evaluation and Hyperparameter Optimization
    • Metrics, Metrics, Metrics
      • Binary Classification Metrics
      • Multiclass Classification Metrics
      • Regression Metrics
    • Hyperparameter Optimization Algorithms
      • Setting Up a Baseline
      • Graduate Student Descent
      • Grid Search
      • Random Hyperparameter Search
      • Challenge for the Reader
    • Review
  • 6. Convolutional Neural Networks
    • Introduction to Convolutional Architectures
      • Local Receptive Fields
      • Convolutional Kernels
      • Pooling Layers
      • Constructing Convolutional Networks
      • Dilated Convolutions
    • Applications of Convolutional Networks
      • Object Detection and Localization
      • Image Segmentation
      • Graph Convolutions
      • Generating Images with Variational Autoencoders
        • Adversarial models
    • Training a Convolutional Network in TensorFlow
      • The MNIST Dataset
      • Loading MNIST
      • TensorFlow Convolutional Primitives
      • The Convolutional Architecture
      • Evaluating Trained Models
      • Challenge for the Reader
    • Review
  • 7. Recurrent Neural Networks
    • Overview of Recurrent Architectures
    • Recurrent Cells
      • Long Short-Term Memory (LSTM)
      • Gated Recurrent Units (GRU)
    • Applications of Recurrent Models
      • Sampling from Recurrent Networks
      • Seq2seq Models
    • Neural Turing Machines
    • Working with Recurrent Neural Networks in Practice
    • Processing the Penn Treebank Corpus
      • Code for Preprocessing
      • Loading Data into TensorFlow
      • The Basic Recurrent Architecture
      • Challenge for the Reader
    • Review
  • 8. Reinforcement Learning
    • Markov Decision Processes
    • Reinforcement Learning Algorithms
      • Q-Learning
      • Policy Learning
      • Asynchronous Training
    • Limits of Reinforcement Learning
    • Playing Tic-Tac-Toe
      • Object Orientation
      • Abstract Environment
      • Tic-Tac-Toe Environment
      • The Layer Abstraction
      • Defining a Graph of Layers
    • The A3C Algorithm
      • The A3C Loss Function
      • Defining Workers
        • Worker rollouts
      • Training the Policy
      • Challenge for the Reader
    • Review
  • 9. Training Large Deep Networks
    • Custom Hardware for Deep Networks
    • CPU Training
      • GPU Training
      • Tensor Processing Units
      • Field Programmable Gate Arrays
      • Neuromorphic Chips
    • Distributed Deep Network Training
      • Data Parallelism
      • Model Parallelism
    • Data Parallel Training with Multiple GPUs on Cifar10
      • Downloading and Loading the Data
      • Deep Dive on the Architecture
      • Training on Multiple GPUs
      • Challenge for the Reader
    • Review
  • 10. The Future of Deep Learning
    • Deep Learning Outside the Tech Industry
      • Deep Learning in the Pharmaceutical Industry
      • Deep Learning in Law
      • Deep Learning for Robotics
      • Deep Learning in Agriculture
    • Using Deep Learning Ethically
    • Is Artificial General Intelligence Imminent?
    • Where to Go from Here?
  • Index
