

Generative Deep Learning. Teaching Machines to Paint, Write, Compose, and Play
ebook
Author: David Foster
ISBN: 978-14-920-4189-4
Pages: 330, Format: ebook
Publication date: 2019-06-28
Bookstore: Helion

Price: 211.65 zł (previously: 246.10 zł)
You save: 14% (-34.45 zł)

Add to cart: Generative Deep Learning. Teaching Machines to Paint, Write, Compose, and Play

Generative modeling is one of the hottest topics in AI. It's now possible to teach a machine to excel at human endeavors such as painting, writing, and composing music. With this practical book, machine-learning engineers and data scientists will discover how to re-create some of the most impressive examples of generative deep learning models, such as variational autoencoders, generative adversarial networks (GANs), encoder-decoder models, and world models.

Author David Foster demonstrates the inner workings of each technique, starting with the basics of deep learning before advancing to some of the most cutting-edge algorithms in the field. Through tips and tricks, you’ll understand how to make your models learn more efficiently and become more creative.

  • Discover how variational autoencoders can change facial expressions in photos
  • Build practical GAN examples from scratch, including CycleGAN for style transfer and MuseGAN for music generation
  • Create recurrent generative models for text generation and learn how to improve the models using attention
  • Understand how generative models can help agents to accomplish tasks within a reinforcement learning setting
  • Explore the architecture of the Transformer (BERT, GPT-2) and image generation models such as ProGAN and StyleGAN
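The book opens with probabilistic generative models before moving to deep networks. As a flavor of that starting point, here is a minimal sketch in the spirit of Chapter 1's Naive Bayes example (an illustrative reconstruction, not the book's actual code): it learns per-class feature probabilities from binary data, then *generates* brand-new samples from the learned distribution, which is the defining trait of a generative rather than discriminative model.

```python
import random

def fit_naive_bayes(X, y, smoothing=1.0):
    """Estimate P(class) and P(feature=1 | class) with Laplace smoothing."""
    classes = sorted(set(y))
    n_features = len(X[0])
    priors, likelihoods = {}, {}
    for c in classes:
        rows = [x for x, label in zip(X, y) if label == c]
        priors[c] = len(rows) / len(X)
        likelihoods[c] = [
            (sum(r[j] for r in rows) + smoothing) / (len(rows) + 2 * smoothing)
            for j in range(n_features)
        ]
    return priors, likelihoods

def sample(priors, likelihoods, rng=random):
    """Generate one new (features, class) pair from the learned distribution."""
    c = rng.choices(list(priors), weights=list(priors.values()))[0]
    x = [1 if rng.random() < p else 0 for p in likelihoods[c]]
    return x, c

# Toy data: two "styles" of 3-pixel binary images.
X = [[1, 1, 0], [1, 0, 0], [1, 1, 0], [0, 0, 1], [0, 1, 1], [0, 0, 1]]
y = ["dark", "dark", "dark", "light", "light", "light"]

priors, likelihoods = fit_naive_bayes(X, y)
new_x, new_c = sample(priors, likelihoods, random.Random(0))
print(new_c, new_x)
```

The deep models in later chapters (VAEs, GANs, Transformers) replace these hand-counted probability tables with learned neural networks, but the goal is the same: model the data distribution well enough to sample convincing new examples from it.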

Customers who bought "Generative Deep Learning. Teaching Machines to Paint, Write, Compose, and Play" also chose:

  • Windows Media Center. Domowe centrum rozrywki
  • Ruby on Rails. Ćwiczenia
  • DevOps w praktyce. Kurs video. Jenkins, Ansible, Terraform i Docker
  • Przywództwo w świecie VUCA. Jak być skutecznym liderem w niepewnym środowisku
  • Scrum. O zwinnym zarządzaniu projektami. Wydanie II rozszerzone

Table of contents

  • Preface
    • Objective and Approach
    • Prerequisites
    • Other Resources
    • Conventions Used in This Book
    • Using Code Examples
    • O'Reilly Online Learning
    • How to Contact Us
    • Acknowledgments
  • I. Introduction to Generative Deep Learning
  • 1. Generative Modeling
    • What Is Generative Modeling?
      • Generative Versus Discriminative Modeling
      • Advances in Machine Learning
      • The Rise of Generative Modeling
      • The Generative Modeling Framework
    • Probabilistic Generative Models
      • Hello Wrodl!
      • Your First Probabilistic Generative Model
      • Naive Bayes
      • Hello Wrodl! Continued
    • The Challenges of Generative Modeling
      • Representation Learning
    • Setting Up Your Environment
    • Summary
  • 2. Deep Learning
    • Structured and Unstructured Data
    • Deep Neural Networks
      • Keras and TensorFlow
    • Your First Deep Neural Network
      • Loading the Data
      • Building the Model
      • Compiling the Model
      • Training the Model
      • Evaluating the Model
    • Improving the Model
      • Convolutional Layers
      • Batch Normalization
      • Dropout Layers
      • Putting It All Together
    • Summary
  • 3. Variational Autoencoders
    • The Art Exhibition
    • Autoencoders
      • Your First Autoencoder
      • The Encoder
      • The Decoder
      • Joining the Encoder to the Decoder
      • Analysis of the Autoencoder
    • The Variational Art Exhibition
    • Building a Variational Autoencoder
      • The Encoder
      • The Loss Function
      • Analysis of the Variational Autoencoder
    • Using VAEs to Generate Faces
      • Training the VAE
      • Analysis of the VAE
      • Generating New Faces
      • Latent Space Arithmetic
      • Morphing Between Faces
    • Summary
  • 4. Generative Adversarial Networks
    • Ganimals
    • Introduction to GANs
    • Your First GAN
      • The Discriminator
      • The Generator
      • Training the GAN
    • GAN Challenges
      • Oscillating Loss
      • Mode Collapse
      • Uninformative Loss
      • Hyperparameters
      • Tackling the GAN Challenges
    • Wasserstein GAN
      • Wasserstein Loss
      • The Lipschitz Constraint
      • Weight Clipping
      • Training the WGAN
      • Analysis of the WGAN
    • WGAN-GP
      • The Gradient Penalty Loss
      • Analysis of WGAN-GP
    • Summary
  • II. Teaching Machines to Paint, Write, Compose, and Play
  • 5. Paint
    • Apples and Organges
    • CycleGAN
    • Your First CycleGAN
      • Overview
      • The Generators (U-Net)
      • The Discriminators
      • Compiling the CycleGAN
      • Training the CycleGAN
      • Analysis of the CycleGAN
    • Creating a CycleGAN to Paint Like Monet
      • The Generators (ResNet)
      • Analysis of the CycleGAN
    • Neural Style Transfer
      • Content Loss
      • Style Loss
      • Total Variance Loss
      • Running the Neural Style Transfer
      • Analysis of the Neural Style Transfer Model
    • Summary
  • 6. Write
    • The Literary Society for Troublesome Miscreants
    • Long Short-Term Memory Networks
    • Your First LSTM Network
      • Tokenization
      • Building the Dataset
      • The LSTM Architecture
      • The Embedding Layer
      • The LSTM Layer
      • The LSTM Cell
    • Generating New Text
    • RNN Extensions
      • Stacked Recurrent Networks
      • Gated Recurrent Units
      • Bidirectional Cells
    • Encoder-Decoder Models
    • A Question and Answer Generator
      • A Question-Answer Dataset
      • Model Architecture
      • Inference
      • Model Results
    • Summary
  • 7. Compose
    • Preliminaries
      • Musical Notation
    • Your First Music-Generating RNN
      • Attention
      • Building an Attention Mechanism in Keras
      • Analysis of the RNN with Attention
      • Attention in Encoder-Decoder Networks
      • Generating Polyphonic Music
    • The Musical Organ
    • Your First MuseGAN
    • The MuseGAN Generator
      • Chords, Style, Melody, and Groove
        • Chords
        • Style
        • Melody
        • Groove
      • The Bar Generator
      • Putting It All Together
    • The Critic
    • Analysis of the MuseGAN
    • Summary
  • 8. Play
    • Reinforcement Learning
      • OpenAI Gym
    • World Model Architecture
      • The Variational Autoencoder
      • The MDN-RNN
      • The Controller
    • Setup
    • Training Process Overview
    • Collecting Random Rollout Data
    • Training the VAE
      • The VAE Architecture
      • Exploring the VAE
        • The full model
        • The encoder models
        • The decoder model
    • Collecting Data to Train the RNN
    • Training the MDN-RNN
      • The MDN-RNN Architecture
      • Sampling the Next z and Reward from the MDN-RNN
      • The MDN-RNN Loss Function
    • Training the Controller
      • The Controller Architecture
      • CMA-ES
      • Parallelizing CMA-ES
      • Output from the Controller Training
    • In-Dream Training
      • In-Dream Training the Controller
      • Challenges of In-Dream Training
    • Summary
  • 9. The Future of Generative Modeling
    • Five Years of Progress
    • The Transformer
      • Positional Encoding
      • Multihead Attention
      • The Decoder
      • Analysis of the Transformer
      • BERT
      • GPT-2
      • MuseNet
    • Advances in Image Generation
      • ProGAN
      • Self-Attention GAN (SAGAN)
      • BigGAN
      • StyleGAN
    • Applications of Generative Modeling
      • AI Art
      • AI Music
  • 10. Conclusion
  • Index

Code, Publish & WebDesign by CATALIST.com.pl

(c) 2005-2024 CATALIST interactive agency; trademarks belong to the publisher Helion S.A.