Deep Learning Cookbook. Practical Recipes to Get Started Quickly - Helion
ISBN: 978-14-919-9579-2
Pages: 252, Format: ebook
Publication date: 2018-06-05
Bookstore: Helion
Book price: 186.15 zł (previously: 216.45 zł)
You save: 14% (-30.30 zł)
Deep learning doesn’t have to be intimidating. Until recently, this machine-learning method required years of study, but with frameworks such as Keras and TensorFlow, software engineers without a background in machine learning can quickly enter the field. With the recipes in this cookbook, you’ll learn how to solve deep-learning problems for classifying and generating text, images, and music.
Each chapter consists of several recipes needed to complete a single project, such as training a music recommendation system. Author Douwe Osinga also provides a chapter with half a dozen techniques to help you if you’re stuck. Examples are written in Python, with code available on GitHub as a set of Python notebooks.
You’ll learn how to:
- Create applications that will serve real users
- Use word embeddings to calculate text similarity
- Build a movie recommender system based on Wikipedia links
- Learn how AIs see the world by visualizing their internal state
- Build a model to suggest emojis for pieces of text
- Reuse pretrained networks to build an inverse image search service
- Compare how GANs, autoencoders and LSTMs generate icons
- Detect music styles and index song collections
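As a taste of the first of those recipes: calculating text similarity with word embeddings comes down to comparing vectors, usually via cosine similarity. Below is a minimal sketch using made-up toy vectors — the book itself uses real pretrained embeddings such as word2vec, which have hundreds of dimensions.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors (1.0 = same direction)."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 3-dimensional "embeddings" for illustration only; these values are invented.
embeddings = {
    "king":  [0.90, 0.80, 0.10],
    "queen": [0.85, 0.82, 0.15],
    "apple": [0.10, 0.20, 0.90],
}

print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # high: related words
print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # low: unrelated words
```

With real embeddings the same function ranks semantically related words closer together, which is the basis of the similarity recipes in Chapter 3.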
Customers who bought "Deep Learning Cookbook. Practical Recipes to Get Started Quickly" also chose:
- Windows Media Center. Domowe centrum rozrywki 66.67 zł (8.00 zł, -88%)
- Ruby on Rails. Ćwiczenia 18.75 zł (3.00 zł, -84%)
- Przywództwo w świecie VUCA. Jak być skutecznym liderem w niepewnym środowisku 58.64 zł (12.90 zł, -78%)
- Scrum. O zwinnym zarządzaniu projektami. Wydanie II rozszerzone 58.64 zł (12.90 zł, -78%)
- Od hierarchii do turkusu, czyli jak zarządzać w XXI wieku 58.64 zł (12.90 zł, -78%)
Table of Contents
- Preface
- A Brief History of Deep Learning
- Why Now?
- What Do You Need to Know?
- How This Book Is Structured
- Conventions Used in This Book
- Accompanying Code
- O’Reilly Safari
- How to Contact Us
- Acknowledgments
- 1. Tools and Techniques
- 1.1. Types of Neural Networks
- 1.2. Acquiring Data
- 1.3. Preprocessing Data
- 2. Getting Unstuck
- 2.1. Determining That You Are Stuck
- 2.2. Solving Runtime Errors
- 2.3. Checking Intermediate Results
- 2.4. Picking the Right Activation Function (for Your Final Layer)
- 2.5. Regularization and Dropout
- 2.6. Network Structure, Batch Size, and Learning Rate
- 3. Calculating Text Similarity Using Word Embeddings
- 3.1. Using Pretrained Word Embeddings to Find Word Similarity
- 3.2. Word2vec Math
- 3.3. Visualizing Word Embeddings
- 3.4. Finding Entity Classes in Embeddings
- 3.5. Calculating Semantic Distances Inside a Class
- 3.6. Visualizing Country Data on a Map
- 4. Building a Recommender System Based on Outgoing Wikipedia Links
- 4.1. Collecting the Data
- 4.2. Training Movie Embeddings
- 4.3. Building a Movie Recommender
- 4.4. Predicting Simple Movie Properties
- 5. Generating Text in the Style of an Example Text
- 5.1. Acquiring the Text of Public Domain Books
- 5.2. Generating Shakespeare-Like Texts
- 5.3. Writing Code Using RNNs
- 5.4. Controlling the Temperature of the Output
- 5.5. Visualizing Recurrent Network Activations
- 6. Question Matching
- 6.1. Acquiring Data from Stack Exchange
- 6.2. Exploring Data Using Pandas
- 6.3. Using Keras to Featurize Text
- 6.4. Building a Question/Answer Model
- 6.5. Training a Model with Pandas
- 6.6. Checking Similarities
- 7. Suggesting Emojis
- 7.1. Building a Simple Sentiment Classifier
- 7.2. Inspecting a Simple Classifier
- 7.3. Using a Convolutional Network for Sentiment Analysis
- 7.4. Collecting Twitter Data
- 7.5. A Simple Emoji Predictor
- 7.6. Dropout and Multiple Windows
- 7.7. Building a Word-Level Model
- 7.8. Constructing Your Own Embeddings
- 7.9. Using a Recurrent Neural Network for Classification
- 7.10. Visualizing (Dis)Agreement
- 7.11. Combining Models
- 8. Sequence-to-Sequence Mapping
- 8.1. Training a Simple Sequence-to-Sequence Model
- 8.2. Extracting Dialogue from Texts
- 8.3. Handling an Open Vocabulary
- 8.4. Training a seq2seq Chatbot
- 9. Reusing a Pretrained Image Recognition Network
- 9.1. Loading a Pretrained Network
- 9.2. Preprocessing Images
- 9.3. Running Inference on Images
- 9.4. Using the Flickr API to Collect a Set of Labeled Images
- 9.5. Building a Classifier That Can Tell Cats from Dogs
- 9.6. Improving Search Results
- 9.7. Retraining Image Recognition Networks
- 10. Building an Inverse Image Search Service
- 10.1. Acquiring Images from Wikipedia
- 10.2. Projecting Images into an N-Dimensional Space
- 10.3. Finding Nearest Neighbors in High-Dimensional Spaces
- 10.4. Exploring Local Neighborhoods in Embeddings
- 11. Detecting Multiple Images
- 11.1. Detecting Multiple Images Using a Pretrained Classifier
- 11.2. Using Faster RCNN for Object Detection
- 11.3. Running Faster RCNN over Our Own Images
- 12. Image Style
- 12.1. Visualizing CNN Activations
- 12.2. Octaves and Scaling
- 12.3. Visualizing What a Neural Network Almost Sees
- 12.4. Capturing the Style of an Image
- 12.5. Improving the Loss Function to Increase Image Coherence
- 12.6. Transferring the Style to a Different Image
- 12.7. Style Interpolation
- 13. Generating Images with Autoencoders
- 13.1. Importing Drawings from Google Quick Draw
- 13.2. Creating an Autoencoder for Images
- 13.3. Visualizing Autoencoder Results
- 13.4. Sampling Images from a Correct Distribution
- 13.5. Visualizing a Variational Autoencoder Space
- 13.6. Conditional Variational Autoencoders
- 14. Generating Icons Using Deep Nets
- 14.1. Acquiring Icons for Training
- 14.2. Converting the Icons to a Tensor Representation
- 14.3. Using a Variational Autoencoder to Generate Icons
- 14.4. Using Data Augmentation to Improve the Autoencoder’s Performance
- 14.5. Building a Generative Adversarial Network
- 14.6. Training Generative Adversarial Networks
- 14.7. Showing the Icons the GAN Produces
- 14.8. Encoding Icons as Drawing Instructions
- 14.9. Training an RNN to Draw Icons
- 14.10. Generating Icons Using an RNN
- 15. Music and Deep Learning
- 15.1. Creating a Training Set for Music Classification
- 15.2. Training a Music Genre Detector
- 15.3. Visualizing Confusion
- 15.4. Indexing Existing Music
- 15.5. Setting Up Spotify API Access
- 15.6. Collecting Playlists and Songs from Spotify
- 15.7. Training a Music Recommender
- 15.8. Recommending Songs Using a Word2vec Model
- 16. Productionizing Machine Learning Systems
- 16.1. Using Scikit-Learn’s Nearest Neighbors for Embeddings
- 16.2. Using Postgres to Store Embeddings
- 16.3. Populating and Querying Embeddings Stored in Postgres
- 16.4. Storing High-Dimensional Models in Postgres
- 16.5. Writing Microservices in Python
- 16.6. Deploying a Keras Model Using a Microservice
- 16.7. Calling a Microservice from a Web Framework
- 16.8. TensorFlow seq2seq models
- 16.9. Running Deep Learning Models in the Browser
- 16.10. Running a Keras Model Using TensorFlow Serving
- 16.11. Using a Keras Model from iOS
- Index