

TensorFlow 2 Pocket Reference
ebook
Author: KC Tung
ISBN: 9781492089131
Pages: 256, Format: ebook
Publication date: 2021-07-19
Bookstore: Helion

Price: 80,74 zł (previously: 94,99 zł)
You save: 15% (-14,25 zł)

Add to cart: TensorFlow 2 Pocket Reference

This easy-to-use reference for TensorFlow 2 design patterns in Python will help you make informed decisions for various use cases. Author KC Tung addresses common topics and tasks in enterprise data science and machine learning practices rather than focusing on TensorFlow itself.

When and why would you feed training data as a NumPy array or as a streaming dataset? How would you set up cross-validation in the training process? How do you leverage a pretrained model using transfer learning? How do you perform hyperparameter tuning? Pick up this pocket reference and reduce the time you spend searching through options for your TensorFlow use cases.
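
As a flavor of the first two questions, here is a minimal sketch (not taken from the book; the data, model, and parameters are illustrative assumptions) contrasting an in-memory NumPy array with a streaming tf.data.Dataset, and holding out a validation split during training:

    # Minimal sketch: feeding training data as an in-memory NumPy array
    # versus as a streaming tf.data.Dataset. All names and shapes are illustrative.
    import numpy as np
    import tensorflow as tf

    x = np.random.rand(1000, 20).astype("float32")   # 1,000 samples, 20 features
    y = np.random.randint(0, 2, size=(1000,))        # binary labels

    model = tf.keras.Sequential([
        tf.keras.layers.Dense(16, activation="relu", input_shape=(20,)),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

    # Option 1: data fits in memory, so pass NumPy arrays directly;
    # validation_split holds out 20% of the samples for validation.
    model.fit(x, y, epochs=3, batch_size=32, validation_split=0.2)

    # Option 2: larger or file-backed data is streamed through tf.data,
    # which shuffles, batches, and prefetches without loading everything at once.
    dataset = (tf.data.Dataset.from_tensor_slices((x, y))
               .shuffle(1000)
               .batch(32)
               .prefetch(tf.data.AUTOTUNE))
    model.fit(dataset, epochs=3)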

  • Understand best practices in TensorFlow model patterns and ML workflows
  • Use code snippets as templates in building TensorFlow models and workflows
  • Save development time by integrating prebuilt models in TensorFlow Hub (a minimal sketch follows this list)
  • Make informed design choices about data ingestion, training paradigms, model saving, and inferencing
  • Address common scenarios such as model design style, data ingestion workflow, model training, and tuning
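
The TensorFlow Hub bullet above is the transfer-learning pattern the book covers in depth. A hedged sketch of the idea, assuming an image-classification task (the model handle, input shape, and class count below are illustrative, not taken from the book):

    # Minimal sketch: reuse a prebuilt TensorFlow Hub image model as a
    # frozen feature extractor and add a small classification head.
    import tensorflow as tf
    import tensorflow_hub as hub

    feature_extractor = hub.KerasLayer(
        "https://tfhub.dev/google/imagenet/mobilenet_v2_100_224/feature_vector/4",
        input_shape=(224, 224, 3),
        trainable=False,          # keep the pretrained weights frozen
    )

    model = tf.keras.Sequential([
        feature_extractor,
        tf.keras.layers.Dense(5, activation="softmax"),  # e.g. 5 target classes
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.summary()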


 

Customers who bought "TensorFlow 2 Pocket Reference" also chose:

  • Windows Media Center. Domowe centrum rozrywki
  • Ruby on Rails. Ćwiczenia
  • DevOps w praktyce. Kurs video. Jenkins, Ansible, Terraform i Docker
  • Przywództwo w świecie VUCA. Jak być skutecznym liderem w niepewnym środowisku
  • Scrum. O zwinnym zarządzaniu projektami. Wydanie II rozszerzone


Table of contents

TensorFlow 2 Pocket Reference eBook -- table of contents

  • Preface
    • Conventions Used in This Book
    • Using Code Examples
    • O'Reilly Online Learning
    • How to Contact Us
    • Acknowledgments
  • 1. Introduction to TensorFlow 2
    • Improvements in TensorFlow 2
      • Keras API
      • Reusable Models in TensorFlow
    • Making Commonly Used Operations Easy
      • Open Source Data
      • Working with Distributed Datasets
      • Data Streaming
      • Data Engineering
      • Transfer Learning
      • Model Styles
      • Monitoring the Training Process
      • Distributed Training
      • Serving Your TensorFlow Model
      • Improving the Training Experience
    • Wrapping Up
  • 2. Data Storage and Ingestion
    • Streaming Data with Python Generators
    • Streaming File Content with a Generator
    • JSON Data Structures
    • Setting Up a Pattern for Filenames
    • Splitting a Single CSV File into Multiple CSV Files
    • Creating a File Pattern Object Using tf.io
    • Creating a Streaming Dataset Object
    • Streaming a CSV Dataset
    • Organizing Image Data
    • Using TensorFlow Image Generator
    • Streaming Cross-Validation Images
    • Inspecting Resized Images
    • Wrapping Up
  • 3. Data Preprocessing
    • Preparing Tabular Data for Training
      • Marking Columns
      • Encoding Column Interactions as Possible Features
      • Creating a Cross-Validation Dataset
      • Starting the Model Training Process
      • Summary
    • Preparing Image Data for Processing
      • Transforming Images to a Fixed Specification
      • Training the Model
      • Summary
    • Preparing Text Data for Processing
      • Tokenizing Text
      • Creating a Dictionary and Reverse Dictionary
    • Wrapping Up
  • 4. Reusable Model Elements
    • The Basic TensorFlow Hub Workflow
    • Image Classification by Transfer Learning
      • Model Requirements
      • Data Transformation and Input Processing
      • Model Implementation with TensorFlow Hub
      • Defining the Output
      • Mapping Output to Plain-Text Format
      • Evaluation: Creating a Confusion Matrix
      • Summary
    • Using the tf.keras.applications Module for Pretrained Models
      • Model Implementation with tf.keras.applications
      • Fine-Tuning Models from tf.keras.applications
    • Wrapping Up
  • 5. Data Pipelines for Streaming Ingestion
    • Streaming Text Files with the text_dataset_from_directory Function
      • Downloading Text Data and Setting Up Directories
      • Creating the Data Pipeline
      • Inspecting the Dataset
      • Summary
    • Streaming Images with a File List Using the flow_from_dataframe Method
      • Downloading Images and Setting Up Directories
      • Creating the Data Ingestion Pipeline
      • Inspecting the Dataset
      • Building and Training the tf.keras Model
    • Streaming a NumPy Array with the from_tensor_slices Method
      • Loading Example Data and Libraries
      • Inspecting the NumPy Array
      • Building the Input Pipeline for NumPy Data
    • Wrapping Up
  • 6. Model Creation Styles
    • Using the Symbolic API
      • Loading the CIFAR-10 Images
      • Inspecting Label Distribution
      • Inspecting Images
      • Building a Data Pipeline
      • Batching the Dataset for Training
      • Building the Model
    • Understanding Inheritance
    • Using the Imperative API
      • Defining a Model as a Class
    • Choosing the API
    • Using the Built-In Training Loop
    • Creating and Using a Custom Training Loop
      • Creating the Elements of the Loop
      • Putting the Elements Together in a Custom Training Loop
    • Wrapping Up
  • 7. Monitoring the Training Process
    • Callback Objects
      • ModelCheckpoint
      • EarlyStopping
      • Summary
    • TensorBoard
      • Invoking TensorBoard by Local Jupyter Notebook
      • Invoking TensorBoard by Local Command Terminal
      • Invoking TensorBoard by Colab Notebook
      • Visualizing Model Overfitting Using TensorBoard
      • Visualizing the Learning Process Using TensorBoard
    • Wrapping Up
  • 8. Distributed Training
    • Data Parallelism
      • Asynchronous Parameter Server
      • Synchronous Allreduce
    • Using the Class tf.distribute.MirroredStrategy
      • Setting Up Distributed Training
      • Using a GPU Cluster with tf.distribute.MirroredStrategy
      • Summary
    • The Horovod API
      • Code Pattern for Implementing the Horovod API
      • Encapsulating the Model Architecture
      • Encapsulating the Data Separation and Sharding Processes
      • Parameter Synchronization Among Workers
      • Model Checkpoint as a Callback
      • Distributed Optimizer for Gradient Aggregation
      • Distributed Training Using the Horovod API
    • Wrapping Up
  • 9. Serving TensorFlow Models
    • Model Serialization
      • Saving a Model to h5 Format
      • Saving a Model to pb Format
      • Selecting the Model Format
    • TensorFlow Serving
      • Running TensorFlow Serving with a Docker Image
        • Scoring Test Data with TensorFlow Serving
    • Wrapping Up
  • 10. Improving the Modeling Experience: Fairness Evaluation and Hyperparameter Tuning
    • Model Fairness
      • Model Training and Scoring
      • Fairness Evaluation
      • Rendering Fairness Indicators
    • Hyperparameter Tuning
      • Integer Lists as Hyperparameters
      • Item Choice as Hyperparameters
      • Floating-Point Values as Hyperparameters
    • End-to-End Hyperparameter Tuning
      • Import Libraries and Load Data
    • Wrapping Up
  • Index




