Feature Engineering for Machine Learning. Principles and Techniques for Data Scientists
Authors: Alice Zheng, Amanda Casari
ISBN: 978-14-919-5319-8
Pages: 218, Format: ebook
Publication date: 2018-03-23
Bookstore: Helion

Price: 194.65 zł (previously: 226.34 zł)
You save: 14% (-31.69 zł)

Add to cart: Feature Engineering for Machine Learning. Principles and Techniques for Data Scientists

Tags: Data analysis

Feature engineering is a crucial step in the machine-learning pipeline, yet this topic is rarely examined on its own. With this practical book, you’ll learn techniques for extracting and transforming features—the numeric representations of raw data—into formats for machine-learning models. Each chapter guides you through a single data problem, such as how to represent text or image data. Together, these examples illustrate the main principles of feature engineering.

Rather than simply teach these principles, authors Alice Zheng and Amanda Casari focus on practical application, with exercises throughout the book. The closing chapter brings everything together by tackling a real-world, structured dataset with several feature-engineering techniques. The code examples use Python packages including NumPy, Pandas, scikit-learn, and Matplotlib (a brief illustrative sketch follows the topic list below).

You’ll examine:

  • Feature engineering for numeric data: filtering, binning, scaling, log transforms, and power transforms
  • Natural text techniques: bag-of-words, n-grams, and phrase detection
  • Frequency-based filtering and feature scaling for eliminating uninformative features
  • Encoding techniques for categorical variables, including feature hashing and bin-counting
  • Model-based feature engineering with principal component analysis
  • The concept of model stacking, using k-means as a featurization technique
  • Image feature extraction with manual and deep-learning techniques
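
To give a flavor of these topics, here is a minimal, self-contained Python sketch (illustrative only, not code from the book) combining a few of the listed techniques: quantile binning, a log transform, and min-max scaling for numeric data, plus a bag-of-n-grams count matrix for text. The dataset and column names are invented for the example.

    # Illustrative sketch, not from the book: quantile binning, log
    # transform, min-max scaling, and bag-of-n-grams on synthetic data.
    import numpy as np
    import pandas as pd
    from sklearn.preprocessing import MinMaxScaler
    from sklearn.feature_extraction.text import CountVectorizer

    rng = np.random.default_rng(0)
    # Synthetic heavy-tailed counts ("review_count" is an invented name).
    df = pd.DataFrame({"review_count": rng.poisson(lam=30, size=1000) ** 2})

    # Quantile binning: four equal-frequency bins, robust to the long tail.
    df["count_bin"] = pd.qcut(df["review_count"], q=4, labels=False)

    # Log transform: log1p compresses the tail and is defined at zero.
    df["log_count"] = np.log1p(df["review_count"])

    # Min-max scaling: map the transformed feature into [0, 1].
    df["scaled"] = MinMaxScaler().fit_transform(df[["log_count"]]).ravel()

    # Bag-of-words extended to bigrams (bag-of-n-grams) for short texts.
    docs = ["the quick brown fox", "the lazy dog", "quick quick brown fox"]
    bow = CountVectorizer(ngram_range=(1, 2)).fit_transform(docs)

    print(df.head())
    print(bow.toarray())

Each step corresponds to one bullet above; the book develops the same ideas in far more depth, with exercises.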


Customers who bought "Feature Engineering for Machine Learning. Principles and Techniques for Data Scientists" also chose:

  • Data Science w Pythonie. Kurs video. Algorytmy uczenia maszynowego
  • Power BI Desktop. Kurs video. Wykorzystanie narzędzia w analizie i wizualizacji danych
  • Statystyka. Kurs video. Przewodnik dla studenta
  • Microsoft Excel. Kurs video. Wykresy i wizualizacja danych
  • Analiza danych w Tableau. Kurs video. Podstawy pracy analityka


Table of contents

Feature Engineering for Machine Learning. Principles and Techniques for Data Scientists eBook -- table of contents

  • Preface
    • Introduction
    • Conventions Used in This Book
    • Using Code Examples
    • O'Reilly Safari
    • How to Contact Us
    • Acknowledgments
      • Special Thanks from Alice
      • Special Thanks from Amanda
  • 1. The Machine Learning Pipeline
    • Data
    • Tasks
    • Models
    • Features
    • Model Evaluation
  • 2. Fancy Tricks with Simple Numbers
    • Scalars, Vectors, and Spaces
    • Dealing with Counts
      • Binarization
      • Quantization or Binning
        • Fixed-width binning
        • Quantile binning
    • Log Transformation
      • Log Transform in Action
      • Power Transforms: Generalization of the Log Transform
    • Feature Scaling or Normalization
      • Min-Max Scaling
      • Standardization (Variance Scaling)
      • ℓ2 Normalization
    • Interaction Features
    • Feature Selection
    • Summary
    • Bibliography
  • 3. Text Data: Flattening, Filtering, and Chunking
    • Bag-of-X: Turning Natural Text into Flat Vectors
      • Bag-of-Words
      • Bag-of-n-Grams
    • Filtering for Cleaner Features
      • Stopwords
      • Frequency-Based Filtering
        • Frequent words
        • Rare words
      • Stemming
    • Atoms of Meaning: From Words to n-Grams to Phrases
      • Parsing and Tokenization
      • Collocation Extraction for Phrase Detection
        • Frequency-based methods
        • Hypothesis testing for collocation extraction
        • Chunking and part-of-speech tagging
    • Summary
    • Bibliography
  • 4. The Effects of Feature Scaling: From Bag-of-Words to Tf-Idf
    • Tf-Idf: A Simple Twist on Bag-of-Words
    • Putting It to the Test
      • Creating a Classification Dataset
      • Scaling Bag-of-Words with Tf-Idf Transformation
      • Classification with Logistic Regression
      • Tuning Logistic Regression with Regularization
    • Deep Dive: What Is Happening?
    • Summary
    • Bibliography
  • 5. Categorical Variables: Counting Eggs in the Age of Robotic Chickens
    • Encoding Categorical Variables
      • One-Hot Encoding
      • Dummy Coding
      • Effect Coding
      • Pros and Cons of Categorical Variable Encodings
    • Dealing with Large Categorical Variables
      • Feature Hashing
      • Bin Counting
        • What about rare categories?
        • Guarding against data leakage
        • Counts without bounds
    • Summary
    • Bibliography
  • 6. Dimensionality Reduction: Squashing the Data Pancake with PCA
    • Intuition
    • Derivation
      • Linear Projection
      • Variance and Empirical Variance
      • Principal Components: First Formulation
      • Principal Components: Matrix-Vector Formulation
      • General Solution of the Principal Components
      • Transforming Features
      • Implementing PCA
    • PCA in Action
    • Whitening and ZCA
    • Considerations and Limitations of PCA
    • Use Cases
    • Summary
    • Bibliography
  • 7. Nonlinear Featurization via K-Means Model Stacking
    • k-Means Clustering
    • Clustering as Surface Tiling
    • k-Means Featurization for Classification
      • Alternative Dense Featurization
    • Pros, Cons, and Gotchas
    • Summary
    • Bibliography
  • 8. Automating the Featurizer: Image Feature Extraction and Deep Learning
    • The Simplest Image Features (and Why They Don't Work)
    • Manual Feature Extraction: SIFT and HOG
      • Image Gradients
      • Gradient Orientation Histograms
        • How many bins should there be? Should they span 0°–360° (signed gradients) or 0°–180° (unsigned gradients)?
        • What weight functions should be used?
        • How are neighborhoods defined? How should they cover the image?
        • What kind of normalization should be done?
      • SIFT Architecture
    • Learning Image Features with Deep Neural Networks
      • Fully Connected Layers
      • Convolutional Layers
      • Rectified Linear Unit (ReLU) Transformation
      • Response Normalization Layers
      • Pooling Layers
      • Structure of AlexNet
    • Summary
    • Bibliography
  • 9. Back to the Feature: Building an Academic Paper Recommender
    • Item-Based Collaborative Filtering
    • First Pass: Data Import, Cleaning, and Feature Parsing
      • Academic Paper Recommender: Naive Approach
    • Second Pass: More Engineering and a Smarter Model
      • Academic Paper Recommender: Take 2
    • Third Pass: More Features = More Information
      • Academic Paper Recommender: Take 3
    • Summary
    • Bibliography
  • A. Linear Modeling and Linear Algebra Basics
    • Overview of Linear Classification
    • The Anatomy of a Matrix
      • From Vectors to Subspaces
      • Singular Value Decomposition (SVD)
      • The Four Fundamental Subspaces of the Data Matrix
        • Column space
        • Row space
        • Null space
        • Left null space
    • Solving a Linear System
    • Bibliography
  • Index

