Programming Collective Intelligence. Building Smart Web 2.0 Applications
ISBN: 978-05-965-5068-4
Pages: 362, Format: ebook
Publication date: 2007-08-16
Bookstore: Helion
Price: 118.15 zł (previously: 137.38 zł)
You save: 14% (19.23 zł)
Want to tap the power behind search rankings, product recommendations, social bookmarking, and online matchmaking? This fascinating book demonstrates how you can build Web 2.0 applications to mine the enormous amount of data created by people on the Internet. With the sophisticated algorithms in this book, you can write smart programs to access interesting datasets from other web sites, collect data from users of your own applications, and analyze and understand the data once you've found it.
Programming Collective Intelligence takes you into the world of machine learning and statistics, and explains how to draw conclusions about user experience, marketing, personal tastes, and human behavior in general -- all from information that you and others collect every day. Each algorithm is described clearly and concisely with code that can immediately be used on your web site, blog, Wiki, or specialized application. This book explains:
- Collaborative filtering techniques that enable online retailers to recommend products or media (a short similarity-score sketch follows this list)
- Methods of clustering to detect groups of similar items in a large dataset
- Search engine features -- crawlers, indexers, query engines, and the PageRank algorithm (a toy PageRank sketch follows this list)
- Optimization algorithms that search millions of possible solutions to a problem and choose the best one
- Bayesian filtering, used in spam filters for classifying documents based on word types and other features (a minimal classifier sketch follows this list)
- Using decision trees not only to make predictions, but to model the way decisions are made
- Predicting numerical values rather than classifications to build price models
- Support vector machines to match people in online dating sites
- Non-negative matrix factorization to find the independent features in a dataset
- Evolving intelligence for problem solving -- how a computer develops its skill by improving its own code the more it plays a game
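To give a flavor of the collaborative filtering material in Chapter 2, here is a minimal sketch of a Euclidean-distance similarity score between two users, in the spirit of the formula listed in Appendix B. The ratings dictionary and names below are illustrative assumptions, not data or code from the book.

    from math import sqrt

    # Hypothetical sample ratings: user -> {item: score}
    prefs = {
        'Alice': {'Snakes on a Plane': 4.5, 'Superman Returns': 4.0, 'You, Me and Dupree': 1.0},
        'Bob':   {'Snakes on a Plane': 4.0, 'Superman Returns': 5.0, 'You, Me and Dupree': 2.0},
        'Carol': {'Superman Returns': 2.0, 'You, Me and Dupree': 3.5},
    }

    def sim_distance(prefs, person1, person2):
        """Similarity between two users based on Euclidean distance (0..1)."""
        shared = [item for item in prefs[person1] if item in prefs[person2]]
        if not shared:
            return 0.0
        # Euclidean distance over the items both users rated
        distance = sqrt(sum(
            (prefs[person1][item] - prefs[person2][item]) ** 2 for item in shared
        ))
        # Invert so identical tastes give 1 and larger distances approach 0
        return 1 / (1 + distance)

    print(sim_distance(prefs, 'Alice', 'Bob'))    # relatively high: similar tastes
    print(sim_distance(prefs, 'Alice', 'Carol'))  # lower: the shared ratings disagree

The recommendation chapter builds on scores like this by weighting every other user's ratings by their similarity to produce ranked suggestions.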
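The search chapter culminates in the PageRank algorithm. As a rough, self-contained illustration (not the book's own search-engine code), the iterative update can be sketched like this; the tiny link graph is made up for the example.

    def pagerank(links, damping=0.85, iterations=20):
        """Iteratively estimate PageRank for a small link graph.
        `links` maps each page to the list of pages it links to."""
        pages = set(links) | {p for targets in links.values() for p in targets}
        ranks = {page: 1.0 for page in pages}
        for _ in range(iterations):
            new_ranks = {}
            for page in pages:
                # Rank flowing in from every page that links here,
                # split evenly among that page's outbound links
                incoming = sum(ranks[src] / len(targets)
                               for src, targets in links.items() if page in targets)
                new_ranks[page] = (1 - damping) + damping * incoming
            ranks = new_ranks
        return ranks

    links = {'A': ['B', 'C'], 'B': ['C'], 'C': ['A'], 'D': ['C']}
    for page, rank in sorted(pagerank(links).items()):
        print(page, round(rank, 3))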
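For the document-filtering chapter, the core idea of a naive Bayesian classifier can be sketched in a few lines: count how often each word appears in each category, smooth the counts with an assumed starting probability, and pick the category with the best combined score. The class name, smoothing weight, and toy training data here are all illustrative assumptions, not the book's own classifier module.

    from collections import defaultdict
    import math
    import re

    class NaiveWordClassifier:
        def __init__(self):
            self.word_counts = defaultdict(lambda: defaultdict(int))  # category -> word -> doc count
            self.doc_counts = defaultdict(int)                        # category -> training docs

        def features(self, text):
            # Treat each distinct lowercase word as a feature
            return set(re.findall(r'[a-z]+', text.lower()))

        def train(self, text, category):
            self.doc_counts[category] += 1
            for word in self.features(text):
                self.word_counts[category][word] += 1

        def weighted_prob(self, word, category, weight=1.0, assumed=0.5):
            # Start from an assumed probability of 0.5 and move toward the
            # observed frequency as evidence for the word accumulates
            basic = self.word_counts[category][word] / self.doc_counts[category]
            total = sum(self.word_counts[c][word] for c in self.doc_counts)
            return (weight * assumed + total * basic) / (weight + total)

        def classify(self, text):
            total_docs = sum(self.doc_counts.values())
            best, best_score = None, float('-inf')
            for category in self.doc_counts:
                # log P(category) plus the log probability of each word given the category
                score = math.log(self.doc_counts[category] / total_docs)
                for word in self.features(text):
                    score += math.log(self.weighted_prob(word, category))
                if score > best_score:
                    best, best_score = category, score
            return best

    clf = NaiveWordClassifier()
    clf.train('cheap pharmacy offer, buy now', 'spam')
    clf.train('limited time casino offer', 'spam')
    clf.train('lunch meeting moved to tomorrow', 'ham')
    clf.train('project status report attached', 'ham')
    print(clf.classify('cheap casino offer'))  # expected: 'spam'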
"Bravo! I cannot think of a better way for a developer to first learn these algorithms and methods, nor can I think of a better way for me (an old AI dog) to reinvigorate my knowledge of the details."
-- Dan Russell, Google
"Toby's book does a great job of breaking down the complex subject matter of machine-learning algorithms into practical, easy-to-understand examples that can be directly applied to analysis of social interaction across the Web today. If I had this book two years ago, it would have saved precious time going down some fruitless paths."
-- Tim Wolters, CTO, Collective Intellect
Customers who bought "Programming Collective Intelligence. Building Smart Web 2.0 Applications" also chose:
- Zosta -- 149.00 zł (44.70 zł, -70%)
- Metoda dziel i zwyciężaj -- 89.00 zł (26.70 zł, -70%)
- Matematyka. Kurs video. Teoria dla programisty i data science -- 399.00 zł (119.70 zł, -70%)
- Design Thinking. Kurs video. My -- 129.00 zł (38.70 zł, -70%)
- Konwolucyjne sieci neuronowe. Kurs video. Tensorflow i Keras w rozpoznawaniu obrazów -- 149.00 zł (44.70 zł, -70%)
Table of Contents
- Programming Collective Intelligence
- A Note Regarding Supplemental Files
- Praise for Programming Collective Intelligence
- Preface
- Prerequisites
- Style of Examples
- Why Python?
- Python Tips
- List and dictionary constructors
- Significant Whitespace
- List comprehensions
- Open APIs
- Overview of the Chapters
- Conventions
- Using Code Examples
- How to Contact Us
- Safari Books Online
- Acknowledgments
- 1. Introduction to Collective Intelligence
- What Is Collective Intelligence?
- What Is Machine Learning?
- Limits of Machine Learning
- Real-Life Examples
- Other Uses for Learning Algorithms
- 2. Making Recommendations
- Collaborative Filtering
- Collecting Preferences
- Finding Similar Users
- Euclidean Distance Score
- Pearson Correlation Score
- Which Similarity Metric Should You Use?
- Ranking the Critics
- Recommending Items
- Matching Products
- Building a del.icio.us Link Recommender
- The del.icio.us API
- Building the Dataset
- Recommending Neighbors and Links
- Item-Based Filtering
- Building the Item Comparison Dataset
- Getting Recommendations
- Using the MovieLens Dataset
- User-Based or Item-Based Filtering?
- Exercises
- 3. Discovering Groups
- Supervised versus Unsupervised Learning
- Word Vectors
- Pigeonholing the Bloggers
- Counting the Words in a Feed
- Hierarchical Clustering
- Drawing the Dendrogram
- Column Clustering
- K-Means Clustering
- Clusters of Preferences
- Getting and Preparing the Data
- Beautiful Soup
- Scraping the Zebo Results
- Defining a Distance Metric
- Clustering Results
- Viewing Data in Two Dimensions
- Other Things to Cluster
- Exercises
- 4. Searching and Ranking
- What's in a Search Engine?
- A Simple Crawler
- Using urllib2
- Crawler Code
- Building the Index
- Setting Up the Schema
- Finding the Words on a Page
- Adding to the Index
- Querying
- Content-Based Ranking
- Normalization Function
- Word Frequency
- Document Location
- Word Distance
- Using Inbound Links
- Simple Count
- The PageRank Algorithm
- Using the Link Text
- Learning from Clicks
- Design of a Click-Tracking Network
- Setting Up the Database
- Feeding Forward
- Training with Backpropagation
- Training Test
- Connecting to the Search Engine
- Exercises
- 5. Optimization
- Group Travel
- Representing Solutions
- The Cost Function
- Random Searching
- Hill Climbing
- Simulated Annealing
- Genetic Algorithms
- Real Flight Searches
- The Kayak API
- The minidom Package
- Flight Searches
- Optimizing for Preferences
- Student Dorm Optimization
- The Cost Function
- Running the Optimization
- Network Visualization
- The Layout Problem
- Counting Crossed Lines
- Drawing the Network
- Other Possibilities
- Exercises
- 6. Document Filtering
- Filtering Spam
- Documents and Words
- Training the Classifier
- Calculating Probabilities
- Starting with a Reasonable Guess
- A Naïve Classifier
- Probability of a Whole Document
- A Quick Introduction to Bayes' Theorem
- Choosing a Category
- The Fisher Method
- Category Probabilities for Features
- Combining the Probabilities
- Classifying Items
- Persisting the Trained Classifiers
- Using SQLite
- Filtering Blog Feeds
- Improving Feature Detection
- Using Akismet
- Alternative Methods
- Exercises
- 7. Modeling with Decision Trees
- Predicting Signups
- Introducing Decision Trees
- Training the Tree
- Choosing the Best Split
- Gini Impurity
- Entropy
- Recursive Tree Building
- Displaying the Tree
- Graphical Display
- Classifying New Observations
- Pruning the Tree
- Dealing with Missing Data
- Dealing with Numerical Outcomes
- Modeling Home Prices
- The Zillow API
- Modeling Hotness
- When to Use Decision Trees
- Exercises
- 8. Building Price Models
- Building a Sample Dataset
- k-Nearest Neighbors
- Number of Neighbors
- Defining Similarity
- Code for k-Nearest Neighbors
- Weighted Neighbors
- Inverse Function
- Subtraction Function
- Gaussian Function
- Weighted kNN
- Cross-Validation
- Heterogeneous Variables
- Adding to the Dataset
- Scaling Dimensions
- Optimizing the Scale
- Uneven Distributions
- Estimating the Probability Density
- Graphing the Probabilities
- Using Real Data -- the eBay API
- Getting a Developer Key
- Setting Up a Connection
- Performing a Search
- Getting Details for an Item
- Building a Price Predictor
- When to Use k-Nearest Neighbors
- Exercises
- 9. Advanced Classification: Kernel Methods and SVMs
- Matchmaker Dataset
- Difficulties with the Data
- Decision Tree Classifier
- Basic Linear Classification
- Categorical Features
- Yes/No Questions
- Lists of Interests
- Determining Distances Using Yahoo! Maps
- Getting a Yahoo! Application Key
- Using the Geocoding API
- Calculating the Distance
- Creating the New Dataset
- Scaling the Data
- Understanding Kernel Methods
- The Kernel Trick
- Support-Vector Machines
- Using LIBSVM
- Getting LIBSVM
- A Sample Session
- Applying SVM to the Matchmaker Dataset
- Matching on Facebook
- Getting a Developer Key
- Creating a Session
- Download Friend Data
- Building a Match Dataset
- Creating an SVM Model
- Exercises
- 10. Finding Independent Features
- A Corpus of News
- Selecting Sources
- Downloading Sources
- Converting to a Matrix
- Previous Approaches
- Bayesian Classification
- Clustering
- Non-Negative Matrix Factorization
- A Quick Introduction to Matrix Math
- What Does This Have to Do with the Articles Matrix?
- Using NumPy
- The Algorithm
- Displaying the Results
- Displaying by Article
- Using Stock Market Data
- What Is Trading Volume?
- Downloading Data from Yahoo! Finance
- Preparing a Matrix
- Running NMF
- Displaying the Results
- Exercises
- 11. Evolving Intelligence
- What Is Genetic Programming?
- Genetic Programming Versus Genetic Algorithms
- Programs As Trees
- Representing Trees in Python
- Building and Evaluating Trees
- Displaying the Program
- Creating the Initial Population
- Testing a Solution
- A Simple Mathematical Test
- Measuring Success
- Mutating Programs
- Crossover
- Building the Environment
- The Importance of Diversity
- A Simple Game
- A Round-Robin Tournament
- Playing Against Real People
- Further Possibilities
- More Numerical Functions
- Memory
- Different Datatypes
- Exercises
- 12. Algorithm Summary
- Bayesian Classifier
- Training
- Classifying
- Using Your Code
- Strengths and Weaknesses
- Decision Tree Classifier
- Training
- Using Your Decision Tree Classifier
- Strengths and Weaknesses
- Neural Networks
- Training a Neural Network
- Using Your Neural Network Code
- Strengths and Weaknesses
- Support-Vector Machines
- The Kernel Trick
- Using LIBSVM
- Strengths and Weaknesses
- k-Nearest Neighbors
- Scaling and Superfluous Variables
- Using Your kNN Code
- Strengths and Weaknesses
- Clustering
- Hierarchical Clustering
- K-Means Clustering
- Using Your Clustering Code
- Multidimensional Scaling
- Using Your Multidimensional Scaling Code
- Non-Negative Matrix Factorization
- Using Your NMF Code
- Optimization
- The Cost Function
- Simulated Annealing
- Genetic Algorithms
- Using Your Optimization Code
- A. Third-Party Libraries
- Universal Feed Parser
- Installation for All Platforms
- Python Imaging Library
- Installation on Windows
- Installation on Other Platforms
- Simple Usage Example
- Beautiful Soup
- Installation on All Platforms
- Simple Usage Example
- pysqlite
- Installation on Windows
- Installation on Other Platforms
- Simple Usage Example
- NumPy
- Installation on Windows
- Installation on Other Platforms
- Simple Usage Example
- matplotlib
- Installation
- Simple Usage Example
- pydelicious
- Installation for All Platforms
- Simple Usage Example
- B. Mathematical Formulas
- Euclidean Distance
- Pearson Correlation Coefficient
- Weighted Mean
- Tanimoto Coefficient
- Conditional Probability
- Gini Impurity
- Entropy
- Variance
- Gaussian Function
- Dot-Products
- Index
- About the Author
- Colophon
- Copyright