Data Analytics with Hadoop. An Introduction for Data Scientists - Helion
ISBN: 978-14-919-1375-8
Pages: 288, Format: ebook
Publication date: 2016-06-01
Bookstore: Helion
Book price: 80.74 zł (previously: 94.99 zł)
You save: 15% (-14.25 zł)
Ready to use statistical and machine-learning techniques across large datasets? This practical guide shows you why the Hadoop ecosystem is perfect for the job. Instead of the deployment, operations, or software development usually associated with distributed computing, you’ll focus on the particular analyses you can build, the data warehousing techniques that Hadoop provides, and the higher-order data workflows this framework can produce.
Data scientists and analysts will learn how to perform a wide range of techniques, from writing MapReduce and Spark applications with Python to using advanced modeling and data management with Spark MLlib, Hive, and HBase. You’ll also learn about the analytical processes and data systems available to build and empower data products that can handle—and actually require—huge amounts of data.
- Understand core concepts behind Hadoop and cluster computing
- Use design patterns and parallel analytical algorithms to create distributed data analysis jobs
- Learn about data management, mining, and warehousing in a distributed context using Apache Hive and HBase
- Use Sqoop to ingest data from relational databases and Apache Flume to ingest streaming data
- Program complex Hadoop and Spark applications with Apache Pig and Spark DataFrames
- Perform machine learning techniques such as classification, clustering, and collaborative filtering with Spark’s MLlib
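To give a flavor of the "MapReduce applications with Python" topic listed above, here is a minimal sketch (not code from the book) that simulates the map, shuffle, and reduce phases of a word count in plain Python. This is the same pattern the book's Hadoop Streaming chapter scales out to a cluster, where the mapper and reducer run as separate processes reading from stdin and writing to stdout:

```python
# Word count via the MapReduce pattern, simulated locally in plain Python.
# On a real cluster, Hadoop Streaming would run mapper and reducer as
# separate processes and perform the shuffle/sort between them.
from itertools import groupby
from operator import itemgetter

def mapper(line):
    """Map phase: emit a (word, 1) pair for each word in a line."""
    for word in line.lower().split():
        yield word, 1

def reducer(word, counts):
    """Reduce phase: sum all counts emitted for a single word."""
    return word, sum(counts)

def word_count(lines):
    # Map: apply the mapper to every input line
    pairs = [kv for line in lines for kv in mapper(line)]
    # Shuffle/sort: group pairs by key, as Hadoop does between phases
    pairs.sort(key=itemgetter(0))
    # Reduce: collapse each key group to a single (word, total) pair
    return dict(
        reducer(word, (count for _, count in group))
        for word, group in groupby(pairs, key=itemgetter(0))
    )

print(word_count(["hadoop spark hadoop", "spark spark"]))
# → {'hadoop': 2, 'spark': 3}
```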
Table of Contents
- Preface
- What to Expect from This Book
- Who This Book Is For
- How to Read This Book
- Overview of Chapters
- Programming and Code Examples
- GitHub Repository
- Executing Distributed Jobs
- Permissions and Citation
- Feedback and How to Contact Us
- Safari Books Online
- How to Contact Us
- Acknowledgments
- I. Introduction to Distributed Computing
- 1. The Age of the Data Product
- What Is a Data Product?
- Building Data Products at Scale with Hadoop
- Leveraging Large Datasets
- Hadoop for Data Products
- The Data Science Pipeline and the Hadoop Ecosystem
- Big Data Workflows
- Conclusion
- 2. An Operating System for Big Data
- Basic Concepts
- Hadoop Architecture
- A Hadoop Cluster
- HDFS
- Blocks
- Data management
- YARN
- Working with a Distributed File System
- Basic File System Operations
- File Permissions in HDFS
- Other HDFS Interfaces
- Working with Distributed Computation
- MapReduce: A Functional Programming Model
- MapReduce: Implemented on a Cluster
- MapReduce examples
- Beyond a Map and Reduce: Job Chaining
- Submitting a MapReduce Job to YARN
- Conclusion
- 3. A Framework for Python and Hadoop Streaming
- Hadoop Streaming
- Computing on CSV Data with Streaming
- Executing Streaming Jobs
- A Framework for MapReduce with Python
- Counting Bigrams
- Other Frameworks
- Advanced MapReduce
- Combiners
- Partitioners
- Job Chaining
- Conclusion
- 4. In-Memory Computing with Spark
- Spark Basics
- The Spark Stack
- Resilient Distributed Datasets
- Programming with RDDs
- Interactive Spark Using PySpark
- Writing Spark Applications
- Visualizing Airline Delays with Spark
- Conclusion
- 5. Distributed Analysis and Patterns
- Computing with Keys
- Compound Keys
- Compound data serialization
- Keyspace Patterns
- Transforming the keyspace
- The explode mapper
- The filter mapper
- The identity pattern
- Pairs versus Stripes
- Design Patterns
- Summarization
- Aggregation
- Statistical summarization
- Indexing
- Inverted index
- TF-IDF
- Filtering
- Top n records
- Simple random sample
- Bloom filtering
- Toward Last-Mile Analytics
- Fitting a Model
- Validating Models
- Conclusion
- II. Workflows and Tools for Big Data Science
- 6. Data Mining and Warehousing
- Structured Data Queries with Hive
- The Hive Command-Line Interface (CLI)
- Hive Query Language (HQL)
- Creating a database
- Creating tables
- Loading data
- Data Analysis with Hive
- Grouping
- Aggregations and joins
- HBase
- NoSQL and Column-Oriented Databases
- Real-Time Analytics with HBase
- Generating a schema
- Namespaces, tables, and column families
- Row keys
- Inserting data with put
- Get row or cell values
- Scan rows
- Filters
- Further reading on HBase
- Conclusion
- 7. Data Ingestion
- Importing Relational Data with Sqoop
- Importing from MySQL to HDFS
- Importing from MySQL to Hive
- Importing from MySQL to HBase
- Ingesting Streaming Data with Flume
- Flume Data Flows
- Ingesting Product Impression Data with Flume
- Conclusion
- 8. Analytics with Higher-Level APIs
- Pig
- Pig Latin
- Relations and tuples
- Filtering
- Projection
- Grouping and joining
- Storing and outputting data
- Data Types
- Relational Operators
- User-Defined Functions
- Wrapping Up
- Spark’s Higher-Level APIs
- Spark SQL
- DataFrames
- Data wrangling DataFrames
- Conclusion
- 9. Machine Learning
- Scalable Machine Learning with Spark
- Collaborative Filtering
- User-based recommender: An example
- Classification
- Logistic regression classification: An example
- Clustering
- k-means clustering: An example
- Conclusion
- 10. Summary: Doing Distributed Data Science
- Data Product Lifecycle
- Data Lakes
- Data Ingestion
- Computational Data Stores
- Relational approaches: Hive
- NoSQL approaches: HBase
- Machine Learning Lifecycle
- Conclusion
- A. Creating a Hadoop Pseudo-Distributed Development Environment
- Quick Start
- Setting Up Linux
- Creating a Hadoop User
- Configuring SSH
- Installing Java
- Disabling IPv6
- Installing Hadoop
- Unpacking
- Environment
- Hadoop Configuration
- Formatting the Namenode
- Starting Hadoop
- Restarting Hadoop
- B. Installing Hadoop Ecosystem Products
- Packaged Hadoop Distributions
- Self-Installation of Apache Hadoop Ecosystem Products
- Basic Installation and Configuration Steps
- Sqoop-Specific Configurations
- Hive-Specific Configuration
- Hive warehouse directory
- Hive metastore database
- Verifying Hive is running
- HBase-Specific Configurations
- Starting HBase
- Installing Spark
- Minimizing the verbosity of Spark
- Glossary
- Index