High Performance Spark. Best Practices for Scaling and Optimizing Apache Spark
ebook
Authors: Holden Karau, Rachel Warren
ISBN: 978-14-919-4315-1
Pages: 358, Format: ebook
Publication date: 2017-05-25
Bookstore: Helion

Book price: 143,65 zł (previously: 167,03 zł)
You save: 14% (-23,38 zł)

Add to cart: High Performance Spark. Best Practices for Scaling and Optimizing Apache Spark

Apache Spark is amazing when everything clicks. But if you haven’t seen the performance improvements you expected, or still don’t feel confident enough to use Spark in production, this practical book is for you. Authors Holden Karau and Rachel Warren demonstrate performance optimizations to help your Spark queries run faster and handle larger data sizes, while using fewer resources.

Ideal for software engineers, data engineers, developers, and system administrators working with large-scale data applications, this book describes techniques that can reduce data infrastructure costs and developer hours. Not only will you gain a more comprehensive understanding of Spark, you’ll also learn how to make it sing.

With this book, you’ll explore:

  • How Spark SQL’s new interfaces improve performance over Spark’s RDD data structure
  • The choice between data joins in Core Spark and Spark SQL (see the join sketch after this list)
  • Techniques for getting the most out of standard RDD transformations
  • How to work around performance issues in Spark’s key/value pair paradigm
  • Writing high-performance Spark code without Scala or the JVM
  • How to test for functionality and performance when applying suggested improvements
  • Using Spark MLlib and Spark ML machine learning libraries
  • Spark’s Streaming components and external community packages
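
As a quick illustration of the join topic above, here is a minimal sketch (not taken from the book) of a Spark SQL broadcast hash join in Scala. The dataset contents and column names (userId, action, country) are hypothetical, chosen only to keep the snippet self-contained and runnable in local mode.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions.broadcast

    object BroadcastJoinSketch {
      def main(args: Array[String]): Unit = {
        // Local session only for illustration; a real job would run on a cluster.
        val spark = SparkSession.builder()
          .appName("broadcast-join-sketch")
          .master("local[*]")
          .getOrCreate()
        import spark.implicits._

        // A large, fact-like DataFrame and a small, dimension-like DataFrame.
        val events = Seq((1, "view"), (2, "click"), (1, "click")).toDF("userId", "action")
        val users  = Seq((1, "PL"), (2, "US")).toDF("userId", "country")

        // Marking the small side with broadcast() hints Spark SQL to ship it to
        // every executor, so the large side can be joined without a full shuffle.
        val joined = events.join(broadcast(users), Seq("userId"))
        joined.show()

        spark.stop()
      }
    }

Whether broadcasting actually pays off depends on the small table fitting comfortably in executor memory; the book's join and tuning chapters cover how to make that call.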

Customers who bought "High Performance Spark. Best Practices for Scaling and Optimizing Apache Spark" also chose:

  • Windows Media Center. Domowe centrum rozrywki
  • Ruby on Rails. Ćwiczenia
  • DevOps w praktyce. Kurs video. Jenkins, Ansible, Terraform i Docker
  • Przywództwo w świecie VUCA. Jak być skutecznym liderem w niepewnym środowisku
  • Scrum. O zwinnym zarządzaniu projektami. Wydanie II rozszerzone


Table of Contents

High Performance Spark. Best Practices for Scaling and Optimizing Apache Spark eBook -- table of contents

  • Preface
    • First Edition Notes
    • Supporting Books and Materials
    • Conventions Used in This Book
    • Using Code Examples
    • O'Reilly Safari
    • How to Contact the Authors
    • How to Contact Us
    • Acknowledgments
  • 1. Introduction to High Performance Spark
    • What Is Spark and Why Performance Matters
    • What You Can Expect to Get from This Book
    • Spark Versions
    • Why Scala?
      • To Be a Spark Expert You Have to Learn a Little Scala Anyway
      • The Spark Scala API Is Easier to Use Than the Java API
      • Scala Is More Performant Than Python
      • Why Not Scala?
      • Learning Scala
    • Conclusion
  • 2. How Spark Works
    • How Spark Fits into the Big Data Ecosystem
      • Spark Components
    • Spark Model of Parallel Computing: RDDs
      • Lazy Evaluation
        • Performance and usability advantages of lazy evaluation
        • Lazy evaluation and fault tolerance
        • Lazy evaluation and debugging
      • In-Memory Persistence and Memory Management
      • Immutability and the RDD Interface
      • Types of RDDs
      • Functions on RDDs: Transformations Versus Actions
      • Wide Versus Narrow Dependencies
    • Spark Job Scheduling
      • Resource Allocation Across Applications
      • The Spark Application
        • Default Spark Scheduler
    • The Anatomy of a Spark Job
      • The DAG
      • Jobs
      • Stages
      • Tasks
    • Conclusion
  • 3. DataFrames, Datasets, and Spark SQL
    • Getting Started with the SparkSession (or HiveContext or SQLContext)
    • Spark SQL Dependencies
      • Managing Spark Dependencies
      • Avoiding Hive JARs
    • Basics of Schemas
    • DataFrame API
      • Transformations
        • Simple DataFrame transformations and SQL expressions
        • Specialized DataFrame transformations for missing and noisy data
        • Beyond row-by-row transformations
        • Aggregates and groupBy
        • Windowing
        • Sorting
      • Multi-DataFrame Transformations
        • Set-like operations
      • Plain Old SQL Queries and Interacting with Hive Data
    • Data Representation in DataFrames and Datasets
      • Tungsten
    • Data Loading and Saving Functions
      • DataFrameWriter and DataFrameReader
      • Formats
        • JSON
        • JDBC
        • Parquet
        • Hive tables
        • RDDs
        • Local collections
        • Additional formats
      • Save Modes
      • Partitions (Discovery and Writing)
    • Datasets
      • Interoperability with RDDs, DataFrames, and Local Collections
      • Compile-Time Strong Typing
      • Easier Functional (RDD like) Transformations
      • Relational Transformations
      • Multi-Dataset Relational Transformations
      • Grouped Operations on Datasets
    • Extending with User-Defined Functions and Aggregate Functions (UDFs, UDAFs)
    • Query Optimizer
      • Logical and Physical Plans
      • Code Generation
      • Large Query Plans and Iterative Algorithms
    • Debugging Spark SQL Queries
    • JDBC/ODBC Server
    • Conclusion
  • 4. Joins (SQL and Core)
    • Core Spark Joins
      • Choosing a Join Type
      • Choosing an Execution Plan
        • Speeding up joins by assigning a known partitioner
        • Speeding up joins using a broadcast hash join
        • Partial manual broadcast hash join
    • Spark SQL Joins
      • DataFrame Joins
        • Self joins
        • Broadcast hash joins
      • Dataset Joins
    • Conclusion
  • 5. Effective Transformations
    • Narrow Versus Wide Transformations
      • Implications for Performance
      • Implications for Fault Tolerance
      • The Special Case of coalesce
    • What Type of RDD Does Your Transformation Return?
    • Minimizing Object Creation
      • Reusing Existing Objects
      • Using Smaller Data Structures
    • Iterator-to-Iterator Transformations with mapPartitions
      • What Is an Iterator-to-Iterator Transformation?
      • Space and Time Advantages
      • An Example
    • Set Operations
    • Reducing Setup Overhead
      • Shared Variables
      • Broadcast Variables
      • Accumulators
    • Reusing RDDs
      • Cases for Reuse
        • Iterative computations
        • Multiple actions on the same RDD
        • If the cost to compute each partition is very high
      • Deciding if Recompute Is Inexpensive Enough
      • Types of Reuse: Cache, Persist, Checkpoint, Shuffle Files
        • Persist and cache
        • Checkpointing
        • Checkpointing example
      • Alluxio (née Tachyon)
      • LRU Caching
        • Shuffle files
      • Noisy Cluster Considerations
      • Interaction with Accumulators
    • Conclusion
  • 6. Working with Key/Value Data
    • The Goldilocks Example
      • Goldilocks Version 0: Iterative Solution
      • How to Use PairRDDFunctions and OrderedRDDFunctions
    • Actions on Key/Value Pairs
    • What's So Dangerous About the groupByKey Function
      • Goldilocks Version 1: groupByKey Solution
        • Why GroupByKey fails
    • Choosing an Aggregation Operation
      • Dictionary of Aggregation Operations with Performance Considerations
        • Preventing out-of-memory errors with aggregation operations
    • Multiple RDD Operations
      • Co-Grouping
    • Partitioners and Key/Value Data
      • Using the Spark Partitioner Object
      • Hash Partitioning
      • Range Partitioning
      • Custom Partitioning
      • Preserving Partitioning Information Across Transformations
        • Using narrow transformations that preserve partitioning
      • Leveraging Co-Located and Co-Partitioned RDDs
      • Dictionary of Mapping and Partitioning Functions PairRDDFunctions
    • Dictionary of OrderedRDDOperations
      • Sorting by Two Keys with SortByKey
    • Secondary Sort and repartitionAndSortWithinPartitions
      • Leveraging repartitionAndSortWithinPartitions for a Group by Key and Sort Values Function
      • How Not to Sort by Two Orderings
      • Goldilocks Version 2: Secondary Sort
        • Defining the custom partitioner
        • Filtering on each partition
        • Combine the elements associated with one key
        • Performance
      • A Different Approach to Goldilocks
        • Map to (cell value, column index) pairs
        • Sort and count values on each partition
        • Determine location of rank statistics on each partition
        • Filter for rank statistics
      • Goldilocks Version 3: Sort on Cell Values
    • Straggler Detection and Unbalanced Data
      • Back to Goldilocks (Again)
      • Goldilocks Version 4: Reduce to Distinct on Each Partition
        • Aggregate to ((cell value, column index), count) on each partition
        • Sort and find rank statistics
        • Goldilocks postmortem
    • Conclusion
  • 7. Going Beyond Scala
    • Beyond Scala within the JVM
    • Beyond Scala, and Beyond the JVM
      • How PySpark Works
        • PySpark RDDs
        • PySpark DataFrames and Datasets
        • Accessing the backing Java objects and mixing Scala code
        • PySpark dependency management
        • Installing PySpark
      • How SparkR Works
      • Spark.jl (Julia Spark)
      • How Eclair JS Works
      • Spark on the Common Language Runtime (CLR) -- C# and Friends
    • Calling Other Languages from Spark
      • Using Pipe and Friends
      • JNI
      • Java Native Access (JNA)
      • Underneath Everything Is FORTRAN
      • Getting to the GPU
    • The Future
    • Conclusion
  • 8. Testing and Validation
    • Unit Testing
      • General Spark Unit Testing
        • Factoring your code for testability
        • Regular Spark jobs (testing with RDDs)
        • Streaming
      • Mocking RDDs
        • Testing DataFrames
    • Getting Test Data
      • Generating Large Datasets
      • Sampling
    • Property Checking with ScalaCheck
      • Computing RDD Difference
    • Integration Testing
      • Choosing Your Integration Testing Environment
        • Local mode
        • Docker-based
        • Yarn MiniCluster
    • Verifying Performance
      • Spark Counters for Verifying Performance
      • Projects for Verifying Performance
    • Job Validation
    • Conclusion
  • 9. Spark MLlib and ML
    • Choosing Between Spark MLlib and Spark ML
    • Working with MLlib
      • Getting Started with MLlib (Organization and Imports)
      • MLlib Feature Encoding and Data Preparation
        • Working with Spark vectors
        • Preparing textual data
        • Preparing data for supervised learning
      • Feature Scaling and Selection
      • MLlib Model Training
      • Predicting
      • Serving and Persistence
        • Saveable (internal format)
        • PMML
        • Custom
      • Model Evaluation
    • Working with Spark ML
      • Spark ML Organization and Imports
      • Pipeline Stages
      • Explain Params
      • Data Encoding
      • Data Cleaning
      • Spark ML Models
      • Putting It All Together in a Pipeline
      • Training a Pipeline
      • Accessing Individual Stages
      • Data Persistence and Spark ML
        • Automated model selection (parameter search)
      • Extending Spark ML Pipelines with Your Own Algorithms
        • Custom transformers
        • Custom estimators
      • Model and Pipeline Persistence and Serving with Spark ML
    • General Serving Considerations
    • Conclusion
  • 10. Spark Components and Packages
    • Stream Processing with Spark
      • Sources and Sinks
        • Receivers
        • Repartitioning
      • Batch Intervals
      • Data Checkpoint Intervals
      • Considerations for DStreams
        • Output operations
      • Considerations for Structured Streaming
        • Data sources
        • Output operations
        • Custom sinks
        • Machine learning with Structured Streaming
        • Stream status and debugging
      • High Availability Mode (or Handling Driver Failure or Checkpointing)
    • GraphX
    • Using Community Packages and Libraries
      • Creating a Spark Package
    • Conclusion
  • A. Tuning, Debugging, and Other Things Developers Like to Pretend Don't Exist
    • Spark Tuning and Cluster Sizing
      • How to Adjust Spark Settings
      • How to Determine the Relevant Information About Your Cluster
    • Basic Spark Core Settings: How Many Resources to Allocate to the Spark Application?
      • Calculating Executor and Driver Memory Overhead
      • How Large to Make the Spark Driver
      • A Few Large Executors or Many Small Executors?
        • Many small executors
        • Many large executors
      • Allocating Cluster Resources and Dynamic Allocation
        • Restrictions on dynamic allocation
      • Dividing the Space Within One Executor
      • Number and Size of Partitions
    • Serialization Options
      • Kryo
        • Spark settings conclusion
    • Some Additional Debugging Techniques
      • Out of Disk Space Errors
      • Logging
      • Configuring logging
      • Accessing logs
      • Attaching debuggers
      • Debugging in notebooks
      • Python debugging
      • Debugging conclusion
  • Index
