
Cassandra High Performance Cookbook. You can mine deep into the full capabilities of Apache Cassandra using the 150+ recipes in this indispensable Cookbook. From configuring and tuning to using third party applications, this is the ultimate guide - Helion

ebook
Authors: Edward Capriolo, Brian Fitzpatrick
Original title: Cassandra High Performance Cookbook. You can mine deep into the full capabilities of Apache Cassandra using the 150+ recipes in this indispensable Cookbook. From configuring and tuning to using third party applications, this is the ultimate guide.
ISBN: 9781849515139
Pages: 324, Format: ebook
Publication date: 2011-07-15
Bookstore: Helion

Book price: 119,00 zł

Add to cart: Cassandra High Performance Cookbook

Apache Cassandra is a fault-tolerant, distributed data store whose linear scalability makes it a storage platform for large, high-volume websites. This book provides detailed recipes that describe how to use the features of Cassandra and improve its performance. Recipes cover topics ranging from setting up Cassandra for the first time to complex multiple-data-center installations, and the recipe format presents the information in a concise, actionable form. The book describes in detail how Cassandra's features can be tuned and what the possible effects of tuning are. Recipes show how to access data stored in Cassandra and how third-party tools can help. The book also describes how to monitor Cassandra and do capacity planning to ensure it keeps performing at a high level. Towards the end, it takes you through the use of libraries and third-party applications with Cassandra, and Cassandra integration with Hadoop.
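As a taste of the recipe style, the "Calculating ideal Initial Tokens for use with Random Partitioner" recipe from Chapter 1 boils down to a one-line formula: place the N nodes at even intervals around the 2^127 token ring. A minimal sketch in Python (the function name and the 4-node cluster are illustrative, not taken from the book):

```python
# Ideal initial tokens for RandomPartitioner: place node_count nodes
# at even intervals around the 0..2**127 token ring so that each node
# owns an equal slice of the MD5-based token space.
def initial_tokens(node_count: int) -> list[int]:
    ring_size = 2 ** 127  # RandomPartitioner token space
    return [i * ring_size // node_count for i in range(node_count)]

if __name__ == "__main__":
    # Example: a hypothetical 4-node cluster.
    for node, token in enumerate(initial_tokens(4)):
        print(f"node {node}: initial_token = {token}")
```

Each computed value would be assigned to one node's initial_token setting before that node first joins the ring.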


Customers who bought "Cassandra High Performance Cookbook" also chose:

  • Windows Media Center. Domowe centrum rozrywki
  • Ruby on Rails. Ćwiczenia
  • DevOps w praktyce. Kurs video. Jenkins, Ansible, Terraform i Docker
  • Przywództwo w świecie VUCA. Jak być skutecznym liderem w niepewnym środowisku
  • Scrum. O zwinnym zarządzaniu projektami. Wydanie II rozszerzone


Table of contents

Cassandra High Performance Cookbook eBook -- table of contents

  • Cassandra High Performance Cookbook
    • Table of Contents
    • Cassandra High Performance Cookbook
    • Credits
    • About the Author
    • About the Reviewers
    • www.PacktPub.com
      • Support files, eBooks, discount offers and more
        • Why Subscribe?
        • Free Access for Packt account holders
    • Preface
      • What this book covers
      • What you need for this book
      • Who this book is for
      • Conventions
      • Reader feedback
      • Customer support
        • Downloading the example code for this book
        • Errata
        • Piracy
        • Questions
    • 1. Getting Started
      • Introduction
      • A simple single node Cassandra installation
        • Getting ready
        • How to do it...
        • How it works...
        • There's more...
        • See also...
      • Reading and writing test data using the command-line interface
        • How to do it...
        • How it works...
        • See also...
      • Running multiple instances on a single machine
        • How to do it...
        • How it works...
        • See also...
      • Scripting a multiple instance installation
        • How to do it...
        • How it works...
      • Setting up a build and test environment for tasks in this book
        • Getting ready
        • How to do it...
        • How it works...
        • There's more...
      • Running in the foreground with full debugging
        • How to do it...
        • How it works...
        • There's more...
      • Calculating ideal Initial Tokens for use with Random Partitioner
        • Getting ready
        • How to do it...
        • How it works...
        • There's more...
        • See also...
      • Choosing Initial Tokens for use with Partitioners that preserve ordering
        • How to do it...
        • How it works...
        • There's more...
      • Insight into Cassandra with JConsole
        • Getting ready
        • How to do it...
        • How it works...
        • See also...
      • Connecting with JConsole over a SOCKS proxy
        • Getting ready
        • How to do it...
        • How it works...
      • Connecting to Cassandra with Java and Thrift
        • Getting ready
        • How to do it...
        • How it works...
        • See also...
    • 2. The Command-line Interface
      • Connecting to Cassandra with the CLI
        • How to do it...
        • How it works...
      • Creating a keyspace from the CLI
        • How to do it...
        • How it works...
        • There's more...
        • See also...
      • Creating a column family with the CLI
        • Getting ready
        • How to do it...
        • See also...
      • Describing a keyspace
        • How to do it...
        • How it works...
      • Writing data with the CLI
        • Getting ready
        • How to do it...
        • How it works...
        • There's more...
      • Reading data with the CLI
        • Getting ready
        • How to do it...
        • How it works...
        • See also...
      • Deleting rows and columns from the CLI
        • How to do it...
        • How it works...
        • See also...
      • Listing and paginating all rows in a column family
        • Getting ready
        • How to do it...
        • How it works...
      • Dropping a keyspace or a column family
        • How to do it...
        • How it works...
        • See also...
      • CLI operations with super columns
        • How to do it...
        • How it works...
        • There's more...
      • Using the assume keyword to decode column names or column values
        • How to do it...
        • How it works...
        • There's more...
      • Supplying time to live information when inserting columns
        • How to do it...
        • See also...
      • Using built-in CLI functions
        • How to do it...
        • How it works...
      • Using column metadata and comparators for type enforcement
        • How to do it...
        • How it works...
        • See also...
      • Changing the consistency level of the CLI
        • How to do it...
        • How it works...
        • See also...
      • Getting help from the CLI
        • How to do it...
        • How it works...
      • Loading CLI statements from a file
        • How to do it...
        • How it works...
        • There's more...
    • 3. Application Programmer Interface
      • Introduction
      • Connecting to a Cassandra server
        • How to do it...
        • How it works...
        • There's more
      • Creating a keyspace and column family from the client
        • How to do it...
        • How it works...
        • See also...
      • Using MultiGet to limit round trips and overhead
        • How to do it...
        • How it works...
      • Writing unit tests with an embedded Cassandra server
        • How to do it...
        • How it works...
        • See also...
      • Cleaning up data directories before unit tests
        • Getting ready
        • How to do it...
        • How it works...
      • Generating Thrift bindings for other languages (C++, PHP, and others)
        • Getting ready
        • How to do it...
        • How it works...
      • Using the Cassandra Storage Proxy "Fat Client"
        • How to do it...
        • How it works...
        • There's more...
      • Using range scans to find and remove old data
        • How to do it...
        • How it works...
        • There's more...
        • See also...
      • Iterating all the columns of a large key
        • How to do it...
        • How it works...
      • Slicing columns in reverse
        • Getting ready
        • How to do it...
        • How it works...
      • Batch mutations to improve insert performance and code robustness
        • How to do it...
        • How it works...
        • See also...
      • Using TTL to create columns with self-deletion times
        • How to do it...
        • How it works...
        • See also...
      • Working with secondary indexes
        • Getting ready
        • How to do it...
        • How it works...
        • See also...
    • 4. Performance Tuning
      • Introduction
      • Choosing an operating system and distribution
        • How to do it...
        • How it works...
        • There's more...
      • Choosing a Java Virtual Machine
        • How to do it...
        • There's more...
        • See also...
      • Using a dedicated Commit Log disk
        • Getting ready
        • How to do it...
        • How it works...
        • See also...
      • Choosing a high performing RAID level
        • Getting ready
        • How to do it...
        • How it works...
        • There's more...
          • Software vs. hardware RAID
          • Disk performance testing
        • See also...
      • File system optimization for hard disk performance
        • Getting ready
        • How to do it...
        • How it works...
      • Boosting read performance with the Key Cache
        • Getting ready
        • How to do it...
        • How it works...
        • There's more...
        • See also...
      • Boosting read performance with the Row Cache
        • How to do it...
        • How it works...
        • There's more...
      • Disabling Swap Memory for predictable performance
        • How to do it...
        • How it works...
        • See also...
      • Stopping Cassandra from using swap without disabling it system-wide
        • Getting ready
        • How to do it...
      • Enabling Memory Mapped Disk modes
        • Getting ready
        • How to do it...
        • How it works...
      • Tuning Memtables for write-heavy workloads
        • How to do it...
        • How it works...
        • There's more...
        • See also...
      • Saving memory on 64 bit architectures with compressed pointers
        • Getting ready
        • How to do it...
        • How it works...
      • Tuning concurrent readers and writers for throughput
        • How to do it...
        • How it works...
      • Setting compaction thresholds
        • How to do it...
        • How it works...
        • There's more...
        • See also...
      • Garbage collection tuning to avoid JVM pauses
        • How to do it...
        • How it works...
        • There's more...
          • Large memory systems
        • See also...
      • Raising the open file limit to deal with many clients
        • How to do it...
        • How it works...
        • There's more...
      • Increasing performance by scaling up
        • How to do it...
        • How it works...
      • Enabling Network Time Protocol on servers and clients
        • Getting ready
        • How to do it...
        • How it works...
    • 5. Consistency, Availability, and Partition Tolerance with Cassandra
      • Introduction
      • Working with the formula for strong consistency
        • Getting ready
        • How to do it...
        • How it works...
        • See also...
      • Supplying the timestamp value with write requests
        • How to do it...
        • How it works...
        • There's more...
      • Disabling the hinted handoff mechanism
        • How to do it...
        • How it works...
        • There's more...
      • Adjusting read repair chance for less intensive data reads
        • Getting ready
        • How to do it...
        • How it works...
        • There's more...
        • See also...
      • Confirming schema agreement across the cluster
        • How to do it...
        • How it works...
        • There's more...
      • Adjusting replication factor to work with quorum
        • How to do it...
        • How it works...
        • See also...
      • Using write consistency ONE, read consistency ONE for low latency operations
        • How to do it...
        • How it works...
        • There's more...
      • Using write consistency QUORUM, read consistency QUORUM for strong consistency
        • Getting ready
        • How to do it...
        • How it works...
      • Mixing levels write consistency QUORUM, read consistency ONE
        • Getting ready
        • How to do it...
        • How it works...
      • Choosing consistency over availability consistency ALL
        • How to do it...
        • How it works...
      • Choosing availability over consistency with write consistency ANY
        • How to do it...
        • How it works...
      • Demonstrating how consistency is not a lock or a transaction
        • How to do it...
        • How it works...
        • See also...
    • 6. Schema Design
      • Introduction
      • Saving disk space by using small column names
        • How to do it...
        • How it works...
      • Serializing data into large columns for smaller index sizes
        • How to do it...
        • How it works...
        • There's more...
      • Storing time series data effectively
        • How to do it...
        • How it works...
      • Using Super Columns for nested maps
        • How to do it...
        • How it works...
        • There's more...
      • Using a lower Replication Factor for disk space saving and performance enhancements
        • How to do it...
        • How it works...
        • There's more...
        • See also...
      • Hybrid Random Partitioner using Order Preserving Partitioner
        • Getting ready
        • How to do it...
        • How it works...
        • There's more...
          • Scripting a multiple instance installation with OOP
          • Using different hash algorithms
      • Storing large objects
        • How to do it...
        • How it works...
        • There's more...
      • Using Cassandra for distributed caching
        • How to do it...
        • How it works...
      • Storing large or infrequently accessed data in a separate column family
        • How to do it...
        • How it works...
      • Storing and searching edge graph data in Cassandra
        • Getting ready
        • How to do it...
        • How it works...
      • Developing secondary data orderings or indexes
        • Getting ready
        • How to do it...
        • How it works...
        • See also...
    • 7. Administration
      • Defining seed nodes for Gossip Communication
        • Getting ready
        • How to do it...
        • There's more
          • IP vs Hostname
          • Keep the seed list synchronized
          • Seed nodes do not auto bootstrap
          • Choosing the correct number of seed nodes
      • Nodetool Move: Moving a node to a specific ring location
        • Getting ready
        • How to do it...
        • How it works...
      • Nodetool Remove: Removing a downed node
        • How to do it...
        • How it works...
        • See also...
      • Nodetool Decommission: Removing a live node
        • How to do it...
        • How it works...
      • Joining nodes quickly with auto_bootstrap set to false
      • Generating SSH keys for password-less interaction
        • How to do it...
        • How it works...
        • There's more...
          • Normal write traffic
          • Read Repair
          • Anti-Entropy Repair
        • How to do it...
        • How it works...
        • There's more...
      • Copying the data directory to new hardware
        • Getting ready
        • How to do it...
        • How it works...
        • There's more
      • A node join using external data copy methods
        • Getting ready
        • How to do it...
        • How it works...
      • Nodetool Repair: When to use anti-entropy repair
        • How to do it...
        • How it works...
        • There's more...
          • Raising the Replication Factor
          • Joining nodes without auto-bootstrap
          • Loss of corrupted files
      • Nodetool Drain: Stable files on upgrade
        • How to do it...
        • How it works...
      • Lowering gc_grace for faster tombstone cleanup
        • How to do it...
        • How it works...
        • There's more...
          • Data resurrection
      • Scheduling Major Compaction
        • How to do it...
        • How it works...
        • There's more...
      • Using nodetool snapshot for backups
        • Getting ready
        • How to do it...
        • How it works...
        • There's more...
        • See also...
      • Clearing snapshots with nodetool clearsnapshot
        • Getting ready
        • How to do it...
        • How it works...
      • Restoring from a snapshot
        • How to do it...
        • How it works...
        • There's more...
      • Exporting data to JSON with sstable2json
        • How to do it...
        • How it works...
        • There's more...
          • Extracting specific keys
          • Excluding specific keys
          • Saving the exported JSON to a file
          • Using the xxd command to decode hex values
      • Nodetool cleanup: Removing excess data
        • How to do it...
        • How it works...
        • There's more...
          • Topology changes
          • Hinted handoff and write consistency ANY
        • See also...
      • Nodetool Compact: Defragment data and remove deleted data from disk
        • How to do it...
        • How it works...
        • See also...
    • 8. Multiple Datacenter Deployments
      • Changing debugging to determine where read operations are being routed
        • How to do it...
        • How it works...
        • See also...
      • Using IPTables to simulate complex network scenarios in a local environment
        • Getting ready
        • How to do it...
        • How it works...
      • Choosing IP addresses to work with RackInferringSnitch
        • Getting ready
        • How to do it...
        • How it works...
        • There's more...
        • See also...
      • Scripting a multiple datacenter installation
        • Getting ready
        • How to do it...
        • How it works...
      • Determining natural endpoints, datacenter, and rack for a given key
        • How to do it...
        • How it works...
        • See also...
      • Manually specifying Rack and Datacenter configuration with a property file snitch
        • Getting ready
        • How to do it...
        • How it works...
        • There's more...
      • Troubleshooting dynamic snitch using JConsole
        • Getting ready
        • How to do it...
        • How it works...
      • Quorum operations in multi-datacenter environments
        • Getting ready
        • How it works...
      • Using traceroute to troubleshoot latency between network devices
        • How to do it...
        • How it works...
      • Ensuring bandwidth between switches in multiple rack environments
        • How to do it...
        • There's more...
      • Increasing rpc_timeout for dealing with latency across datacenters
        • How to do it...
        • How it works...
      • Changing consistency level from the CLI to test various consistency levels with multiple datacenter deployments
        • Getting ready
        • How to do it...
        • How it works...
      • Using the consistency levels TWO and THREE
        • Getting ready
        • How to do it...
        • How it works...
      • Calculating Ideal Initial Tokens for use with Network Topology Strategy and Random Partitioner
        • Getting ready
        • How to do it...
        • How it works...
        • There's more...
          • More than two datacenters
          • Datacenters with differing numbers of nodes
          • Endpoint Snitch
        • See also...
    • 9. Coding and Internals
      • Introduction
      • Installing common development tools
        • How to do it...
        • How it works...
      • Building Cassandra from source
        • How to do it...
        • How it works...
        • See also...
      • Creating your own type by sub classing abstract type
        • How to do it...
        • How it works...
        • See also...
      • Using the validation to check data on insertion
        • Getting ready
        • How to do it...
        • How it works...
      • Communicating with the Cassandra developers and users through IRC and e-mail
        • How to do it...
        • How it works...
      • Generating a diff using subversion's diff feature
        • How to do it...
        • How it works...
        • There's more...
        • See also...
      • Applying a diff using the patch command
        • Before you begin...
        • How to do it...
        • How it works...
      • Using strings and od to quickly search through data files
        • How to do it...
        • How it works...
      • Customizing the sstable2json export utility
        • How to do it...
        • How it works...
        • There's more...
      • Configure index interval ratio for lower memory usage
        • How to do it...
        • How it works...
      • Increasing phi_convict_threshold for less reliable networks
        • How to do it...
        • How it works...
        • There's more...
      • Using the Cassandra maven plugin
        • Getting ready
        • How to do it...
        • How it works...
        • There's more...
    • 10. Libraries and Applications
      • Introduction
      • Building the contrib stress tool for benchmarking
        • Getting ready
        • How to do it...
        • How it works...
        • See also...
      • Inserting and reading data with the stress tool
        • Before you begin...
        • How to do it...
        • How it works...
        • There's more...
        • See also...
      • Running the Yahoo! Cloud Serving Benchmark
        • How to do it...
        • How it works...
        • There's more...
      • Hector, a high-level client for Cassandra
        • How to do it...
        • How it works...
        • There's more...
      • Doing batch mutations with Hector
        • How to do it...
        • How it works...
      • Cassandra with Java Persistence Architecture (JPA)
        • Before you begin...
        • How to do it...
        • How it works...
        • There's more...
      • Setting up Solandra for full text indexing with a Cassandra backend
        • How to do it...
        • How it works...
      • Setting up Zookeeper to support Cages for transactional locking
        • How to do it...
        • How it works...
        • See also...
      • Using Cages to implement an atomic read and set
        • Getting ready
        • How to do it...
        • How it works...
        • There's more...
      • Using Groovandra as a CLI alternative
        • How to do it...
        • How it works...
      • Searchable log storage with Logsandra
        • Getting ready
        • How to do it...
        • How it works...
        • There's more...
    • 11. Hadoop and Cassandra
      • Introduction
      • A pseudo-distributed Hadoop setup
        • How to do it...
        • How it works...
        • There's more...
      • A Map-only program that reads from Cassandra using the ColumnFamilyInputFormat
        • How to do it...
        • How it works...
        • See also...
      • A Map-only program that writes to Cassandra using the CassandraOutputFormat
        • Getting ready
        • How to do it...
        • How it works...
      • Using MapReduce to do grouping and counting with Cassandra input and output
        • Getting ready
        • How to do it...
        • How it works...
      • Setting up Hive with Cassandra Storage Handler support
        • Getting ready
        • How to do it...
        • How it works...
        • See also...
      • Defining a Hive table over a Cassandra Column Family
        • Getting ready
        • How to do it...
        • How it works...
        • See also...
      • Joining two Column Families with Hive
        • Getting ready
        • How to do it...
        • How it works...
      • Grouping and counting column values with Hive
        • How to do it...
        • How it works...
        • See also...
      • Co-locating Hadoop Task Trackers on Cassandra nodes
        • How to do it...
        • How it works...
        • See also...
      • Setting up a "Shadow" data center for running only MapReduce jobs
        • Getting ready
        • How to do it...
        • How it works...
        • There's more...
      • Setting up DataStax Brisk, the combined stack of Cassandra, Hadoop, and Hive
        • How to do it...
        • How it works...
    • 12. Collecting and Analyzing Performance Statistics
      • Finding bottlenecks with nodetool tpstats
        • How to do it...
        • How it works...
        • There's more...
      • Using nodetool cfstats to retrieve column family statistics
        • How to do it...
        • How it works...
        • See also...
      • Monitoring CPU utilization
        • How to do it...
        • How it works...
        • See also...
      • Adding read/write graphs to find active column families
        • How to do it...
        • How it works...
        • There's more...
      • Using Memtable graphs to profile when and why they flush
        • How it works...
        • There's more...
        • See also...
      • Graphing SSTable count
        • How to do it...
        • There's more...
      • Monitoring disk utilization and having a performance baseline
        • How to do it...
        • How it works...
        • There's more...
        • See also...
      • Monitoring compaction by graphing its activity
        • How it works...
        • There's more...
        • See also...
      • Using nodetool compaction stats to check the progress of compaction
        • How to do it...
        • How it works...
      • Graphing column family statistics to track average/max row sizes
        • How to do it...
      • Using latency graphs to profile time to seek keys
        • How to do it...
        • How it works...
      • Tracking the physical disk size of each column family over time
        • How to do it...
        • How it works...
      • Using nodetool cfhistograms to see the distribution of query latencies
        • How to do it...
        • How it works...
        • See also...
      • Tracking open networking connections
        • How to do it...
        • How it works...
        • There's more...
    • 13. Monitoring Cassandra Servers
      • Introduction
      • Forwarding Log4j logs to a central server
        • Getting ready
        • How to do it...
        • How it works...
        • There's more...
      • Using top to understand overall performance
        • How to do it...
        • How it works...
        • There's more...
      • Using iostat to monitor current disk performance
        • Getting ready
        • How to do it...
        • How it works...
        • See also...
      • Using sar to review performance over time
        • Getting ready
        • How to do it...
        • How it works...
      • Using JMXTerm to access Cassandra JMX
        • Getting ready
        • How to do it...
        • How it works...
        • See also...
      • Monitoring the garbage collection events
        • How to do it...
        • How it works...
        • There's more...
      • Using tpstats to find bottlenecks
        • How to do it...
        • How it works...
        • See also...
      • Creating a Nagios Check Script for Cassandra
        • How to do it...
      • Keep an eye out for large rows with compaction limits
        • How to do it...
        • How it works...
      • Reviewing network traffic with IPTraf
        • Getting ready
        • How to do it...
        • How it works...
      • Keep on the lookout for dropped messages
        • How to do it...
        • How it works...
      • Inspecting column families for dangerous conditions
        • How to do it...
        • How it works...
    • Index





(c) 2005-2024 CATALIST interactive agency; trademarks belong to the publisher Helion S.A.