High Performance Linux Clusters with OSCAR, Rocks, OpenMosix, and MPI - Helion
ISBN: 978-0-596-55245-9
Pages: 370, Format: ebook
Publication date: 2004-11-16
Bookstore: Helion
Book price: 118,15 zł (previously: 137,38 zł)
You save: 14% (-19,23 zł)
To the outside world, a "supercomputer" appears to be a single system. In fact, it's a cluster of computers that share a local area network and work together on a single problem as a team. Many businesses used to consider supercomputing beyond the reach of their budgets, but new Linux applications have made high-performance clusters more affordable than ever. These days, the promise of low-cost supercomputing is one of the main reasons many businesses choose Linux over other operating systems.

This new guide covers everything a newcomer to clustering will need to plan, build, and deploy a high-performance Linux cluster. The book focuses on clustering for high-performance computation, although much of its information also applies to clustering for high availability (failover and disaster recovery). It discusses the key tools you'll need to get started, including good practices to use while exploring the tools and growing a system. You'll learn about planning, hardware choices, bulk installation of Linux on multiple systems, and other basic considerations. Then you'll learn about software options that can save you hours--or even weeks--of deployment time.

Since a wide variety of options exist in each area of clustering software, the author discusses the pros and cons of the major free software projects and chooses those that are most likely to be helpful to new cluster administrators and programmers. A few of the projects introduced in the book include:
- MPI, the most popular programming library for clusters. The book offers simple but realistic introductory examples along with pointers for advanced use (see the sketch after this list).
- OSCAR and Rocks, two comprehensive installation and administrative systems
- openMosix, a set of Linux kernel extensions that transparently migrate processes between nodes for load balancing, making it a convenient tool for distributing jobs
- PVFS, one of the parallel filesystems that make clustering I/O easier
- C3, a set of commands for administering multiple systems
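For a taste of the MPI material, here is a minimal "hello world" sketch in C. It is our illustration, not an example from the book; it uses only the five core calls listed under Chapter 13 of the table of contents (MPI_Init, MPI_Comm_size, MPI_Comm_rank, MPI_Get_processor_name, and MPI_Finalize), and the build and launch commands assume an MPICH- or LAM/MPI-style toolchain that provides mpicc and mpirun.

    /* hello.c -- a minimal MPI sketch; illustrative only, not from the book.
       Build:  mpicc hello.c -o hello
       Run:    mpirun -np 4 ./hello   (4 is an arbitrary process count) */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int size, rank, len;
        char name[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);                /* start the MPI runtime          */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* number of cooperating ranks    */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's rank, 0..size-1 */
        MPI_Get_processor_name(name, &len);    /* node this rank is running on   */

        printf("Hello from rank %d of %d on %s\n", rank, size, name);

        MPI_Finalize();                        /* shut down the MPI runtime      */
        return 0;
    }

Every process started by mpirun runs the same binary; the rank returned by MPI_Comm_rank is what lets the copies take on different roles, which is the pattern the book's MPI chapters build on.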
Table of Contents
- High Performance Linux Clusters with OSCAR, Rocks, OpenMosix, and MPI
- SPECIAL OFFER: Upgrade this ebook with O'Reilly
- A Note Regarding Supplemental Files
- Preface
- Audience
- Organization
- Conventions
- How to Contact Us
- Using Code Examples
- Acknowledgments
- I. An Introduction to Clusters
- 1. Cluster Architecture
- 1.1. Modern Computing and the Role of Clusters
- 1.1.1. Uniprocessor Computers
- 1.1.2. Multiple Processors
- 1.1.2.1. Centralized multiprocessors
- 1.1.2.2. Multicomputers
- 1.1.2.3. Cluster structure
- 1.2. Types of Clusters
- 1.3. Distributed Computing and Clusters
- 1.4. Limitations
- 1.4.1. Amdahl's Law
- 1.5. My Biases
- 2. Cluster Planning
- 2.1. Design Steps
- 2.2. Determining Your Cluster's Mission
- 2.2.1. What Is Your User Base?
- 2.2.2. How Heavily Will the Cluster Be Used?
- 2.2.3. What Kinds of Software Will You Run on the Cluster?
- 2.2.4. How Much Control Do You Need?
- 2.2.5. Will This Be a Dedicated or Shared Cluster?
- 2.2.6. What Resources Do You Have?
- 2.2.7. How Will Cluster Access Be Managed?
- 2.2.8. What Is the Extent of Your Cluster?
- 2.2.9. What Security Concerns Do You Have?
- 2.3. Architecture and Cluster Software
- 2.3.1. System Software
- 2.3.2. Programming Software
- 2.3.3. Control and Management
- 2.4. Cluster Kits
- 2.5. CD-ROM-Based Clusters
- 2.5.1. BCCD
- 2.6. Benchmarks
- 3. Cluster Hardware
- 3.1. Design Decisions
- 3.1.1. Node Hardware
- 3.1.1.1. CPUs and motherboards
- 3.1.1.2. Memory and disks
- 3.1.1.3. Monitors, keyboards, and mice
- 3.1.1.4. Adapters, power supplies, and cases
- 3.1.2. Cluster Head and Servers
- 3.1.3. Cluster Network
- 3.2. Environment
- 3.2.1. Cluster Layout
- 3.2.2. Power and Air Conditioning
- 3.2.2.1. Power
- 3.2.2.2. HVAC
- 3.2.3. Physical Security
- 4. Linux for Clusters
- 4.1. Installing Linux
- 4.1.1. Selecting a Distribution
- 4.1.2. Downloading Linux
- 4.1.3. What to Install?
- 4.2. Configuring Services
- 4.2.1. DHCP
- 4.2.2. NFS
- 4.2.2.1. Running NFS
- 4.2.2.2. Automount
- 4.2.3. Other Cluster Filesystems
- 4.2.4. SSH
- 4.2.4.1. Using SSH
- 4.2.5. Other Services and Configuration Tasks
- 4.2.5.1. Apache
- 4.2.5.2. Network Time Protocol (NTP)
- 4.2.5.3. Virtual Network Computing (VNC)
- 4.2.5.4. Multicasting
- 4.2.5.5. Hosts file and name services
- 4.3. Cluster Security
- II. Getting Started Quickly
- 5. openMosix
- 5.1. What Is openMosix?
- 5.2. How openMosix Works
- 5.3. Selecting an Installation Approach
- 5.4. Installing a Precompiled Kernel
- 5.4.1. Downloading
- 5.4.2. Installing
- 5.4.3. Configuration Changes
- 5.5. Using openMosix
- 5.5.1. User Tools
- 5.5.1.1. mps and mtop
- 5.5.1.2. migrate
- 5.5.1.3. mosctl
- 5.5.1.4. mosmon
- 5.5.1.5. mosrun
- 5.5.1.6. setpe
- 5.5.2. openMosixView
- 5.5.3. Testing openMosix
- 5.6. Recompiling the Kernel
- 5.7. Is openMosix Right for You?
- 6. OSCAR
- 6.1. Why OSCAR?
- 6.2. What's in OSCAR
- 6.3. Installing OSCAR
- 6.3.1. Prerequisites
- 6.3.2. Network Configuration
- 6.3.3. Loading Software on Your Server
- 6.3.4. A Basic OSCAR Installation
- 6.3.4.1. Step 0: Downloading additional packages
- 6.3.4.2. Step 1: Package selection
- 6.3.4.3. Step 2: Configuring packages
- 6.3.4.4. Step 3: Installing server software
- 6.3.4.5. Step 4: Building a client image
- 6.3.4.6. Step 5: Defining clients
- 6.3.4.7. Step 6: Setting up the network
- 6.3.4.8. Step 7: Completing the setup
- 6.3.4.9. Step 8: Testing
- 6.3.5. Custom Installations
- 6.3.6. Changes OSCAR Makes
- 6.3.7. Making Changes
- 6.4. Security and OSCAR
- 6.4.1. pfilter
- 6.4.2. SSH and OPIUM
- 6.5. Using switcher
- 6.6. Using LAM/MPI with OSCAR
- 7. Rocks
- 7.1. Installing Rocks
- 7.1.1. Prerequisites
- 7.1.2. Downloading Rocks
- 7.1.3. Installing the Frontend
- 7.1.4. Installing Compute Nodes
- 7.1.5. Customizing the Frontend
- 7.1.5.1. User management with 411
- 7.1.5.2. X Window System
- 7.1.6. Customizing Compute Nodes
- 7.1.6.1. Adding packages
- 7.1.6.2. Changing disk partitions
- 7.1.6.3. Other changes
- 7.2. Managing Rocks
- 7.3. Using MPICH with Rocks
- III. Building Custom Clusters
- 8. Cloning Systems
- 8.1. Configuring Systems
- 8.1.1. Distributing Files
- 8.1.1.1. Pushing files with rsync
- 8.2. Automating Installations
- 8.2.1. Kickstart
- 8.2.1.1. Configuration file
- 8.2.1.2. Using Kickstart
- 8.2.2. g4u
- 8.2.3. SystemImager
- 8.2.3.1. Image server setup
- 8.2.3.2. Golden client setup
- 8.2.3.3. Retrieving the image
- 8.2.3.4. Cloning the systems
- 8.2.3.5. Other tasks
- 8.3. Notes for OSCAR and Rocks Users
- 9. Programming Software
- 9.1. Programming Languages
- 9.2. Selecting a Library
- 9.3. LAM/MPI
- 9.3.1. Installing LAM/MPI
- 9.3.2. User Configuration
- 9.3.3. Using LAM/MPI
- 9.3.4. Testing the Installation
- 9.4. MPICH
- 9.4.1. Installing
- 9.4.2. User Configuration
- 9.4.3. Using MPICH
- 9.4.4. Testing the Installation
- 9.4.5. MPE
- 9.5. Other Programming Software
- 9.5.1. Debuggers
- 9.5.2. HDF5
- 9.5.3. SPRNG
- 9.6. Notes for OSCAR Users
- 9.6.1. Adding MPE
- 9.7. Notes for Rocks Users
- 10. Management Software
- 10.1. C3
- 10.1.1. Installing C3
- 10.1.2. Using C3 Commands
- 10.1.2.1. cexec
- 10.1.2.2. cget
- 10.1.2.3. ckill
- 10.1.2.4. cpush
- 10.1.2.5. crm
- 10.1.2.6. cshutdown
- 10.1.2.7. clist, cname, and cnum
- 10.1.2.8. Further examples and comments
- 10.2. Ganglia
- 10.2.1. Installing and Using Ganglia
- 10.2.1.1. RRDTool
- 10.2.1.2. Apache and PHP
- 10.2.1.3. Ganglia monitor core
- 10.2.1.4. Web frontend
- 10.3. Notes for OSCAR and Rocks Users
- 11. Scheduling Software
- 11.1. OpenPBS
- 11.1.1. Architecture
- 11.1.2. Installing OpenPBS
- 11.1.3. Configuring PBS
- 11.1.4. Managing PBS
- 11.1.5. Using PBS
- 11.1.6. PBS's GUI
- 11.1.7. Maui Scheduler
- 11.2. Notes for OSCAR and Rocks Users
- 12. Parallel Filesystems
- 12.1. PVFS
- 12.1.1. Installing PVFS on the Head Node
- 12.1.2. Configuring the Metadata Server
- 12.1.3. I/O Server Setup
- 12.1.4. Client Setup
- 12.1.5. Running PVFS
- 12.1.5.1. Troubleshooting
- 12.2. Using PVFS
- 12.3. Notes for OSCAR and Rocks Users
- IV. Cluster Programming
- 13. Getting Started with MPI
- 13.1. MPI
- 13.1.1. Core MPI
- 13.1.1.1. MPI_Init
- 13.1.1.2. MPI_Finalize
- 13.1.1.3. MPI_Comm_size
- 13.1.1.4. MPI_Comm_rank
- 13.1.1.5. MPI_Get_processor_name
- 13.2. A Simple Problem
- 13.2.1. Background
- 13.2.2. Single-Processor Program
- 13.3. An MPI Solution
- 13.3.1. A C Solution
- 13.3.2. Transferring Data
- 13.3.2.1. MPI_Send
- 13.3.2.2. MPI_Recv
- 13.3.3. MPI Using FORTRAN
- 13.3.4. MPI Using C++
- 13.4. I/O with MPI
- 13.5. Broadcast Communications
- 13.5.1. Broadcast Functions
- 13.5.1.1. MPI_Bcast
- 13.5.1.2. MPI_Reduce
- 14. Additional MPI Features
- 14.1. More on Point-to-Point Communication
- 14.1.1. Non-Blocking Communication
- 14.1.1.1. MPI_Isend and MPI_Irecv
- 14.1.1.2. MPI_Wait
- 14.1.1.3. MPI_Test
- 14.1.1.4. MPI_Iprobe
- 14.1.1.5. MPI_Cancel
- 14.1.1.6. MPI_Sendrecv and MPI_Sendrecv_replace
- 14.2. More on Collective Communication
- 14.2.1. Gather and Scatter
- 14.2.1.1. MPI_Gather
- 14.2.1.2. MPI_Scatter
- 14.3. Managing Communicators
- 14.3.1. Communicator Commands
- 14.3.1.1. MPI_Comm_group
- 14.3.1.2. MPI_Group_incl and MPI_Group_excl
- 14.3.1.3. MPI_Comm_create
- 14.3.1.4. MPI_Comm_free and MPI_Group_free
- 14.3.1.5. MPI_Comm_split
- 14.4. Packaging Data
- 14.4.1. User-Defined Types
- 14.4.1.1. MPI_Type_struct
- 14.4.1.2. MPI_Type_commit
- 14.4.2. Packing Data
- 14.4.2.1. MPI_Pack
- 14.4.2.2. MPI_Unpack
- 15. Designing Parallel Programs
- 15.1. Overview
- 15.2. Problem Decomposition
- 15.2.1. Decomposition Strategies
- 15.2.1.1. Data decomposition
- 15.2.1.2. Control decomposition
- 15.3. Mapping Tasks to Processors
- 15.3.1. Communication Overhead
- 15.3.2. Load Balancing
- 15.4. Other Considerations
- 15.4.1. Parallel I/O
- 15.4.2. MPI-IO Functions
- 15.4.2.1. MPI_File_open
- 15.4.2.2. MPI_File_seek
- 15.4.2.3. MPI_File_read
- 15.4.2.4. MPI_File_close
- 15.4.3. Random Numbers
- 16. Debugging Parallel Programs
- 16.1. Debugging and Parallel Programs
- 16.2. Avoiding Problems
- 16.3. Programming Tools
- 16.4. Rereading Code
- 16.5. Tracing with printf
- 16.6. Symbolic Debuggers
- 16.6.1. gdb
- 16.6.2. ddd
- 16.7. Using gdb and ddd with MPI
- 16.8. Notes for OSCAR and Rocks Users
- 17. Profiling Parallel Programs
- 17.1. Why Profile?
- 17.2. Writing and Optimizing Code
- 17.3. Timing Complete Programs
- 17.4. Timing C Code Segments
- 17.4.1. Manual Timing with MPI
- 17.4.2. MPI Functions
- 17.4.2.1. MPI_Wtime
- 17.4.2.2. MPI_Wtick
- 17.4.2.3. MPI_Barrier
- 17.4.3. PMPI
- 17.5. Profilers
- 17.5.1. gprof
- 17.5.2. gcov
- 17.5.3. Profiling Parallel Programs with gprof and gcov
- 17.6. MPE
- 17.6.1. Using MPE
- 17.7. Customized MPE Logging
- 17.8. Notes for OSCAR and Rocks Users
- V. Appendix
- A. References
- A.1. Books
- A.2. URLs
- A.2.1. General Cluster Information
- A.2.2. Linux
- A.2.3. Cluster Software
- A.2.4. Grid Computing and Tools
- A.2.5. Cloning and Management Software
- A.2.6. Filesystems
- A.2.7. Parallel Benchmarks
- A.2.8. Programming Software
- A.2.9. Scheduling Software
- A.2.10. System Software and Utilities
- About the Author
- Colophon
- SPECIAL OFFER: Upgrade this ebook with O'Reilly