An Introduction to
Parallel and Distributed Systems

"As a cell design becomes more complex and interconnected a critical point is reached where a more integrated cellular organization emerges, and vertically generated novelty can and does assume greater importance."    

Carl Woese
Professor of Microbiology, University of Illinois

"At the highest level, we're looking at 'scaling out' (vs. 'scaling up,' as in frequency), with multicore architecture. Basically, instead of having one big x86 processor, you could have 16, 32, 64, and so on, up to maybe 256 small x86 processors on one die. We'll have the transistor count (thanks to Moore's Law) to do incredible things we could only dream about a few years ago."

    "In future architectures, we'll have multiple devices working in parallel. That means you can break a problem up to be solved in pieces. In these new architectures, the biggest problem won't be building the hardware. The biggest challenge will be helping people configure algorithms to solve problems in parallel, and providing the software tools to extract that inherent parallelism out of the programs."

Stephen S. Pawlowski
Chief Technology Officer of Intel's Digital Enterprise Group


  1. Background
    1. Processes and Threads, Sockets and Shared Memory (see the thread sketch after this section)
    2. Parallel Computing
    3. Hardware Parallelism
    4. Functional Parallelism
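
Where the outline above mentions threads and shared memory, a minimal sketch may help fix ideas: two POSIX threads increment one counter that lives in their common address space, with a mutex preventing lost updates. This is illustrative code written for these notes (the file name and loop count are invented), not one of the course examples.

    /* threads_demo.c - compile with: gcc threads_demo.c -pthread */
    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;                     /* shared by both threads */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg)
    {
        for (int i = 0; i < 1000000; i++) {
            pthread_mutex_lock(&lock);           /* serialize the update */
            counter++;
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld\n", counter);      /* 2000000, thanks to the lock */
        return 0;
    }

Processes, by contrast, do not share memory by default; they communicate through kernel mechanisms such as sockets or explicitly created shared-memory segments.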

  2. Supercomputers and General-Purpose Parallel Processing Computers
    1. A General-Purpose Parallel Processing Computer - The Beowulf Cluster
    2. Parallel Architectures - The Course Textbook, Chapter 2

  3. MPI - The Message Passing Interface
    1. hello_COMM_WORLD.c - Ordered Message Passing (a sketch follows this section)
      1. The Terminal View
      2. A Communications Sketch
      3. The Code
      4. WORLD_hello_COMM.c - First Come, First Served
        1. The Terminal View - The Code
    2. Computing pi (a sketch follows this section)
      1. Numerical Integration
      2. pirules.c - Broadcast-Collect
    3. Notes from Chapter 4 of the Textbook
    4. Finding Primes
      1. Moore's Law and Computational Complexity
      2. Guest Lecture - Professor Bhatia
      3. The GNU Multiple Precision Arithmetic Library (GMP)
      4. prime-gmpmpi.c (a sketch follows this section)
    5. Debugging MPI Programs
      1. Debugging the Old-Fashioned Way - with printf()
      2. Using TotalView®
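
A minimal sketch of what an ordered-message-passing hello program looks like (hello_COMM_WORLD.c itself may differ in its details): every nonzero rank sends a greeting to rank 0, and rank 0 receives in rank order, so the terminal output is deterministic.

    /* compile: mpicc hello.c    run: mpirun -np 4 ./a.out */
    #include <mpi.h>
    #include <stdio.h>
    #include <string.h>

    int main(int argc, char *argv[])
    {
        int rank, size;
        char msg[64];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (rank != 0) {
            snprintf(msg, sizeof msg, "hello from rank %d of %d", rank, size);
            MPI_Send(msg, (int)strlen(msg) + 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        } else {
            printf("hello from rank 0 of %d\n", size);
            for (int src = 1; src < size; src++) {   /* fixed order: 1, 2, ... */
                MPI_Recv(msg, (int)sizeof msg, MPI_CHAR, src, 0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                printf("%s\n", msg);
            }
        }
        MPI_Finalize();
        return 0;
    }

A first-come-first-served variant in the spirit of WORLD_hello_COMM.c would receive with MPI_ANY_SOURCE instead of a fixed src, printing greetings in whatever order they arrive.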
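
For "Computing pi," here is a sketch of the classic broadcast-collect integration program (pirules.c may differ): since pi is the integral of 4/(1+x^2) over [0,1], each rank sums the midpoint-rule rectangles of a strided slice of the interval, and MPI_Reduce collects the partial sums at rank 0. The interval count here is arbitrary.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, size, n = 1000000;
        double h, local = 0.0, pi = 0.0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);    /* the broadcast */
        h = 1.0 / n;
        for (int i = rank; i < n; i += size) {           /* strided slices */
            double x = (i + 0.5) * h;                    /* midpoint of slice i */
            local += 4.0 / (1.0 + x * x);
        }
        local *= h;
        MPI_Reduce(&local, &pi, 1, MPI_DOUBLE, MPI_SUM,  /* the collect */
                   0, MPI_COMM_WORLD);
        if (rank == 0)
            printf("pi is approximately %.12f\n", pi);
        MPI_Finalize();
        return 0;
    }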
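
For "Finding Primes," a sketch of how GMP and MPI can combine (prime-gmpmpi.c may be organized differently): each rank runs GMP's Miller-Rabin test, mpz_probab_prime_p(), on a strided slice of odd candidates, and rank 0 totals the counts. The bound and round count are placeholders.

    /* compile: mpicc primes.c -lgmp */
    #include <gmp.h>
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, size;
        long count = 0, total = 0;
        mpz_t n;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        mpz_init(n);

        /* odd candidates 3, 5, 7, ... below 100000, strided across ranks */
        for (long k = 3 + 2L * rank; k < 100000; k += 2L * size) {
            mpz_set_si(n, k);
            if (mpz_probab_prime_p(n, 25) > 0)   /* 25 Miller-Rabin rounds */
                count++;
        }
        MPI_Reduce(&count, &total, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0)
            printf("odd primes below 100000: %ld\n", total);

        mpz_clear(n);
        MPI_Finalize();
        return 0;
    }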


  4. Parallel Algorithm Design
    1. Foster's Design Methodology - The Course Text, Chapter 3
    2. Limiting Factors in Massively Parallel Processing - Amdahl's Law - The Course Text, Chapter 7 (a worked example follows this section)
    3. ? Defeating RSA ?
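
A worked Amdahl's Law example (the numbers are invented for illustration): if a fraction f of a computation is inherently serial, the speedup on p processors is bounded by

    S(p) \le \frac{1}{f + (1 - f)/p}, \qquad \lim_{p \to \infty} S(p) = \frac{1}{f}

With f = 0.05 (a program that is 95% parallelizable) on p = 64 processors, S \le 1/(0.05 + 0.95/64), roughly 15.4, and no number of processors can push the speedup past 1/f = 20. This is the limiting factor the chapter is about: the serial fraction, not the hardware, caps the scaling.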

  5. Monte Carlo Methods
    1. Mathematical Background
    2. Monte Carlo Methods - The Course Text, Chapter 10
    3. pimonte_simple.c (a sketch follows this section)
    4. pimonte_mpi.c
    5. seed_mpi.c
    6. pisprng.c
    7. PRNGs
    8. The Scalable Parallel Random Number Generators Library (SPRNG 2.0)
    9. The Scalable Parallel Random Number Generators Library (SPRNG 4.0)
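
The dartboard idea behind the programs listed above, in a minimal serial sketch (pimonte_simple.c may differ): random points land in the unit square, and the fraction falling inside the quarter circle x^2 + y^2 <= 1 estimates pi/4.

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        long hits = 0, n = 10000000;     /* sample count is arbitrary */
        srand(12345);                    /* fixed seed: reproducible estimate */
        for (long i = 0; i < n; i++) {
            double x = (double)rand() / RAND_MAX;
            double y = (double)rand() / RAND_MAX;
            if (x * x + y * y <= 1.0)
                hits++;
        }
        printf("pi is approximately %f\n", 4.0 * (double)hits / n);
        return 0;
    }

Parallelizing this is where the seeding items above come in: if every MPI rank seeds an identical generator identically, all ranks throw the same darts, which is why per-rank seeds (seed_mpi.c) and parallel generator libraries such as SPRNG matter.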

  6. Linear Systems and Some Analysis of Parallel Algorithms
    1. A Review of Linear Algebra
    2. Three Applications of Matrix Algebra
    3. Matrix-Vector Multiplication - The Course Text, Chapter 8 (a sketch follows this section)
    4. The MPI "Alls" - MPI_Allgather, MPI_Allreduce, and MPI_Alltoall
    5. Communications Patterns
    6. Under the Hood
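
A sketch of rowwise block-striped matrix-vector multiplication, one of the decompositions Chapter 8 treats (this is illustrative code, not the text's program, and it assumes the matrix order is divisible by the process count): each rank computes its block of rows of y = Ax, then one of the "All" collectives, MPI_Allgather, assembles the full result on every rank.

    #include <mpi.h>
    #include <stdio.h>

    #define N 8                               /* small demo order */

    int main(int argc, char *argv[])
    {
        int rank, size;
        double A[N][N], x[N], ylocal[N], y[N];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        int rows = N / size;                  /* rows owned by this rank */

        for (int i = 0; i < N; i++) {         /* every rank builds demo data */
            x[i] = 1.0;
            for (int j = 0; j < N; j++)
                A[i][j] = i + j;
        }
        for (int i = 0; i < rows; i++) {      /* my slice of y = Ax */
            int g = rank * rows + i;          /* global row index */
            ylocal[i] = 0.0;
            for (int j = 0; j < N; j++)
                ylocal[i] += A[g][j] * x[j];
        }
        MPI_Allgather(ylocal, rows, MPI_DOUBLE,
                      y, rows, MPI_DOUBLE, MPI_COMM_WORLD);
        if (rank == 0)
            for (int i = 0; i < N; i++)
                printf("y[%d] = %g\n", i, y[i]);
        MPI_Finalize();
        return 0;
    }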

  7. Parallel Programming on Hybrid Systems - OpenMP and MPI (a sketch follows this section)
    1. Tutorials and Guides
    2. Some Examples
    3. Shared-Memory Programming - The Course Text, Chapter 17
    4. Some Other Pragmas
    5. The API Specification
    6. OpenMP and MPI - The Course Text, Chapter 18
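
A minimal OpenMP sketch to close the outline (written for these notes; the course examples may differ): the same midpoint-rule pi computation as the MPI version, parallelized across threads with a single pragma. The reduction clause gives each thread a private partial sum and combines them safely.

    /* compile: gcc -fopenmp pi_omp.c */
    #include <omp.h>
    #include <stdio.h>

    int main(void)
    {
        const int n = 1000000;
        const double h = 1.0 / n;
        double sum = 0.0;

        #pragma omp parallel for reduction(+:sum)   /* fork, split, combine */
        for (int i = 0; i < n; i++) {
            double x = (i + 0.5) * h;
            sum += 4.0 / (1.0 + x * x);
        }
        printf("pi is approximately %.12f (%d threads available)\n",
               h * sum, omp_get_max_threads());
        return 0;
    }

On a hybrid system, this loop could live inside an MPI rank, with OpenMP threads sharing memory within a node and MPI passing messages between nodes; that combination is the subject of Chapter 18.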

Resources:
  1. Voyager Notes
  2. logpirules notes
  3. pirules code revisited
  4. Lecture Notes, Tutorials, and Reference Materials:
  5. Development Tools and Software:
  6. Software Libraries:
  7. C to HTML translations were done using CToHTML
  8. The WhiteBoard
  9. The Textbook for this course is:
     Parallel Programming in C with MPI and OpenMP, Michael J. Quinn, McGraw-Hill, 2004.