Principles of Parallelism

Course Description:

The Principles of Parallelism course covers the basics of parallel program design. It provides an overview of parallel platforms, quantifies the overheads of parallel operations, and introduces metrics for the performance and scalability of parallel programs.

The course also describes the methodologies for parallel program design and concludes with a detailed description of APIs for programming message passing platforms using MPI and shared address space platforms using POSIX Threads (Pthreads) and OpenMP.

Students Will Learn:

  • When presented with an algorithm, to design a parallel formulation and implement it on a parallel platform

Course Modules:

Module 1 – Hardware Platforms & Cost Models

  • Learning Objective: Engage with the fundamentals of parallel computing, explore the organization of parallel platforms, analyze cache coherence and snoopy cache systems, and assess communication costs and mapping techniques for interconnection networks (a simple cost model is sketched below).
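
A cost model along these lines is commonly used to quantify point-to-point communication costs; the exact notation here is an illustrative assumption rather than something taken from the course page:

    t_comm = t_s + m * t_w

where t_s is the message startup (latency) cost, t_w is the per-word transfer time, and m is the message length in words. Mapping techniques can then be compared by the communication volume and distances they induce.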

Module 2 – Principles of Parallel Algorithm Design

  • Learning Objective: Evaluate parallel algorithm design, determine the critical path length of a task-dependency graph, compare decomposition techniques, compute data partitionings, analyze the characteristics of tasks, and map data with cyclic and block-cyclic distributions (see the sketch below).
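
To make the last objective concrete, the short C sketch below maps a global array index to its owning process under cyclic and block-cyclic distributions; the function names and block size are illustrative assumptions, not course code:

    #include <stdio.h>

    /* Cyclic distribution: index i is owned by process i mod p. */
    int cyclic_owner(int i, int p) {
        return i % p;
    }

    /* Block-cyclic distribution: blocks of b consecutive indices are
     * dealt out to the p processes in round-robin order. */
    int block_cyclic_owner(int i, int b, int p) {
        return (i / b) % p;
    }

    int main(void) {
        int p = 4, b = 3;
        for (int i = 0; i < 12; i++)
            printf("index %2d -> cyclic %d, block-cyclic %d\n",
                   i, cyclic_owner(i, p), block_cyclic_owner(i, b, p));
        return 0;
    }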

Module 3 – Collective Communication Operations

  • Learning Objective: Identify basic communication operations in parallel programming, describe all-to-all broadcast and reduction, compare all-reduce and prefix-sum operations, evaluate all-to-all personalized communication, and analyze circular shift (an MPI sketch of two of these operations follows).
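
For concreteness, the sketch below shows how two of the operations named above, all-reduce and prefix sum, look in MPI; it is a minimal illustration assuming a working MPI installation, not code supplied by the course:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        int value = rank + 1;   /* each process contributes one value */
        int sum, prefix;

        /* All-reduce: every process receives the global sum. */
        MPI_Allreduce(&value, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

        /* Prefix sum (scan): process i receives the sum of the values
         * contributed by processes 0..i. */
        MPI_Scan(&value, &prefix, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

        printf("rank %d: all-reduce sum = %d, prefix sum = %d\n",
               rank, sum, prefix);

        MPI_Finalize();
        return 0;
    }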

Module 4 – Analytical Models of Parallel Systems

  • Learning Objective: Define analytical modeling, measure performance metrics (execution time, speedup, efficiency, and cost), compare scalability metrics, and assess other metrics for parallel systems (a worked example follows).
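
The metrics in this module are typically defined as follows; the numbers in the worked example are assumptions chosen for illustration:

    speedup     S = T_serial / T_parallel
    efficiency  E = S / p
    cost        C = p * T_parallel

For example, if a program takes T_serial = 100 s on one processor and T_parallel = 16 s on p = 8 processors, then S = 6.25, E = 6.25 / 8 ≈ 0.78, and C = 128 processor-seconds.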

Module 5 – Programming Message Passing & Shared Address Space

  • Learning Objective: Analyze message passing and the MPI library, identify topologies and collective communication operations, apply shared address space programming, explain synchronization principles, and utilize OpenMP (a minimal OpenMP sketch follows).
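
As a taste of the shared address space side of this module, the C sketch below sums an array with an OpenMP parallel for and a reduction clause; it is a minimal, assumed example (compile with -fopenmp), not code supplied by the course:

    #include <omp.h>
    #include <stdio.h>

    #define N 1000000

    int main(void) {
        static double a[N];
        double sum = 0.0;

        for (int i = 0; i < N; i++)
            a[i] = 1.0;

        /* Each thread handles a chunk of iterations; the reduction
         * clause combines the per-thread partial sums safely. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < N; i++)
            sum += a[i];

        printf("sum = %.1f (max threads available: %d)\n",
               sum, omp_get_max_threads());
        return 0;
    }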

Recommended Background:

  • Undergraduate and graduate students whose focus areas involve large-scale application development in design, simulation, or data analytics
  • Computer scientists or engineers with programming backgrounds
