Parallel Programming

Computers are not getting intrinsically faster. Physical constraints prevent us from seeing computers with 10 GHz processors. Instead, we see computers with more and more cores. Parallel computing is the use of several processing devices to accelerate computations. There are several strategies for parallel computing, and this lesson presents the most important ones in use today.

Prerequisites

This tutorial requires familiarity with the command line interface; examples that must be compiled into binaries will use C or Fortran. Both OpenMP directives and MPI libraries were created with these languages in mind. Similarly, the CUDA examples use C syntax and are compiled with the NVIDIA compiler.

Schedule

Setup Download files required for the lesson
09:00 1. Introduction to Parallel Computing What is parallel computing, do I really need to learn this?
10:00 2. Simple Parallelism (GNU Parallel, R parallel and Python Multiprocessing) What is embarrassingly parallel computing?
11:00 3. Multithreading (OpenMP) What is OpenMP?
12:00 4. Lunch Break Break
13:00 5. Distributed Computing (MPI) What is Message Passing Interface?
14:00 6. HPC Accelerators (CUDA) What is GPU computing?
15:00 Finish

The actual schedule may vary slightly depending on the topics and exercises chosen by the instructor.