Parallel Programming: Glossary

Key Points

Introduction to Parallel Computing
  • Parallel computing starts with identifying the concurrency opportunities in a given problem, then deciding on the best paradigm to exploit that concurrency with minimal programming effort.

Simple Parallelism (GNU Parallel, R parallel and Python Multiprocessing)
  • GNU parallel is a simple tool for exploiting embarrassingly parallel problems. You can equally use Python's multiprocessing module or R's parallel library for the same effect.

Multithreading (OpenMP)
  • Both the GCC and Intel compilers offer OpenMP support. Loop iterations can be processed in parallel as long as there are no data dependencies between them.

Distributed Computing (MPI)
  • MPI is the de facto standard for large-scale numerical computing, where a single node, no matter how large, does not have enough cores or memory to run the simulation.
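The basic structure of an MPI program can be sketched as follows (illustrative; it requires an MPI implementation such as Open MPI or MPICH to build with `mpicc` and launch with `mpirun`): every process runs the same program and learns its own identity (rank) within the total set of processes, which is how work is divided across nodes.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);                    /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);      /* this process's id     */
    MPI_Comm_size(MPI_COMM_WORLD, &size);      /* total process count   */
    printf("rank %d of %d\n", rank, size);
    MPI_Finalize();                            /* shut down cleanly     */
    return 0;
}
```

Running with, e.g., `mpirun -np 4 ./a.out` would print one line per process, and the processes may live on different nodes of the cluster.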

HPC Accelerators (CUDA)
  • GPU computing is nowadays a hallmark of HPC clusters; the most powerful computers in the world rely on GPUs for much of their multi-petaFLOP capability.

Glossary

  • Embarrassingly parallel: a problem that splits into many independent tasks that require no communication with each other.
  • Thread: a stream of execution sharing memory with other threads of the same process, as used by OpenMP.
  • OpenMP: a compiler-directive-based API for shared-memory multithreading in C, C++ and Fortran.
  • MPI (Message Passing Interface): a standardized message-passing API for distributed-memory computing across many nodes.
  • Node: a single machine in a cluster, with its own processors and memory.
  • GPU (Graphics Processing Unit): an accelerator with thousands of lightweight cores, commonly programmed with CUDA.
  • FLOPS: floating-point operations per second, a measure of compute performance; a petaFLOP is 10^15 FLOPS.