Aug 12, 2024 to Aug 16, 2024
(Europe/Berlin / UTC+2)



This course will take place as an on-site and in-person event. It is not possible to attend online.


This course gives an introduction to the parallel programming of supercomputers. The focus is on the Message Passing Interface (MPI), the most widely used programming model for systems with distributed memory. In addition, OpenMP will be presented, which is commonly used on shared-memory architectures.

The first four days of the course consist of lectures and short exercises. A fifth day is devoted to demonstrating the use of MPI and OpenMP in a larger context. To this end, starting from a simple but representative serial algorithm, a parallel version will be designed and implemented using the techniques presented in the course.

Topics covered:

Fundamentals of HPC: system architectures, shared and distributed memory concepts.

OpenMP: introduction, parallel constructs, loop and task sharing.

MPI: introduction, point-to-point and collective communication, blocking and non-blocking, data types, I/O, communicators, thread compliance, tools.

Hands-on: tutorial from serial to parallel program.

Agenda: see the timetable for details.

Contents level (in %):

Beginner's contents: 47 %

Intermediate contents: 30 %

Advanced contents: 23 %

Community-targeted contents: 0 %


Prerequisites:

Knowledge of either C, C++, Python, or Fortran; basic knowledge of UNIX/Linux and of a standard UNIX editor (e.g. vi, emacs)

Target audience:

Supercomputer users


Language:

This course is given in English.


Duration:

5 days


Date:

12-16 August 2024, 09:00-16:30 each day


Venue:

tbd (planned as on-site course)

Number of Participants:

minimum 5, maximum 26


Instructors:

Ilya Zhukov, Dr. Jolanta Zjupa, Junxian Chew, JSC

Course material of the last course:

Slides, exercises and tutorials