Speaker
Thomas Ericsson (Chalmers)
Description
Two common ways of speeding up programs are to use
processes or threads executing in parallel. The second
week of the Summer School presents process-based
parallel computing using MPI.
In this lecture, parallelization using threads will be
discussed. Threads can be managed in different ways,
e.g. using a low-level library such as the POSIX threads
library. This lecture deals with OpenMP, which is a more
convenient way of using threads.
OpenMP is a specification for a set of compiler directives,
library routines, and environment variables that can be
used to specify shared memory parallelism in Fortran
and C/C++ programs. The threaded program can be
executed on a shared memory computer (e.g. a multi-core
processor).
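As a small C illustration (the printed message and file layout are
only an example, not material from the lecture), the program below
uses a directive to start a team of threads, library routines to
query the thread number and team size, and can be controlled by the
environment variable OMP_NUM_THREADS; with gcc it would typically be
compiled with the -fopenmp flag.

    #include <stdio.h>
    #include <omp.h>

    int main(void)
    {
        /* The directive creates a team of threads; the team size can
           be set with the environment variable OMP_NUM_THREADS. */
        #pragma omp parallel
        {
            /* omp_get_thread_num() and omp_get_num_threads() are
               two of the OpenMP library routines. */
            printf("Hello from thread %d of %d\n",
                   omp_get_thread_num(), omp_get_num_threads());
        }
        return 0;
    }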
Typically, the iterations of a time-consuming loop are
shared among a team of threads, each thread working
on part of the iterations. The loop is parallelized by
writing a directive in the code; a compiler or a
preprocessor then generates the parallel code, as in the
sketch below.
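For instance, in a loop such as the following (the routine and array
names are made up for illustration), a single directive in front of
the loop is enough to divide the iterations among the threads.

    /* Scale a vector in parallel: the iterations of the loop are
       split among the threads in the team. */
    void scale(int n, double x[], double a)
    {
        #pragma omp parallel for
        for (int i = 0; i < n; i++)
            x[i] = a * x[i];   /* each thread handles part of the range */
    }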
Using short code examples, the lecture covers the most
important OpenMP constructs, how to use them, and
how not to use them. A few more realistic examples are
presented as well.