16–27 Aug 2010
KTH main campus
Europe/Stockholm timezone

Session

MPI

23 Aug 2010, 09:15
E3 (KTH main campus)

KTH main campus, Valhallavägen 79

Presentation materials

There are no materials yet.

  1. Lilit Axner (PDC - Center for High-Performance Computing)
    23/08/2010, 09:15
    Parallel Programming
    This lecture gives an overview of MPI functionality and discusses the value of having a standard message-passing library. It provides a tailored tour of the basic concepts and mechanisms that will be of most value to beginning MPI programmers, including gaining familiarity with some of the most commonly used MPI calls, the way in which MPI is initialized and...
  2. Lilit Axner (PDC - Center for High-Performance Computing)
    23/08/2010, 10:15
    Parallel Programming
    In point-to-point communication, one process sends a message and a second process receives it. This is in contrast to collective communication routines, in which a pattern of communication is established amongst a group of processes. This lecture will cover the different types of send and receive routines available for point-to-point communication.
  3. Lilit Axner (PDC - Center for High-Performance Computing)
    23/08/2010, 13:15
    Parallel Programming
    Lab exercises to accompany the material on the basics of MPI programming
  4. Lilit Axner (PDC - Center for High-Performance Computing)
    23/08/2010, 15:15
    Parallel Programming
    Lab exercises to accompany the material titled "Point-to-Point Communication I"
  5. Lilit Axner (PDC - Center for High-Performance Computing)
    24/08/2010, 08:30
    Parallel Programming
    During this question and answer period, we will discuss questions that came up from yesterday's lectures and labs on MPI Basics and Point-to-Point Communication.
  6. Michaela Lechner (PDC - Center for High-Performance Computing)
    24/08/2010, 09:15
    Parallel Programming
    MPI Point-to-Point Communication I introduced many of the routines related to sending a message between two processes, with a focus on programming with blocking and non-blocking routines. This lecture will discuss another important area: communication modes. Choosing a communication mode gives the programmer some control over how the system handles the message and can improve...
  7. Michaela Lechner (PDC - Center for High-Performance Computing)
    24/08/2010, 10:15
    Parallel Programming
    There are certain communication patterns that appear in many different types of applications. Rather than requiring each programmer to code these using point-to-point communication, MPI provides routines that handle these patterns for you, called collective communication routines. This lecture will survey these routines, the communication patterns they establish, and their syntax.
  8. Michaela Lechner (PDC - Center for High-Performance Computing)
    24/08/2010, 13:15
    Parallel Programming
    The lab that accompanies the material titled "Point-to-Point Communication II"
  9. Michaela Lechner (PDC - Center for High-Performance Computing)
    24/08/2010, 15:15
    Parallel Programming
    The lab that accompanies the material titled "Collective Communication I"
  10. Michaela Lechner (PDC - Center for High-Performance Computing)
    25/08/2010, 08:30
    Parallel Programming
    During this question and answer period, we will discuss questions that came up from yesterday's lectures and labs on MPI Point-to-Point Communication II and Collective Communication.
  11. Olav Vahtras (PDC - Center for High-Performance Computing)
    26/08/2010, 09:15
    Parallel Programming
    When OpenMP is used in conjunction with the Message Passing Interface (MPI), the result can provide a second level of parallelism, which has the potential to achieve greater efficiency on clusters of SMP nodes. In this lecture, we will show how to do such hybrid programming using examples.
  12. Olav Vahtras (PDC - Center for High-Performance Computing)
    26/08/2010, 10:15
    Parallel Programming
    A continuation of the lecture on MPI + OpenMP (i.e., hybrid) programming
  13. Olav Vahtras (PDC - Center for High-Performance Computing)
    26/08/2010, 13:15
    Parallel Programming
    In this lab exercise, you will parallelize some simple algorithms using shared memory programming (OpenMP) and distributed memory programming (MPI) simultaneously.
  14. Olav Vahtras (PDC - Center for High-Performance Computing)
    27/08/2010, 08:30
    Parallel Programming
    During this question and answer period, we will discuss questions that came up from yesterday's lectures and labs on MPI + OpenMP (i.e., hybrid) programming.
  15. Olav Vahtras (PDC - Center for High-Performance Computing)
    27/08/2010, 09:15
    Parallel Programming
    A virtual topology is a mechanism for naming the processes in an MPI communicator in a way that better fits the communication pattern. In this lecture we cover the basic concept behind virtual topologies, the main MPI calls used to create them, and some examples.
  16. Olav Vahtras (PDC - Center for High-Performance Computing)
    27/08/2010, 10:15
    Parallel Programming
    A continuation of the lecture on MPI virtual topologies