- Lilit Axner (PDC - Center for High-Performance Computing), 23/08/2010, 09:15, Parallel Programming: This lecture gives you an overview of MPI functionality and discusses the value of having a standard message-passing library. It provides a tailored tour of the basic concepts and mechanisms that will be of most value to you as beginning MPI programmers. This includes gaining a familiarity with some of the most commonly used MPI calls, the way in which MPI is initialized and...
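  As an illustration of the start-up pattern this lecture refers to, here is a minimal sketch in C using the standard calls MPI_Init, MPI_Comm_rank, MPI_Comm_size and MPI_Finalize; the printed greeting is just an illustrative placeholder, not part of the course material.

      /* Minimal MPI program: initialize, query rank and size, finalize. */
      #include <mpi.h>
      #include <stdio.h>

      int main(int argc, char *argv[])
      {
          int rank, size;

          MPI_Init(&argc, &argv);                 /* start the MPI runtime      */
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's identifier  */
          MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes  */

          printf("Hello from rank %d of %d\n", rank, size);

          MPI_Finalize();                         /* shut the runtime down      */
          return 0;
      }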
- Lilit Axner (PDC - Center for High-Performance Computing), 23/08/2010, 10:15, Parallel Programming: In point-to-point communication, one process sends a message and a second process receives it. This is in contrast to collective communication routines, in which a pattern of communication is established amongst a group of processes. This lecture will cover the different types of send and receive routines available for point-to-point communication.
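  A minimal sketch of the blocking send/receive pair discussed in this lecture: rank 0 sends one integer and rank 1 receives it. The tag and the value sent are arbitrary illustrative choices; run with at least two processes.

      /* Blocking point-to-point communication between rank 0 and rank 1. */
      #include <mpi.h>
      #include <stdio.h>

      int main(int argc, char *argv[])
      {
          int rank, value = 0;

          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);

          if (rank == 0) {
              value = 42;
              MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
          } else if (rank == 1) {
              MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                       MPI_STATUS_IGNORE);
              printf("Rank 1 received %d\n", value);
          }

          MPI_Finalize();
          return 0;
      }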
- Lilit Axner (PDC - Center for High-Performance Computing), 23/08/2010, 13:15, Parallel Programming: Lab exercises to accompany the material on the basics of MPI programming.
- Lilit Axner (PDC - Center for High-Performance Computing), 23/08/2010, 15:15, Parallel Programming: Lab exercises to accompany the material titled "Point-to-Point Communication I".
- Lilit Axner (PDC - Center for High-Performance Computing), 24/08/2010, 08:30, Parallel Programming: During this question and answer period, we will discuss questions that came up from yesterday's lectures and labs on MPI Basics and Point-to-Point Communication.
- Michaela Lechner (PDC - Center for High-Performance Computing), 24/08/2010, 09:15, Parallel Programming: MPI Point-to-Point Communication I introduced many of the routines related to sending a message between two processes, with a focus on programming with blocking and non-blocking routines. This lecture will discuss another important area: communication mode. Choosing a communication mode gives the programmer some control over how the system handles the message and can improve...
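  As a small sketch of choosing a communication mode: rank 0 below uses the synchronous-mode MPI_Ssend, which only completes once the matching receive has started, in place of the standard-mode MPI_Send. Buffered (MPI_Bsend) and ready (MPI_Rsend) sends follow the same calling convention; the value sent is illustrative.

      /* Synchronous-mode send: completes only after the receive has started. */
      #include <mpi.h>

      int main(int argc, char *argv[])
      {
          int rank, value = 7;

          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);

          if (rank == 0) {
              MPI_Ssend(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
          } else if (rank == 1) {
              MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                       MPI_STATUS_IGNORE);
          }

          MPI_Finalize();
          return 0;
      }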
- Michaela Lechner (PDC - Center for High-Performance Computing), 24/08/2010, 10:15, Parallel Programming: There are certain communication patterns that appear in many different types of applications. Rather than requiring each programmer to code these using point-to-point communication, MPI provides routines that handle these patterns for you, called collective communication routines. This lecture will survey these routines, the communication pattern each one establishes, and their syntax.
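  A minimal sketch of two common collective patterns covered here: MPI_Bcast distributes a value from rank 0 to every rank, and MPI_Reduce sums a per-rank contribution back onto rank 0. The values involved are illustrative.

      /* Broadcast from the root, then reduce (sum) back to the root. */
      #include <mpi.h>
      #include <stdio.h>

      int main(int argc, char *argv[])
      {
          int rank, n = 0, local, sum = 0;

          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);

          if (rank == 0) n = 100;                        /* root sets the value    */
          MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);  /* every rank now has n   */

          local = rank * n;                              /* per-rank contribution  */
          MPI_Reduce(&local, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

          if (rank == 0) printf("Sum = %d\n", sum);

          MPI_Finalize();
          return 0;
      }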
- Michaela Lechner (PDC - Center for High-Performance Computing), 24/08/2010, 13:15, Parallel Programming: The lab that accompanies the material titled "Point-to-Point Communication II".
- Michaela Lechner (PDC - Center for High-Performance Computing), 24/08/2010, 15:15, Parallel Programming: The lab that accompanies the material titled "Collective Communication I".
- Michaela Lechner (PDC - Center for High-Performance Computing), 25/08/2010, 08:30, Parallel Programming: During this question and answer period, we will discuss questions that came up from yesterday's lectures and labs on MPI Point-to-Point Communication II and Collective Communication.
- Olav Vahtras (PDC - Center for High-Performance Computing), 26/08/2010, 09:15, Parallel Programming: When OpenMP is used in conjunction with the Message Passing Interface (MPI), the result provides a second level of parallelism that can yield greater efficiency on clusters of SMP nodes. In this lecture, we will show how to do such hybrid programming using examples.
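  A minimal sketch of the hybrid pattern described above: MPI between nodes, OpenMP threads within each rank. MPI_Init_thread requests MPI_THREAD_FUNNELED, meaning only the master thread makes MPI calls; compile with an MPI compiler wrapper plus the OpenMP flag (e.g. -fopenmp). The printed message is illustrative.

      /* Hybrid MPI + OpenMP: each MPI rank spawns an OpenMP thread team. */
      #include <mpi.h>
      #include <omp.h>
      #include <stdio.h>

      int main(int argc, char *argv[])
      {
          int rank, provided;

          MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);

          #pragma omp parallel
          {
              printf("Rank %d, thread %d of %d\n",
                     rank, omp_get_thread_num(), omp_get_num_threads());
          }

          MPI_Finalize();
          return 0;
      }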
- Olav Vahtras (PDC - Center for High-Performance Computing), 26/08/2010, 10:15, Parallel Programming: A continuation of the lecture on MPI + OpenMP (i.e., hybrid) programming.
- Olav Vahtras (PDC - Center for High-Performance Computing), 26/08/2010, 13:15, Parallel Programming: In this lab exercise, you will parallelize some simple algorithms using shared memory programming (OpenMP) and distributed memory programming (MPI) simultaneously.
- Olav Vahtras (PDC - Center for High-Performance Computing), 27/08/2010, 08:30, Parallel Programming: During this question and answer period, we will discuss questions that came up from yesterday's lectures and labs on MPI + OpenMP (i.e., hybrid) programming.
- Olav Vahtras (PDC - Center for High-Performance Computing), 27/08/2010, 09:15, Parallel Programming: A virtual topology is a mechanism for naming the processes in an MPI communicator in a way that better fits the communication pattern. In this lecture we cover the basic concept behind virtual topologies, the main MPI calls used to create them, and some examples.
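  A minimal sketch of a Cartesian virtual topology of the kind this lecture covers: the ranks of MPI_COMM_WORLD are arranged in a periodic one-dimensional ring with MPI_Cart_create, and MPI_Cart_shift yields each rank's left and right neighbours. The ring layout is an illustrative choice, not the lecture's specific example.

      /* Periodic 1-D ring topology; each rank finds its neighbours. */
      #include <mpi.h>
      #include <stdio.h>

      int main(int argc, char *argv[])
      {
          int size, rank, left, right;
          int dims[1], periods[1] = {1};          /* periodic: wrap around */
          MPI_Comm ring;

          MPI_Init(&argc, &argv);
          MPI_Comm_size(MPI_COMM_WORLD, &size);

          dims[0] = size;
          MPI_Cart_create(MPI_COMM_WORLD, 1, dims, periods, 0, &ring);
          MPI_Comm_rank(ring, &rank);
          MPI_Cart_shift(ring, 0, 1, &left, &right);

          printf("Rank %d: left neighbour %d, right neighbour %d\n",
                 rank, left, right);

          MPI_Comm_free(&ring);
          MPI_Finalize();
          return 0;
      }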
- Olav Vahtras (PDC - Center for High-Performance Computing), 27/08/2010, 10:15, Parallel Programming: A continuation of the lecture on MPI virtual topologies.