PDC 20th Anniversary and SNIC Interaction

Europe/Stockholm
Tammsvik Konferens och Herrgård

197 91 BRO (homepage: http://www.tammsvik.se/)
Description
Come join PDC in celebrating its 20th anniversary!

A two-day symposium will be held to celebrate the occasion. The symposium will include talks by world-leading scientists on high-performance computing in Europe and Sweden, and on science carried out using such high-end computing power. At this symposium we will also inaugurate our new Cray XT6m system. A poster session highlighting accomplishments by users of high-performance computing in Sweden, held as part of SNIC Interaction, will complement the event. SNIC users are invited to submit an abstract for their poster using the abstract submission feature of this webpage.


Attendance at the event, including lunches and the gala dinner, is free of charge; accommodation can be booked for 1,535 SEK per person.


We are grateful to AMD for kindly sponsoring this event.
  • Monday, 30 August
    • 12:00 13:00
      Lunch 1h
    • 13:00 13:30
      Welcome 30m
      Speaker: Gunnar Landgren (KTH)
    • 13:30 14:15
      The Swedish e-Science Research Centre (SeRC) - advancing e-Science in Sweden 45m
      The Swedish e-Science Research Centre (SeRC) is formed by the universities in Stockholm and Linköping – KTH, Linköping University (LiU), Stockholm University (SU) and Karolinska Institutet (KI) – around the two largest high-performance computing (HPC) centres in Sweden: PDC at KTH and NSC at LiU. Research at SeRC is focused on the collaboration between tool makers and tool users, and brings together a core of nationally leading IT research teams with expertise in e-Science method development and leading scientists in selected application areas. SeRC will constitute a leading visionary e-Science node with a national scope and strong international ties. Substantially increased collaboration between applied and method-oriented groups is needed, and SeRC will provide a platform for this. Our approaches to reach these goals are: 1. Formation of e-Science communities that connect application groups with relevant core e-Science groups and computer experts at PDC and NSC. 2. Research in core e-Science methods such as distributed resources, database technology, numerical analysis, visualization and interaction, mathematical modeling and parallel algorithms, focusing on problems critical for several e-Science communities. 3. Much closer collaboration between PDC and NSC, and a substantial increase in advanced support staff, which will turn the centres into comprehensive e-Science enablers. SeRC is also taking a national responsibility in the e-Science area by hosting a large part of the Swedish e-Science infrastructure through PDC and NSC. Already today these two high-performance computing centres take the nationally leading role, a role that will be further developed within SeRC beyond the hardware aspect of e-Infrastructure.
      Speaker: Prof. Dan Henningson (KTH Mechanics)
      Slides
    • 14:15 15:00
      Multiscale Simulations using particles 45m
      This presentation will discuss a particle-based framework for multiscale simulations of transport processes in problems ranging from fluid mechanics to biology. The talk will highlight recent advances in the development of wavelet-based adaptivity for particle methods and the coupling of atomistic and continuum descriptions, and will discuss the implementation of these methods on massively parallel computer architectures.
      Speaker: Petros Koumoutsakos (ETHZ)
    • 15:00 15:30
      Coffee Break 30m
    • 15:30 16:15
      HP2C.ch – Simulation software development for next-generation supercomputers 45m
      In fall 2009 numerous applications sustained a petaflop/s under production conditions on “Jaguar”, the Cray XT5 system at Oak Ridge National Laboratory, delivering impactful scientific results. Sustained petaflop/s computing thus arrived several years before the first “planned sustained petaflop/s system” – the IBM Blue Waters system at NCSA – is due to come online in 2011. The experiences and lessons learned from the application developments that led to this surprisingly early success of petaflop/s computing will be discussed in the light of ongoing discussions around the roadmap towards exascale computing. Specifically, we will discuss the Swiss platform for High-Performance and High-Productivity Computing (HP2C, http://hp2c.ch), a program that could serve as a model for European contributions toward exascale computing.
      Speaker: Thomas Schulthess (Swiss National Supercomputing Centre (CSCS) at Manno)
      Slides
    • 16:15 16:45
      SNIC Interaction: Gaining Access to European Resources (DEISA, PRACE, HPC-Europa) 30m
      Speakers: Antti Pursula (CSC), Lilit Axner (KTH)
      Slides
      • HPC-Europa2: Research visits with access to HPC resources 15m
        HPC-Europa2 is a European project providing access to powerful supercomputers and funding for research visits within Europe. This presentation will introduce the project and the opportunities it provides for all researchers who use computing resources in their work. Practical information on applying and getting access will be given.
      • DEISA and PRACE 15m
        DEISA, the Distributed European Infrastructure for Supercomputing Applications, and PRACE, the Partnership for Advanced Computing in Europe, are pan-European research infrastructures for High Performance Computing (HPC). PRACE forms the top level of the European HPC ecosystem, while DEISA is a consortium of leading national supercomputing centres that aims at fostering world-leading pan-European computational science research. PRACE provides Europe with world-class systems for world-class science and strengthens Europe’s scientific and industrial competitiveness. PRACE will maintain a pan-European HPC service consisting of up to six top-of-the-line leadership systems (Tier-0), well integrated into the European HPC ecosystem. In the medium term, each system will provide computing power of several petaflop/s (quadrillions of floating-point operations per second). In my talk I will give a short introduction to these two projects and highlight the advantage of having access to a variety of supercomputing architectures for different demanding computing purposes, provided previously by DEISA and currently by PRACE. I will also announce the upcoming project calls and explain the application procedures. For more information please refer to http://www.deisa.eu/ and http://www.prace-project.eu/
    • 16:45 18:00
      SNIC Interaction: Poster Session
      • 16:45
        Using SNIC resources to explore the flow physics around a simplified tractor-trailer model 1m
        This poster will present how the authors have used SNIC computational resources to explore the flow physics of a simplified tractor-trailer model. Even though the geometry is rather simple, the flow around a simplified tractor-trailer model is very complex. It is characterized by large, unsteady vortices that are shed from the geometry. The interaction of these vortices with each other and with the geometry determines important aerodynamic properties such as the drag and lift of the vehicle. The SNIC resources have enabled us to use the computationally costly but more accurate unsteady Large Eddy Simulation (LES) approach when solving the governing Navier-Stokes equations. This allows us to better analyze and understand the unsteady nature of the flow. The poster will show various results from our research.
        Speaker: Mr Jan Östh (Chalmers University of Technology)
      • 16:46
        Flow simulation and optimization using SNIC resources 1m
        At present the landing gear is becoming the dominant source of noise in large airplanes. Simulations of the airflow around a simplified landing gear of an airplane have been performed in order to determine the pressure fluctuations around the landing gear. Different simulation methods have been compared, which gives us information on how to evaluate landing gear performance without having to perform expensive wind-tunnel experiments. Simulations were performed using PANS and LES. Total CPU hours were around 98,000 on the Beda computer cluster. The work was part of the Benchmark problems for Airframe Noise Computations Workshop (BANC-1). Work on an automatic shape optimization process will also be presented. The optimization process is created by connecting three commercial codes in a closed loop. Optimization is a computationally intensive process where each design must be evaluated, and arrangements have been made to decrease this computation time. Results showing the optimization of the rear end of a simplified Volvo car model with respect to drag and lift will be presented. These simulations were run on the computer cluster Neolith using 18,600 CPU hours.
        Speaker: Mr Eysteinn Helgason (Chalmers University of Technology)
      • 16:47
        Massively Parallel Large Scale Automated Adaptive Finite Element CFD 1m
        We present recent advances on automated adaptive finite element CFD for massively parallel architectures, illustrated by high Reynolds number flow simulations and adaptive computation of aeroacoustic sources for rudimentary landing gear. Our implementation shows excellent performance and scalability for a wide range of different architectures, and is freely available as part of the open source project FEniCS.
        Speaker: Mr Niclas Jansson (KTH CSC/NA)
      • 16:48
        SeRC Electronic Structure Community 1m
        This poster presents some of the research going on in the SeRC electronic structure community.
        Speaker: Prof. Anna Delin (KTH)
      • 16:49
        HPC at KTH Mechanics 1m
        High Performance Computing (HPC) has matured into a well-established tool for cutting-edge studies in fluid mechanics, both within academic research and in the corresponding areas of industry (aeronautics, vehicles, energy, etc.). Within KTH Mechanics, Computational Fluid Dynamics (CFD) also has a long tradition, dating back more than 25 years. The last few years have been dominated by an increase in computer power and a clear trend towards large-scale parallelism, allowing us to study more complex geometries, which include involved physics, and to reach higher Reynolds numbers. In principle, we attempt to solve the well-known Navier-Stokes equations, which are known to be the governing equations for laminar, transitional and turbulent flows. However, due to the non-linearity and chaotic behaviour of these equations (turbulence!), large grids are necessary: we have recently performed simulations with up to 10 billion grid points, and also run our codes on up to 32,000 processors. Most of the computations are performed with parallel in-house codes or open-source codes available through collaboration with other research groups. It turns out that HPC applications in CFD are usually limited by processor speed and by communication/network performance for large parallel jobs, which highlights the need both for tightly connected parallel machines and for continuing development of research codes (e.g. hybrid OpenMP/MPI parallelisation). A few of the highlights of the research at KTH Mechanics involving HPC resources are shown on the posters. Our activities can roughly be categorised into three areas: turbulence (including geophysical flows), flow stability and flow control. A) A fully turbulent, spatially developing boundary layer has been studied with fully resolved simulations: the simulated area, which is among the largest studied so far, would cover a section of 20 cm x 1 cm on an Airbus A380 wing. As shown in the poster, special focus is laid on extracting the turbulent structures, which might give indications as to the intrinsic dynamics of turbulence close to walls. B) Stability of the flat-plate boundary-layer flow was studied first without, and at a later stage including, the leading edge of the plate. So-called optimal disturbances to the laminar flow were computed and analysed, enabling the study of the transition process from laminar to turbulent boundary layers. It turns out that the inclusion of the leading edge is relevant to the receptivity of the boundary-layer flow to external disturbances. C) A fully three-dimensional diffuser was modelled at a moderately high Reynolds number where the flow was fully turbulent and three-dimensional separation occurred. In this project, we could for the first time match the simulations with experimental results obtained in a very advanced lab based on MRI scanners. With the computer results it was possible to visualize the time-dependent three-dimensional phenomena inside the diffuser. Increased understanding of such flow situations is relevant to many areas of science and engineering. These results show that HPC plays an integral part in fluid dynamics. With that we would like to acknowledge the support and computer time provided by SNIC and PDC, which have helped us over all these years to perform this exciting research.
        Speakers: Mr Antonios Monokrousos (KTH Mechanics), Mr Peter Lenaers (KTH Mechanics), Dr Philipp Schlatter (KTH Mechanics)
      • 16:50
        Origin of the Anomalous Piezoelectric Response in Wurtzite ScxAl1-xN Alloys 1m
        Recently Akiyama et al. [1] discovered a tremendous increase of about 400% in the piezoelectric moduli of ScxAl1-xN alloys, relative to wurtzite AlN, around x = 0.5. This is the largest piezoelectric response among the known tetrahedrally bonded semiconductors. Since AlN can be used as a piezoelectric material at temperatures up to 1150 °C, and since it can easily be grown c-oriented, AlN-based alloys with such a high response open a route to a dramatic increase in the overall performance of piezoelectric-based devices. Nevertheless, a fundamental understanding of the phenomenon leading to such a dramatic improvement in the piezoelectric properties of AlN is absent. It is unclear whether the enhanced piezoelectric response in ScxAl1-xN is related to the microstructure or is an intrinsic effect of the alloying. Our quantum mechanical calculations [2] show that the effect is intrinsic. It comes from the simultaneous strong change in the response of the internal atomic coordinates to strain and the pronounced softening of the C33 elastic constant. The underlying mechanism is the flattening of the energy landscape due to a competition between the parent wurtzite phase and the so far experimentally unknown hexagonal phase of these alloys. [1] M. Akiyama et al., Adv. Mater. 21, 593 (2009). [2] F. Tasnadi et al., Phys. Rev. Lett. 104, 137601 (2010).
        Speaker: Ferenc Tasnadi (IFM LiU Theoretical Physics)
      • 16:51
        Multi-scale QM/MM high performance computing design of biological markers 1m
        Hybrid quantum mechanics/molecular mechanics (QM/MM) methods, combined with molecular dynamics simulations for conformational sampling, provide a unique opportunity to understand the action mechanisms of optical and magnetic probes at the microscopic level. These methods are capable of predicting the optical and magnetic properties of biomarkers in solution and in protein environments with sufficient accuracy to enable the design and tuning of biomarkers for fluorescence, phosphorescence and electron paramagnetic resonance detection from first principles. This poster describes state-of-the-art QM/MM methods designed for the computation of linear and non-linear molecular properties. An implementation in the DALTON molecular program is presented, and the future development of these methods is outlined, with a focus on massive multi-layer parallelization on modern multicore clusters and emerging heterogeneous CPU/GPU systems. We also describe potential applications of QM/MM methods in connection with the ScalaLife EU FP7 project.
        Speaker: Dr Zilvinas Rinkevicius (Department of Theoretical Chemistry, KTH)
      • 16:52
        Chalmers e-Science Centre - a new initiative 1m
        Chalmers University of Technology has recently created the "Chalmers e-Science Centre". The structure, goals and organisation of the new centre will be discussed, and collaborative opportunities highlighted.
        Speaker: Dr Pär Strand (Chalmers University of Technology)
    • 19:00 22:30
      Gala Dinner 3h 30m
  • Tuesday, 31 August
    • 09:30 10:15
      SNIC 45m
      Speaker: Sverker Holmgren (SNIC)
      Slides
    • 10:15 10:45
      eSSENCE 30m
      Speaker: Göran Sandberg (Lunarc)
      Slides
    • 10:45 11:15
      Coffee Break 30m
    • 11:15 11:45
      20 Years of PDC 30m
      Speaker: Lennart Johnsson (The University of Houston)
    • 11:45 12:15
      Cray Inauguration 30m
      Speakers: Erwin Laure (PDC), Sverker Holmgren (SNIC), Ulla Thiel (Cray)
      Slides
    • 12:30 13:45
      Lunch 1h 15m
    • 14:05 14:30
      Exascale computing, a new challenge ahead 25m
      A brief overview of the Cray XE6 supercomputer will be given. Some key challenges along the road to exaflop computing will also be highlighted.
      Speaker: Mario Mattia (Cray)
      Slides
    • 14:30 15:15
      PRACE - Europe on the Road to Exascale Computing 45m
      Within the last two years a consortium of 20 European countries has prepared the legal and technical prerequisites for the establishment of a leadership-class supercomputing infrastructure in Europe. The consortium, named the "Partnership for Advanced Computing in Europe" (PRACE), has carried out a preparatory-phase project supported by the European Commission. The statutes of the new association, a Belgian "association sans but lucratif", were signed in April 2010 and its inauguration took place in June 2010. So far, four members have committed to provide compute cycles worth €100 million each over the five-year period until 2015. The hosting countries foresee, in succession, the installation of machines of the highest performance class (Tier-0), providing a diversity of architectures beyond petaflop/s performance with increasing capability. Access to the infrastructure is provided on the basis of scientific quality through a pan-European peer-review system under the guidance of the Scientific Steering Committee (SSC) of PRACE. The SSC is a group of leading European peers from a variety of fields in computational science and engineering. Proposals can be submitted in the form of projects, or as programs by communities. In May 2010 a first early-access call was issued, and the provision of computer time through PRACE is foreseen to commence in August 2010 on the supercomputer JUGENE of Research Centre Jülich. Regular provision will start in November 2010. PRACE's Tier-0 supercomputing infrastructure will be complemented by national centres (Tier-1) of the PRACE partners. In the tradition of DEISA, the Tier-1 centres will provide limited access to national systems for European groups, granted through national peer review and synchronized by PRACE. Under the umbrella of PRACE, a first implementation project will start soon, and a second and third are intended to follow in 2011 and 2013, respectively. The short-term goal is to provide petascale facilities, tools, algorithms and applications; the long-term goal is to provide a pathway towards exascale computing.
      Speaker: Kimmo Koski (CSC)
      Slides
    • 15:15 15:45
      Coffee Break 30m
    • 15:45 17:00
      Panel Discussion: Nordic Supercomputing - A call for increased collaboration? 1h 15m
      Over the past couple of years intense discussions on the future landscape of Nordic computing have taken place, and now we see the first concrete steps, with a high-throughput prototype being jointly commissioned by DCSC, UNINETT Sigma, and SNIC, and CSC planning to build a large-scale computer centre in northern Finland. These developments, together with international developments on EGI, DEISA, and PRACE, open a number of questions from both an operational and a usage point of view. In this panel we will try to discuss questions such as:
      - Is there a need for national resources? How much?
      - How can cross-country user access be organized in a fair way?
      - Could remote hosting lead to a drain of knowledge?
      - What systems should be centralized/distributed?
      - Do we need a "Nordic Supercomputer", or are the PRACE resources sufficient? What will be the role of EGI?
      - How should Sweden and the Nordic countries be positioned on the European and international landscape?
      - How should advanced user support be organized?
      - What will be the impact of energy-reuse techniques?
      - What are the metrics to be taken into account for a decision?
      And of course the panel will be happy to discuss any further questions asked by the audience.
      Speakers: Jacko Koster (UNINETT The Norwegian research network), Kimmo Koski (CSC Center for Scientific Computing (Finland)), Lennart Johnsson (PDC/KTH), Sverker Holmgren (SNIC)
    • 17:00 17:15
      Closure 15m