Scientific supercomputing in the spotlight


Almost 250 researchers flocked to the first PASC Conference at ETH Zurich for an interdisciplinary meeting on the use of high-performance computing (HPC) in computational science. For the organisers, the success of the conference, and in particular the tremendous turnout, is a clear sign of strong demand for this kind of knowledge exchange in Europe.

June 12, 2014 - by Simone Ulmer

High-performance computing power has increased relentlessly over the last two decades. Twenty years ago, a “supercomputer” could perform only a few billion mathematical operations per second; today’s high-performance computers manage several quadrillion in the same amount of time, and this figure is predicted to increase another thousand-fold by the end of the decade. However, experts agree that further gains in efficiency cannot be achieved through hardware alone: algorithms and software also need to be adapted to the latest systems, and that is only possible if users and developers of both hardware and software work together. The roughly 250 domain scientists, mathematicians, computer scientists and hardware manufacturers from Switzerland and abroad who attended the first PASC Conference at ETH Zurich on 2-3 June seemed to agree on this point.

Sessions on individual disciplines were run in parallel.

A common concern

According to the organisers, the aim of the conference was to bring together scientists from the various research domains where computer-aided research is conducted in order to promote interdisciplinary collaboration and boost the exchange of expertise in HPC. For Olaf Schenk, a professor at the Università della Svizzera italiana and co-organiser of the event, the fact that the first conference attracted so many participants is a clear sign that its theme is an important one for many researchers.

Although PASC is a Swiss project, in which scientists and developers of software applications for scientific simulation team up with applied mathematicians, computer scientists, the Swiss National Supercomputing Centre (CSCS) and hardware manufacturers, the conference also attracted many international researchers from physics, computer science, mathematics, climate, earth science, materials and life science – the main research domains targeted in the project.

Despite their varied scientific domains, the researchers attending the conference all share a common problem: as supercomputers become more powerful, they are used to tackle ever larger and more complex problems. The machines consume ever more power, and if the software applications running on them are inefficient, electricity costs rise accordingly.



The software dilemma

In the past, the power consumption of supercomputers and the cost of electricity were more manageable, and for many researchers the efficiency of software took a back seat. This is not surprising: researchers are naturally more interested in obtaining results than in efficiency metrics such as “power to solution”, especially when they routinely deal with software applications comprising millions of lines of code developed over the course of several decades. Some scientists, however, recognised that such complacency was not sustainable in the long run. One of them, Erik Lindahl, a biophysicist from the Department of Biochemistry and Biophysics at Stockholm University, spoke about just how frustrating, but ultimately rewarding, this kind of software development project can be.

Lindahl leads the development of the GROMACS molecular simulation package, which is used to simulate biomolecular processes such as the opening and closing of the ion channels in biomembranes that regulate the transport of ions into and out of cells. Development of the GROMACS code began in the late 1990s, but a few years ago Lindahl and his team undertook a major effort to rewrite it to target modern accelerator architectures, most notably graphics processing units (GPUs). The process was painstaking at times, but the effort appears to have paid off: not only does the code now run extremely fast on GPUs, but Lindahl reports that the knowledge and experience his team gained during the rewrite also led to much faster and more efficient code for conventional processors (CPUs).

More than 80 posters were presented at the first PASC Conference.

PASC’s Swiss predecessor project, HP2C (the High-Performance and High-Productivity Computing platform), focused on improving software so that simulation engines could work fundamentally more efficiently. PASC’s aim is to solve key problems in scientific disciplines and to develop application-oriented tools. For instance, it is looking to create framework conditions that facilitate the handling of the large amounts of data generated by the climate, earth science and physics communities.

Six further invited talks and a public lecture, given by experts in their respective research domains, provided additional input for cross-disciplinary discussions. Parallel sessions dedicated to the individual core disciplines left participants spoilt for choice: over sixty talks were presented in total, along with more than 80 research posters.



“The Arrow of Computational Science”

During the PASC Conference, ETH professor Petros Koumoutsakos gave a public lecture on the “Arrow of Computational Science”.

One of the conference highlights was the public lecture “The Arrow of Computational Science” given by Petros Koumoutsakos. The ETH Zurich professor is convinced that computing represents a scientific revolution, arriving at a time when humanity is confronted with no shortage of problems that cannot be solved through theoretical considerations alone. Computing can often serve as a springboard for identifying solutions; at the same time, however, he pointed out weaknesses and areas with room for improvement, such as eliminating sources of error.

A coffee break for resting or further discussions.

Apart from the ever-increasing amounts of data being produced and the appearance of new, sometimes disruptive computer architectures, the major challenge, according to Koumoutsakos, is the gap between hardware and software: less than one tenth of the available performance of supercomputers is typically exploited today. Koumoutsakos has attempted to close this gap with his own research, which won him the Gordon Bell Prize last November. However, we are still a long way from achieving comparable improvements in efficiency for real applications across all the scientific disciplines that use supercomputers. The PASC project is helping to achieve this goal.