With growing compute power, the complexity of simulated models grows, producing more and more simulation output. In addition, in many areas the amount of measured data to be compared with these simulated models is growing due to improved scientific instruments.

Managing these huge amounts of input and output data and the corresponding application workflows has become one of the most crucial aspects of optimizing today’s HPC clusters, and it will become even more important over the coming years. While the input and output volumes of applications on HPC systems are growing, the total system memory used by individual applications is approaching sizes that can no longer be handled with today’s classical checkpoint & restart methods and the corresponding infrastructure.

Archiving of scientific data is widely discussed within scientific communities as a way to enable efficient re-use and reproducibility of research. Despite past predictions, most clients today still use tape for archiving because it is cost effective. Using tape in science as an active data archive requires high-performance data and tape management tools, since data is continually read from tape, not only written to it. Seamlessly integrated data and workflow management software and the corresponding storage infrastructure are required to exploit all available technology tiers, from memory down to tape, in order to cope with this new set of challenges for future HPC systems.

This talk will provide insights into IBM HPC technology developed to address these challenges.

 

Slides for this talk