The workshop organised by Giovanni Aloisio (CMCC), Graham Riley (UNIMAN), Carlos Osuna (METEOSWISS) and Sandro Fiore (CMCC) was hosted by DKRZ with the local support of Dela Spickermann and Florian Ziemen, under the supervision of the ESiWACE2 Coordinator Joachim Biercamp. The workshop was funded by the Horizon 2020 project ESiWACE2. Due to the COVID-19 pandemic, the event was held as a virtual conference with approximately 143 participants, mainly from Europe and the US, but also from Brazil, India and Israel.

Scientists working in the fields of Earth system modelling, machine learning, exascale hardware and computing, and programming models attended the workshop, receiving and exchanging information with HPC experts about the latest developments in those fields.

3 sessions, 17 presentations

The agenda of the workshop was organised into three sessions, with a total of 17 talks, focusing on:

Session 1 – Exascale hardware

Session 2 – Programming models and hardware interplay

Session 3 – Machine Learning


Two sessions were organised in the morning (for EU speakers), while the third one was scheduled in the afternoon to allow US speakers to give talks during their local daylight hours.  

All talks were held as a videoconference, whereby the presentation slides as well as the speaker were usually broadcast live to all participants via screen sharing. Questions for the speakers were collected in online documents during the talks and directed by the chairpersons to the speakers afterwards. This approach had already been adopted for other events (e.g. the 6th ENES HPC Workshop) and proved to be effective for this virtual workshop, too.

Giovanni Aloisio (CMCC) and Graham Riley (UNIMAN) opened the workshop and gave a general overview of its agenda and goals.

Exascale hardware


Graham Riley (UNIMAN) chaired the first session, which focused on Exascale hardware.

Thomas Schulthess (CSCS) gave a presentation on “A useful definition of exascale computing for weather and climate modelling”, presenting computational power aspects at large scale, a use case based on COSMO 5.0 and IFS (“the European Model”) run at global scale on Piz Daint, as well as future exascale goals towards 2022, including data challenges and the link with PRACE Tier-1 supercomputing machines.

Jesus Labarta (BSC) presented Future HPC systems made in Europe. This talk aimed at answering the questions: “Which will be the next HPC system? What impact will it have on our codes? Which is the status of future computing systems in Europe?” The talk discussed the European Processor Initiative (EPI) project and in particular its RISC-V vector accelerator targeting HPC, starting from the performance analysis of weather forecasting HPC codes to show how insights and architectural implications can be derived that influence the design of future HPC systems.

Jean-Marc Denis (ATOS/EPI) gave a talk on the European Processor Initiative and the European approach for the Exascale age. This talk discussed the transition from existing homogeneous architectures to Exascale-class modular architectures. The consequences for the compute components, including the general-purpose processor and the associated accelerators, were addressed. Ultimately, all these considerations led to the guidelines that have ruled the design of the European microprocessor that will power the European Exascale supercomputers.

Kimmo Koski (CSC/LUMI) presented the LUMI initiative, a EuroHPC pre-exascale system in the North, which is a joint effort by the European Commission and 31 countries to establish a world-class supercomputing ecosystem in Europe. The aim is to install the first three "precursor to exascale" supercomputers in Finland, Italy and Spain (and five petascale systems in various countries). The talk discussed the LUMI infrastructure and its great value and potential for the research community.

Estela Suarez (Jülich Supercomputing Centre) gave a talk on the Modular Supercomputing Architecture for Exascale. The talk discussed the Modular Supercomputing Architecture developed within the EU-funded DEEP project series. Reaching Exascale compute performance at an affordable budget requires increasingly heterogeneous HPC systems, which combine general-purpose processing units (CPUs) with acceleration devices (e.g. graphics cards (GPUs) or many-core processors). The proposed modular architecture aims to orchestrate all these resources at system level, organising them in compute modules. The goal is to provide cost-effective computing at extreme performance scale, fitting the needs of a wide range of computational sciences. The talk described the architecture’s history, its hardware and software elements, and its current and upcoming implementations.

Programming models and hardware interplay


Carlos Osuna (METEOSWISS) chaired the second session, which focused on Programming models and hardware interplay.

John Goodacre (University of Manchester) gave a talk on “The Euroexa system architecture for exascale”. The talk described the EuroEXA project, an H2020 Towards Exascale FETHPC co-design project that takes a holistic view across the entire hardware infrastructure and software stack, reconsidering many inherited assumptions about how the resources of a computer connect and interact so as to maximise computational efficiency. The talk discussed how energy is provided to the system, how the electronics and processing subsystems interact, and the computational models and their acceleration and interconnect, through to what the system does to enable a meaningful reuse of the computational byproduct: heat.

Rupert Ford (STFC) presented a talk about “Developing DSLs in ESiWACE2”. The talk showed the current state of the art in domain-specific languages developed for weather and climate models in the exascale era. DSLs are of interest as they provide a separation of concerns between the scientific simulation and the hardware-dependent implementation and optimisations. The talk focused on the progress made by two DSLs developed and adopted into models within the ESiWACE2 project: PSyclone and Dawn.
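
To make the separation-of-concerns idea concrete, here is a minimal, hypothetical Python sketch (not actual PSyclone or Dawn code): the scientist writes only a pointwise kernel, while a separate layer owns the mesh loop and is free to parallelise, reorder or offload it without touching the science.

import numpy as np

def advect_kernel(q_pair, u, dx, dt):
    # Science code: first-order upwind advection update for one cell,
    # given the pair (upwind value, centre value).
    return q_pair[1] - u * dt / dx * (q_pair[1] - q_pair[0])

def apply_kernel(kernel, field, wind, dx, dt):
    # Stand-in for the generated "PSy layer": it owns the loop and could
    # apply OpenMP, GPU offload or halo exchanges without changing the
    # kernel above.
    new = field.copy()
    for i in range(1, field.size):
        new[i] = kernel(field[i-1:i+1], wind, dx, dt)
    return new

q = np.exp(-((np.linspace(0.0, 1.0, 100) - 0.3) ** 2) / 0.01)
q = apply_kernel(advect_kernel, q, wind=1.0, dx=0.01, dt=0.005)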

Harald Köstler (University of Erlangen-Nuremberg) gave a presentation on “Whole program code generation for Ocean simulation”. The talk presented a short overview of different code generation approaches and the ExaStencils project, which started as a domain-specific language for multigrid solvers on structured grids. Meanwhile, the external DSL ExaSlang is mature enough to express full application models for ocean simulation. As an example, the talk showed the whole-program generation for the shallow water equations on block-structured grids created from a real-world geometry and using a higher-order discontinuous Galerkin discretisation. The generated code is highly scalable on current supercomputers and portable to GPU and CPU architectures.
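
For reference, one standard conservative form of the shallow water equations that such a generated solver discretises reads (the exact formulation used in the talk may differ):

\[
\partial_t h + \nabla \cdot (h\,\mathbf{u}) = 0, \qquad
\partial_t (h\,\mathbf{u}) + \nabla \cdot \Big( h\,\mathbf{u} \otimes \mathbf{u} + \tfrac{1}{2} g h^2 \mathbf{I} \Big) = -\,g\,h\,\nabla b,
\]

where \(h\) is the water depth, \(\mathbf{u}\) the depth-averaged velocity, \(g\) the gravitational acceleration and \(b\) the bottom topography.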

Simon McIntosh-Smith (University of Bristol) gave a talk on “Exascale programming models: beyond MPI+X”. The talk gave an overview of existing and popular programming models for HPC and made projections about promising programming models for Exascale systems, with heterogeneous architectures, millions of degrees of parallelism, and deep memory hierarchies. In particular, it reviewed the usage of the most popular model, MPI+X, but also gave hints on the usage of other solutions for both fine-grain and inter-node parallelism, such as Kokkos or PGAS. Some predictions and recommendations for developers of scientific codes aiming for Exascale were also presented.
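
For readers unfamiliar with the MPI+X pattern, the following minimal sketch (assuming a working MPI installation and the mpi4py package) shows the two levels: MPI handles the inter-node decomposition, while the "X" (on-node) level is represented here by vectorised numpy, standing in for OpenMP, Kokkos or CUDA in a compiled model code.

import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_global = 1_000_000
n_local = n_global // size                          # each rank owns one chunk
x = np.random.default_rng(rank).random(n_local)

local_sum = np.sum(x * x)                           # "X": data-parallel on-node work
global_sum = comm.allreduce(local_sum, op=MPI.SUM)  # MPI: reduce across ranks
if rank == 0:
    print("global L2 norm:", np.sqrt(global_sum))

Launched e.g. with mpirun -n 4 python norm.py, the same script scales from one node to many without changing the on-node code.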

Daniele Lezzi (BSC) gave a talk on “Programming dynamic workflows in the Exascale Era”. This talk presented the recent activities of the Workflows and Distributed Computing group at BSC to develop a workflow software stack and an additional set of services to enable the convergence of HPC, big data analytics (HPDA) and machine learning in scientific and industrial applications. This framework allows the development of innovative, adaptive and dynamic workflows that make efficient use of heterogeneous computing resources and also leverage innovative storage solutions. An application of the framework to a biomolecular dynamics use case was also presented, showing how the tools allow developers to run very large executions that make efficient use of upcoming pre-exascale supercomputers.
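
The BSC workflow stack is built around COMPSs/PyCOMPSs; assuming that is the framework meant here, a minimal sketch of the programming style looks roughly as follows (API per the PyCOMPSs documentation; the science routines are hypothetical placeholders). Plain Python functions are marked as tasks, and the runtime builds the dependency graph and schedules tasks dynamically over the available, possibly heterogeneous, resources.

from pycompss.api.task import task
from pycompss.api.api import compss_wait_on

@task(returns=1)
def simulate(step, params):
    return run_md_chunk(step, params)       # hypothetical simulation routine

@task(returns=1)
def analyse(result):
    return summary_statistics(result)       # hypothetical analytics routine

params = {"temperature": 300.0}             # hypothetical parameter set
partials = [analyse(simulate(s, params)) for s in range(100)]
results = compss_wait_on(partials)          # synchronise with the runtime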

Iva Kavcic (UK Met Office) gave a talk on “LFRic and PSyclone: Utilising DSLs for performance portability”. The talk gave an overview of LFRic, the new weather and climate model developed by the UK Met Office, and some of the design principles of its dynamical core on a semi-structured cubed-sphere mesh. The presentation focused on the separation of concerns applied to the LFRic model via the PSyclone DSL. It showed how PSyclone enhances the use of distributed-memory parallelism via optimisations such as asynchronous halo-exchange transformations and redundant computation into annexed dofs. The talk gave further examples of the impact of PSyclone on LFRic performance.
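
The asynchronous halo-exchange idea can be illustrated with a generic mpi4py sketch (not actual LFRic/PSyclone-generated code): halo messages are posted non-blocking, the interior is computed while they are in flight, and only the halo-dependent edge points wait for communication to complete.

import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
left, right = (rank - 1) % size, (rank + 1) % size

n = 1024
u = np.random.default_rng(rank).random(n)
halo_l, halo_r = np.empty(1), np.empty(1)

# Post non-blocking halo sends/receives (tags disambiguate direction).
reqs = [comm.Isend(u[:1], dest=left, tag=0),
        comm.Isend(u[-1:], dest=right, tag=1),
        comm.Irecv(halo_l, source=left, tag=1),
        comm.Irecv(halo_r, source=right, tag=0)]

interior = 0.5 * (u[2:] - u[:-2])     # overlap: interior needs no halo data
MPI.Request.Waitall(reqs)             # halos have now arrived
edge_l = 0.5 * (u[1] - halo_l[0])     # halo-dependent edge points
edge_r = 0.5 * (halo_r[0] - u[-2])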

Machine learning


Giovanni Aloisio (CMCC) chaired the third session, which focused on Machine Learning.

Peter Dueben (ECMWF) presented a talk on Machine learning for weather predictions at ECMWF, outlining how machine learning, and in particular deep learning, could help to improve weather predictions in the coming years, and gave an overview of the machine learning work ongoing at the European Centre for Medium-Range Weather Forecasts.

Torsten Hoefler (ETH Zürich) presented a talk on Deep Learning for Post-Processing Ensemble Weather Forecasts, discussing uncertainty quantification in weather forecasts, which typically employs ensemble prediction systems consisting of many perturbed trajectories run in parallel. A mixed prediction and post-processing model based on a subset of the original trajectories was proposed. The model is based on a deep learning approach to account for non-linear relationships that are not captured by current numerical models or other post-processing methods.
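
As a toy illustration of the general idea (not the architecture from the talk), the following PyTorch sketch trains a small network, on synthetic data, to map a handful of ensemble members to a calibrated forecast mean and spread via a Gaussian negative log-likelihood.

import torch
import torch.nn as nn

k = 5                                        # ensemble members in the subset
net = nn.Sequential(nn.Linear(k, 64), nn.ReLU(),
                    nn.Linear(64, 2))        # outputs: mean and log-variance

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(200):
    members = torch.randn(256, k) + 1.0      # synthetic ensemble subset
    truth = members.mean(dim=1, keepdim=True) + 0.3 * torch.randn(256, 1)
    mu, logvar = net(members).chunk(2, dim=1)
    loss = 0.5 * (logvar + (truth - mu) ** 2 / logvar.exp()).mean()  # Gaussian NLL
    opt.zero_grad(); loss.backward(); opt.step()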

Oliver Dunbar (Caltech) gave a talk about efficiently constraining parameter uncertainty in a General Circulation Model using targeted data. Climate prediction relies upon closure models for subgrid-scale processes that are infeasible to resolve globally. These closures feature model parameters, and the talk focused on quantifying the uncertainty of these parameters. A closure for moist convection within an idealised aquaplanet general circulation model (GCM) was considered, using a Calibrate-Emulate-Sample (CES) philosophy to feasibly perform uncertainty quantification on the closure parameters, making use of Gaussian process emulation.
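
The following toy sketch conveys the CES idea (in the actual method the design points come from an ensemble Kalman calibration, and the setting is a GCM rather than a scalar function): a Gaussian process is fitted to a few expensive model runs, and MCMC sampling is then performed on the cheap emulator instead of the model itself.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def expensive_model(theta):                 # stand-in for a GCM statistic
    return np.sin(3 * theta) + theta ** 2

# "Calibrate": a handful of design runs of the expensive model
theta_design = np.linspace(-2, 2, 12).reshape(-1, 1)
y_design = expensive_model(theta_design).ravel()

# "Emulate": fit a Gaussian process to the design runs
gp = GaussianProcessRegressor().fit(theta_design, y_design)

# "Sample": random-walk Metropolis on the emulated data misfit
y_obs, sigma = 0.5, 0.1
def log_post(theta):
    pred = gp.predict(np.array([[theta]]))[0]
    return -0.5 * ((pred - y_obs) / sigma) ** 2

rng = np.random.default_rng(0)
theta, chain = 0.0, []
for _ in range(5000):
    prop = theta + 0.2 * rng.normal()
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop
    chain.append(theta)                     # posterior samples of the parameter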

Pierre Gentine (Columbia University) presented a talk titled “Hybrid modeling: best of both worlds?”. The talk showed how physical constraints, such as the conservation of mass and energy, can be built into machine learning models. This hybridisation of machine learning algorithms, imposing physical knowledge within them, was presented to show how it can help with different issues and offers a promising avenue for climate applications and process understanding.
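
One simple way to impose such a constraint exactly, sketched below on a toy problem, is to project the raw network outputs onto the affine subspace satisfying a linear conservation law A y = b (this is one generic strategy, not necessarily the one used in the talk).

import numpy as np

def project_onto_constraint(y, A, b):
    # Minimal-change correction: y' = y - A^T (A A^T)^{-1} (A y - b),
    # the closest point to y that satisfies A y' = b exactly.
    correction = A.T @ np.linalg.solve(A @ A.T, A @ y - b)
    return y - correction

# Toy example: force the predicted tendencies to sum to zero (one
# hypothetical conservation constraint on a 4-component output).
A = np.ones((1, 4))                         # "sum of outputs" operator
b = np.zeros(1)                             # conservation target
y_raw = np.array([0.3, -0.1, 0.4, 0.2])     # hypothetical network output
y_fixed = project_onto_constraint(y_raw, A, b)
assert abs(float(A @ y_fixed - b)) < 1e-12  # constraint now holds exactly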

Claire Monteleoni (University of Colorado) gave a presentation on Climate Informatics: Machine Learning for the Study of Climate Change. She gave an overview of the climate informatics research carried out at the University of Colorado, focusing on challenges in learning from spatio-temporal data and climate model projections, along with semi-supervised and unsupervised deep learning approaches to studying rare and extreme events, and to downscaling temperature and precipitation.

Noah Brenowitz (Vulcan Inc.) presented a talk on machine learning of moist-physics parameterisations for a climate model using coarse-graining of global cloud-resolving model output. The talk presented attempts to build machine learning parameterisations for use in increasingly complex simulations of the atmosphere, highlighting how ML parameterisations can interact cleanly with fluid-mechanics simulations and how their behaviour can be interpreted. While these efforts target the improvement of atmospheric models, similar techniques could be applied to sub-grid-scale problems throughout the Earth sciences.

The question time was lively in all three sessions, with many questions for the speakers. Questions were collected in three Google Docs to facilitate the interaction between the speakers and the audience, and to manage a larger number of questions than could be posed and answered in the short time dedicated to the Q&A.

Reaching a larger audience by going virtual


Held as a virtual meeting instead of a face-to-face workshop, the event attracted a larger and broader audience. Scientists lacking the funding to attend on-site events were able to participate, and without the need for travel and accommodation, the carbon footprint of the workshop was significantly reduced.

The program committee of the ESiWACE2 Virtual Workshop on Emerging Technologies for Weather and Climate Modelling consisted of Giovanni Aloisio (CMCC), Graham Riley (UNIMAN), Carlos Osuna (METEOSWISS) and Sandro Fiore (CMCC).

A second virtual workshop on the same subject in the context of the ESiWACE2 project takes place on 7th October 2022; see the event page.

Further information on the workshop can be found on the event page.