
All Project Partners

DKRZ

Deutsches Klimarechenzentrum GmbH

DKRZ, the German Climate Computing Centre, provides state-of-the-art supercomputing and data service infrastructure to the German and international climate research community. More than 1000 national and international scientific users rely on the technical infrastructure of DKRZ to generate, process, analyze and visualize huge amounts of data in the context of climate research.

As important as providing the infrastructure for research, DKRZ delivers general user support and specific support for problems related to scientific computing. Here DKRZ benefits from its role as an acknowledged HPC expert, especially in the field of code optimization and parallelization, and participates in many national and international projects.

Role in the project

In ESiWACE, DKRZ coordinates the project in WP5 (Coordination). This includes the scientific and administrative responsibility for the work of the consortium.

Moreover, DKRZ will co-lead WP1 (Governance) and WP4 (Exploitability), work packages that benefit from DKRZ's role as an interface between the scientific and technical communities.

Apart from this, DKRZ will contribute to WP2 (Scalability) and WP3 (Usability). 

Names of the colleagues involved

Dr. Joachim Biercamp (coordinator), Dr. Kerstin Fieg, Chiara Bearzotti, Prof. Thomas Ludwig, Dr. Julian Kunkel, Dr. Kerstin Ronneberger, Dr. Panagiotis Adamidis, Dr. Hendryk Bockelmann, Sonja Kempe, Katja Brendt

Relevant infrastructure and services available for climate & weather

DKRZ operates MISTRAL, one of the largest supercomputers in Germany, with more than 3000 bullx B700 DLC compute nodes (approx. 100,000 cores), a Lustre file system with more than 50 petabytes of disk storage, and 3 petaflops peak performance (summer 2016).

With respect to data services, DKRZ operates the ICSU World Data Center for Climate with more than 550 TB of fully documented climate data and field-based data access. DKRZ is a non-profit and non-commercial limited company and participates in many national and international projects related to climate modeling.

Website

http://www.dkrz.de

Allinea

ARM Limited acquired Allinea in full in December 2016. Allinea Software is the trusted leader in development tools and application performance analytics software for high performance computing (HPC), and one of the fastest growing companies in the sector. Allinea has offices in the US, UK and Japan, and a global network of resellers and partners. Leading users of HPC turn to Allinea for extremely scalable, capable and intuitive tools that improve the efficiency and value of HPC investment by reducing development time and increasing application performance. Allinea’s integrated profiling and debugging tools are relied on in fields ranging from climate modeling to astrophysics, and from computational finance to engine design. Its performance analytics software improves the performance and throughput of HPC systems by analyzing the applications that are run.

Role in the project

Allinea will contribute to the enablement of software tools within the ESM workflow framework, encourage best practices to enable large-scale application development and analysis, and support end-users in running outstanding simulations to advance numerical weather prediction and gain new insight into climate science.

Names of the colleagues involved

Dr. Patrick Wohlschlegel, Dr. Florent Lebeau

Relevant infrastructure and services available for climate & weather

Allinea provides software development tools (including Allinea DDT, Allinea MAP and Allinea Performance Reports) to help climate and weather experts design and produce the best possible science at small and extreme scale.

Allinea provides state of the art training in application debugging & profiling and endeavors to promote the best professional practices to improve the accuracy and the pace of climate and weather research.

Website

http://www.allinea.com

CERFACS

Thanks to its strong expertise in code coupling and the central role played by the OASIS coupler in the European climate community, CERFACS was heavily involved in the set-up of the IS-ENES1 (2009-2012) and IS-ENES2 (2012-2016) projects and now actively participates in IS-ENES2 as leader and co-leader of 2 work packages and leader of the HPC task force. CERFACS is also involved in several other e-infrastructure and scientific FP7 or H2020 European projects: PRIMAVERA, 2015-2019 (WP leader); PREFACE, 2014-2018 (WP leader); CLIPC, 2014-2018 (participant); SPECS, 2012-2016 (WP leader); EUDAT, 2011-2015 (task leader).

Role in the project

CERFACS is co-leader of WP2 and is mainly involved in tasks 2.1 (Support, training and integration of state-of-the-art community models and tools), 2.2 (Performance analysis and inter-comparisons) and 2.3 (Efficiency enhancement of models and tools). CERFACS also participates in task 2.4 (Preparing for exascale) and in WP1 (task 1.1, Engagement and governance, and task 1.2, Enhancing community capacity in HPC).

Names of the colleagues involved

Sophie Valcke, Gabriel Jonville, Thierry Morel, Marie-Pierre Moine

Relevant infrastructure and services available for climate & weather

CERFACS provides the OASIS coupler to the community.

Website

http://www.cerfacs.fr

ECMWF

European Centre for Medium-Range Weather Forecasts

The European Centre for Medium-Range Weather Forecasts (ECMWF) is an international organisation supported by 34 European and Mediterranean States. ECMWF's longstanding principal objectives are the development of numerical methods for medium-range weather forecasting, the operational delivery of medium- to seasonal-range weather forecasts for distribution to the meteorological services of the Member States, the leadership of scientific and technical research directed at the improvement of these forecasts, and the collection and storage of appropriate meteorological data. ECMWF has extensive competence in operating complex global forecasting suites on high-performance computers and in transitioning top-level science from research to operations, exploiting innovative approaches in computing science to fulfil the tight runtime and delivery constraints required by Member States. ECMWF has signed the delegation agreement with the European Commission to operate the Copernicus Atmosphere Monitoring Service and the Copernicus Climate Change Service.

Role in the project

Apart from being the co-coordinator of the ESiWACE project, ECMWF is also coordinating WP2 “Scalability”, which links closely to ECMWF’s own Scalability Programme, launched in 2013, which aims at developing the next-generation forecasting system addressing the challenges of future exascale high-performance computing and data management architectures.

ECMWF will support the integration of the OpenIFS (Open Integrated Forecast System) model in the climate community EC-Earth system, and will contribute to detailed performance assessment and code optimization work enhancing the level of concurrency and the overlap of communication and computation. ECMWF will also investigate the information content of ensemble model output to propose ways for significantly reducing data volume produced by long model integrations.
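
As a generic illustration of the overlap of communication and computation mentioned above, the sketch below shows the standard non-blocking MPI pattern in C. It is not ECMWF or IFS code; the buffer names and sizes are invented for the example, and the "computation" is a placeholder loop.

/* Generic illustration of overlapping communication with computation
 * using non-blocking MPI; the halo-exchange buffers and the interior
 * update are placeholders, not model code. */
#include <mpi.h>

#define N 10000

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, nranks;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    double halo_send[N], halo_recv[N], interior[N];
    for (int i = 0; i < N; i++) {
        halo_send[i] = rank;
        interior[i]  = i;
    }

    int right = (rank + 1) % nranks;
    int left  = (rank - 1 + nranks) % nranks;

    /* start the halo exchange with non-blocking calls ... */
    MPI_Request reqs[2];
    MPI_Irecv(halo_recv, N, MPI_DOUBLE, left,  0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(halo_send, N, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, &reqs[1]);

    /* ... and update interior points that do not depend on halo data
     * while the messages are in flight */
    for (int i = 0; i < N; i++)
        interior[i] *= 2.0;

    /* wait only when the halo values are actually needed */
    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);

    MPI_Finalize();
    return 0;
}

The benefit of this pattern grows with the amount of interior work that can be scheduled between posting the messages and waiting for them, which is exactly the kind of concurrency enhancement targeted in the performance work described above.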

Names of the colleagues involved

Dr. Peter Bauer, Dr. Sami Saarinen, Dr. Glenn Carver, Dr. Tiago Quintino, Dr. Daniel Thiemert

Relevant infrastructure and services available for climate & weather

ECMWF's computer facility includes supercomputers, archiving systems and networks. ECMWF's multi-petaflops supercomputer facility is designed for operational resiliency, featuring two Cray XC30 systems and independent Cray Sonexion storage systems. The system comprises two independent subsystems located in separate halls. It has separate resilient power and cooling systems to protect against a wide range of possible failures. Each subsystem consists of 19 Cray XC30 cabinets equipped with Intel Ivy Bridge processors and around 3500 dual-socket compute nodes per system, a number of Cray development and login nodes, and more than 6 petabytes of Lustre storage, with the ability to cross-mount the Lustre file systems between the halls.

ECMWF produces operational forecasts, archives and disseminates global model output to member states under tight schedules employing its computing and data handling infrastructure.

ECMWF also operates a large-scale data handling system, in which all ECMWF users can store and retrieve data that is needed to perform weather modelling, research in weather modelling and mining of weather data.

Website

http://www.ecmwf.int

CNRS-IPSL

Centre National de la Recherche Scientifique - Institut Pierre Simon Laplace

The Centre National de la Recherche Scientifique (CNRS) is the main French public research institution under the responsibility of the French Ministry of Education and Research. CNRS acts here in the name of the Institut Pierre Simon Laplace (IPSL), a federal institute located in Paris and composed of 9 research laboratories working on global environmental and climate studies. IPSL gathers about a thousand scientists and represents more than a third of the French research potential in atmospheric and oceanic sciences. The main IPSL laboratories involved in ESiWACE are the Laboratoire des Sciences du Climat et de l'Environnement and the Laboratoire d’Océanographie et du Climat. One of the main objectives of IPSL is to understand climate variability, both natural and anthropogenic, and its future evolution, at global and regional scales.

IPSL's work relies on the development of Earth system models of different complexity (e.g. IPSL-ESM). IPSL is strongly involved in IPCC Working Group 1. CNRS-IPSL coordinates IS-ENES and IS-ENES2, and has also been involved in several European projects such as ENSEMBLES, METAFOR, EMBRACE and CRESCENDO. Since the 1980s, CNRS-IPSL has pioneered the development of a numerical model of the global physical ocean, taking HPC issues into account from the very beginning. This led to the NEMO Consortium, involving different research and operational oceanography centres in Europe (CNRS through LOCEAN and LGGE, CMCC, INGV, Mercator Ocean, UK Met Office, NOC), which join efforts for the sustainable development of NEMO (www.nemo-ocean.eu). Today, CNRS is, among the consortium partners, the largest contributor in terms of number of experts to the NEMO System Team, including the Scientific Leader and the Project Manager, and leads the NEMO HPC working group. CNRS-IPSL and the Commissariat à l’Energie Atomique et aux Energies Alternatives (CEA) have developed XIOS, a software library dedicated to efficient I/O management for climate models.

Role in the project

CNRS-IPSL will lead ESiWACE's WP1 and substantially contribute to WP2 on NEMO and XIOS.

Names of the colleagues involved

Dr. Sylvie Joussaume, Marie-Alice Foujols, Sébastien Denvil, Claire Lévy, Françoise Pinsard, Dr. Sébastien Masson, Dr. Yann Meurdesoif, Arnaud Caubel

Relevant infrastructure and services available for climate & weather

 HPC: The High Performance Computing resources are provided by the French national supercomputing facility GENCI.

Modelling frameworks and tools: The IPSL Earth System Model, available in different configurations at different resolutions, is in permanent evolution to reflect state-of-the-art numerical climate science. The model used for CMIP5, IPSL-CM5, includes 5 component models: LMDz (atmosphere), NEMO (ocean, oceanic biogeochemistry and sea-ice), ORCHIDEE (continental surfaces and vegetation), and INCA (atmospheric chemistry), coupled through OASIS. Such a system also includes an I/O library (IOIPSL), an assembling and compiling environment (modipsl), an execution environment (libIGCM) and a set of post-processing tools. 80 IPSL-ESM users are registered in IPSL and associated laboratories, whereas about 200 people use one or more components separately. IPSL-CM5 is used in about 50 European projects, and more than 550 projects access its IPCC result database.

Support services (currently provided by the project IS-ENES2) for IPSL-ESM consist in the provision of a contact person and in the maintenance of the model description, using the CIM metadata format, on the ENES portal.
NEMO (Nucleus for European Modelling of the Ocean) is a state-of-the-art modelling framework for oceanographic research, operational oceanography, seasonal forecast and climate studies.
NEMO version 3.4.1 (Jan 2012) includes 5 major components: the blue ocean (ocean dynamics, NEMO-OPA), the white ocean (sea-ice, NEMO-LIM), the green ocean (biogeochemistry, NEMO-TOP), the adaptive mesh refinement software (AGRIF), and the assimilation components NEMO_TAM and NEMO_OBS, plus some reference configurations allowing users to set up and validate applications and a set of scripts and tools (including pre- and post-processing) to use the system. NEMO allows some of these components to work together or in standalone mode. It is interfaced with the other components of the Earth system (atmosphere, land surfaces, ...) via the OASIS coupler. NEMO is intended to be a portable platform, currently running on a number of computers.

Support services (currently provided by the project IS-ENES2) for NEMO: NEMO is available as a source code, after registration (on the NEMO web site) and free licence agreement. 6 reference configurations are available, with their downloadable input files.

XIOS (XML-IO-SERVER) is a library dedicated to flexible and efficient I/O management for climate models. XIOS manages diagnostic output and history files and performs temporal and spatial post-processing operations (averaging, max/min, instantaneous values, etc.).

Data infrastructure: As part of ENES, IPSL contributes to the European joint effort in the set-up and maintenance of the Earth System Grid Federation (ESGF).

Support services (currently provided by the project IS-ENES2) for users and data managers on the model data hosted by ESGF include: provision of information on how to search and access data from the internationally-coordinated WCRP model intercomparison experiments (CMIP5, CORDEX), and on how to synchronize local data repositories with data hosted on the ESGF.

Website

http://www.cnrs.fr/

http://www.ipsl.fr/

MPI-M

Max-Planck-Institut für Meteorologie (DE)

The Max Planck Institute for Meteorology (MPI) performs basic research in the interest of the general public. Its mission is to understand the Earth’s changing climate. It comprises three departments (The Atmosphere in the Earth System, The Land in the Earth System, The Ocean in the Earth System) and hosts independent research groups focused on: Fire in the Earth System, Forest Management in the Earth System, Sea Ice in the Earth System, Stratosphere and Climate, Turbulent Mixing Processes in the Earth System. Scientists at MPI investigate what determines the sensitivity of the Earth system to perturbations such as the changing composition of its atmosphere, and work towards establishing the sources and limits of predictability within the Earth system. MPI develops and analyses sophisticated models of the Earth system which simulate the processes within atmosphere, land, and ocean. Such models have developed into important tools for understanding the behavior of our climate. Models form the basis for international assessments of climate change. Targeted in-situ measurements and satellite observations complement the model simulations. MPI is committed to informing public and private decision-makers and the general public on questions related to climate and global change. Together with the University of Hamburg, MPI runs an international doctoral programme, the International Max Planck Research School on Earth System Modelling (IMPRS-ESM), to promote high-quality doctoral research into the Earth’s climate system, hosting approximately 50 PhD students per year. MPI is actively involved in the cluster of excellence "Integrated Climate System Analysis and Prediction" (CliSAP), a research and training network whose goal is to bridge the gap between natural sciences, economics and humanities, creating synergies for analysing natural and human-caused climate change and developing scenarios for the future. MPI is the major shareholder of the German Climate Computing Centre (DKRZ GmbH), the coordinator of the ESiWACE project. DKRZ is an outstanding research infrastructure for model-based simulations of global climate change and its regional effects. DKRZ provides the tools and associated services needed to investigate the processes in the climate system, computer power, data management, and guidance to use these tools efficiently.

Role in the project

  • WP1: Networking activity on strategy and governance development
  • WP2: Involvement in performance analysis: Providing code and expertise on code, with focus in I/O and coupling
  • WP3: WP lead in close cooperation with BSC; requirements capture, use case definition, programming work, testing, implementation of end-to-end workflow-related software packages, including documentation / recommendation white papers; networking activities on co-design
  • WP5: Assistance to project lead

Names of the colleagues involved

Reinhard Budich, Luis Kornblueh, Karl-Herrmann Wieners

Relevant infrastructure and services available for climate & weather

Apart from the CDOs (Climate Data Operators) and making scientific code (ICON, …) available, none

Website

http://www.mpimet.mpg.de

BSC

Barcelona Supercomputing Center

The Barcelona Supercomputing Center-Centro Nacional de Supercomputación (BSC-CNS, BSC henceforth), created in 2005, has the mission to research, develop and manage information technology in order to facilitate scientific progress not only in computer science but also in a large range of applications. More than 350 people from 40 different countries perform and facilitate research into Computer Sciences, Life Sciences, Earth Sciences and Computational Applications in Science and Engineering at the BSC. The BSC is one of the eight Spanish “Severo Ochoa Centre of Excellence” institutions selected in the first round of this prestigious programme in Spain, as well as one of the four hosting members of the European PRACE Research Infrastructure. The BSC hosts MareNostrum III, a Tier-0 PRACE system currently ranked as the 24th most powerful supercomputer in Europe (57th in the world) with 1 Pflop/s capacity. In addition, the BSC hosts other High-Performance Computing (HPC) resources, among which it is worth mentioning MinoTauro, a hybrid system with GPUs incorporated.

The Earth Sciences Department of the BSC (ES-BSC) was established with the objective of carrying out research in Earth system modelling. The ES-BSC conducts research on air quality, mineral dust and climate modelling and strongly contributes to the scientific and technological advancement of atmospheric and mineral dust modelling. In this sense, the ES-BSC develops and maintains a state-of-the-art mineral dust model: NMMB/BSC-Dust. The excellent results of the group in this field have contributed to the recent creation of the first World Meteorological Organization (WMO) Regional Meteorological Center specialized in atmospheric sand and dust forecasting, the “Barcelona Dust Forecast Center”, in which the NMMB/BSC-Dust model has been selected as the reference mineral dust model. Currently the model provides mineral dust forecasts to the WMO Sand and Dust Storm Warning Advisory and Assessment System (SDS-WAS) Northern Africa-Middle East-Europe (NAMEE) Regional Centre, which is managed by a consortium between the Spanish Weather Service (AEMET) and BSC. Furthermore, BSC and UNEP are collaborating in the development and implementation of the WMO SDS-WAS West Asia Regional Centre, in which the NMMB/BSC-Dust model is designed to perform mineral dust forecast simulations. ES-BSC also undertakes research on the development and assessment of dynamical and statistical methods for the prediction of global and regional climate on time scales ranging from a few weeks to several years. The EC-Earth model is used for this purpose. The formulation of the predictions includes the development and implementation of techniques to statistically downscale, calibrate and combine dynamical ensemble and empirical forecasts to satisfy specific user needs in the framework of the development of a climate service.

The high-performance capabilities of MareNostrum III and the close collaboration with the Computer Sciences department make it possible to efficiently increase the spatial and temporal resolution of atmospheric modelling systems, in order to improve our knowledge of the dynamic patterns of air pollutants in complex terrains and of the interactions and feedbacks of physico-chemical processes occurring in the atmosphere, as well as to push the boundaries of global climate prediction. Added to this is the increasing collaboration between the different BSC departments in the rapidly growing field of Big Data. BSC therefore offers a unique infrastructure to carry out the range of Earth system simulations for which the ES-BSC is a worldwide reference.

Role in the project

BSC’s contribution to ESiWACE is concentrated in WPs 1, 2 and 3. The centre has vast experience in usability issues, as the group not only uses the BSC infrastructure but also relies on several other machines. The department has managed to deploy climate models on platforms distributed around the globe, such as ARCHER (Edinburgh), Jaguar (USA), Lindgren (Sweden) and Curie (France). BSC has already organized a summer school on climate modelling (the 2nd E2SCMS) and therefore has the knowledge required to successfully perform this task. We will apply this expertise co-leading WP3 with MPI-M.

Due to the nature of the institution, which is partially devoted to computer science, we also bring expertise to WP2 (Scalability), leading task 2.3. ES-BSC can offer a wide range of services (based on software and hardware) to ESiWACE to analyse and improve the performance of Earth system models. Increasing climate model efficiency requires the application of a wide set of tools to analyse and understand the behaviour of these models running in a parallel environment. BSC, through the Tools Group, develops tools like Paraver and Dimemas that help users of these codes understand their behaviour and identify possible bottlenecks and hardware-related problems of the application. Furthermore, BSC provides the OmpSs programming model and COMP Superscalar (COMPSs), both developed in-house. OmpSs can exploit parallelism through data dependencies and use different devices (GPUs, accelerators) in a way that is transparent for the user, while COMPSs is a programming model that aims to ease the development of applications for distributed infrastructures such as clusters, grids and clouds.
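
To illustrate the data-dependency style of task parallelism that OmpSs supports, here is a minimal C sketch. It is not taken from any BSC code base; the function names and block sizes are invented, and the in/out/inout clause spelling should be read as indicative of the OmpSs model rather than as authoritative reference syntax (a plain C compiler will simply ignore the pragmas and run the program sequentially).

/* Minimal sketch of OmpSs-style task parallelism driven by data
 * dependencies; all names and sizes are illustrative only. */
#include <stdio.h>

#define NBLOCKS   4
#define BLOCKSIZE 1024

static void compute_block(double *block, int n, double seed)
{
    /* producer: fill one block with values */
    for (int i = 0; i < n; i++)
        block[i] = seed + i;
}

static void reduce_block(const double *block, int n, double *sum)
{
    /* consumer: accumulate the block into a running sum */
    for (int i = 0; i < n; i++)
        *sum += block[i];
}

int main(void)
{
    double blocks[NBLOCKS][BLOCKSIZE];
    double sum = 0.0;

    for (int b = 0; b < NBLOCKS; b++) {
        /* task that writes blocks[b] */
        #pragma omp task out(blocks[b])
        compute_block(blocks[b], BLOCKSIZE, (double)b);

        /* task that reads blocks[b]; the runtime schedules it only
         * once the producer task above has completed */
        #pragma omp task in(blocks[b]) inout(sum)
        reduce_block(blocks[b], BLOCKSIZE, &sum);
    }

    /* wait for all outstanding tasks before using the result */
    #pragma omp taskwait
    printf("sum = %f\n", sum);
    return 0;
}

The design idea is that the runtime builds a task graph from the declared in/out/inout accesses and schedules tasks as their inputs become available, which is what allows the same annotated code to exploit CPUs and accelerators without the user writing explicit synchronization.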

Relevant infrastructure and services available for climate & weather

BSC-CNS hosts MareNostrum, the most powerful supercomputer in Spain. MareNostrum has a peak performance of 1.1 petaflops, with 48,896 Intel Sandy Bridge processors in 3,056 nodes and 84 Xeon Phi 5110P cards in 42 nodes, with more than 104.6 TB of main memory and 2 PB of GPFS disk storage. This infrastructure will be upgraded during 2016.

BSC-CNS hosts MinoTauro, an NVIDIA GPU cluster with 64 nodes, each one carrying two Intel E5649 (6-core) processors and two NVIDIA M2090 GPU cards. The system will be extended with 39 more nodes, each with Intel Haswell processors and two NVIDIA K80 cards.

BSC-CNS has dedicated storage for the Earth Sciences Department of more than 600 TB net capacity, which will grow to 2 PB over the following two years.

BSC-CNS will also deploy an ESGF node in the coming years.

Website

http://www.bsc.es

Wiki for the Earth Science department at https://earth.bsc.es/wiki/doku.php

Code repository at https://earth.bsc.es/gitlab/

STFC

Science and Technology Facilities Council (UK)

The Science and Technology Facilities Council (STFC) is one of seven research councils in the UK. Its facilities, instruments, and expertise support an extremely wide range of research at universities, in research councils, and in industry. The research council is one of Europe’s largest multi-disciplinary research organisations, and has considerable world-class research expertise in areas ranging from nanostructures to lasers, from particle physics to cosmology, and from high performance computing and supercomputing to peta-scale data management.

STFC, in particular, provides services to NERC, the Natural Environment Research Council, to support climate modelling and climate data management. These services include data services for CEDA, the Centre for Environmental Data Archival, and the JASMIN high performance computing cluster, which is backed by the largest Panasas storage system in the world (or at least the largest that is not secret). The services also include the tapestore, which holds climate data for NERC, archiving the data and managing access for researchers within STFC and NERC and in the wider research community. As an example, consider the Fifth Assessment Report (AR5) of the Intergovernmental Panel on Climate Change (IPCC): very large data volumes and rigorous analysis are required to develop trustworthy results, and SCD’s science infrastructure for climate research holds about two thirds of all the data used for the report.

As a partner in this project, STFC’s work is split across the Scientific Computing Department (SCD), which provides the expertise in data storage and processing, and the RAL Space department, which will provide the expertise in climate modelling and the data formats used.

Role in the project

STFC’s role in the project is twofold: first to investigate and optimise metadata management for climate data files, in collaboration with the ESiWACE partners and the HDF group, and secondly to investigate the future of data storage for climate data. Currently STFC runs a large tape-backed datastore and metadata databases; data services are provided to, for example, the Natural Environment Research Council (www.nerc.ac.uk) via the Centre for Environmental Data Archival (www.ceda.ac.uk).

Names of the colleagues involved

Prof. Bryan Lawrence (National Centre for Atmospheric Science and STFC RAL Space), Dr Jens Jensen (STFC Scientific Computing Department), Dr Martin Juckes (STFC RAL Space), Brian Davies (STFC Scientific Computing Department)

Relevant infrastructure and services available for climate & weather

"STFC runs the JASMIN facility for climate research. As of Jan. 16, JASMIN comprised 4,000 CPU cores interconnected with 10Gb/s networks providing 8 microsecond latency MPI. Data storage is provided by the largest single site Panasas storage system in the world: with 16PB capacity and 3 Tb/s bandwidth, tested transferring 250 gigabytes per second. Archive storage is provided by the tapestore with a nominal capacity of 180PB, and currently holding about 7.5PB of climate data."

Website

http://www.stfc.ac.uk

MetO

Met Office (UK)

The Met Office (MetO) has been operating as a Trading Fund since 1996, originally as an Executive Agency of the UK Ministry of Defence (MoD).  As part of a Machinery of Government change in July 2011 MetO became a Trading Fund within the Department for Business, Innovation and Skills (BIS). As the UK’s national meteorological service, it provides a range of products and services to a large number of public and private sector organisations. It also represents the UK within the World Meteorological Organisation (WMO) and plays a prominent role in international meteorology. 

MetO is one of the world's leading providers of environmental and weather-related services. It delivers proven weather-related services for many different types of industry on a twenty-four hour basis. Many of these services are time critical. MetO is involved in many areas of research and development in the fields of atmospheric and oceanic sciences and observations. Its research and development activities aim to improve the accuracy of its weather forecast services and the efficiency with which they can be produced. This enables its customers to benefit from the progressive international advancement of weather forecasting techniques.

MetO provides the Met Office Hadley Centre Climate Programme, which is supported by the Department of Energy and Climate Change (DECC) and the Department for Environment, Food and Rural Affairs (Defra). Their investment provides the core science on which Government can make decisions to help the UK become resilient to climate variability and change, benefit from opportunities for growth, and engage in international climate negotiations. For example, research findings from the programme help ensure cost-effective deployment of renewable energy and a resilient future for the nation's infrastructure. To achieve this, the Hadley Centre needs a large production facility to run complex multi-model integrations and ensembles of integrations as well as a resource for research and development. These models can run over periods of months and are time critical to meet deadlines for the customer and for the Intergovernmental Panel on Climate Change (IPCC), producing significant output that needs analysis over long periods of time.

MetO has a long experience in developing successful software infrastructures to support both Weather and Climate scientists and models including archive systems, user interfaces, build and configuration management systems.

Role in the project

In Task 2.2, we will contribute to the benchmarking of I/O servers and coupling technologies in the context of Met Office models and on Met Office HPC systems.

In Task 3.3, we will lead Tasks 3.3.1 to 3.3.3 on the development and support of the Cylc meta-scheduler as defined in the grant agreement.

Names of the colleagues involved

Mick Carter, Dave Matthews, Mike Hobson, and members of Dave Matthews’ team depending on the specific tasks required by the community.

Relevant infrastructure and services available for climate & weather

Two Cray XC40 systems with a mixture of Intel Haswell and Broadwell CPUs, with a combined 6,212 nodes, 218,752 cores and 12 PB of storage.

An HPSS-based active archive with a tape library system whose capacity will grow to 800 PB in 2017, with an 8 PB DDN disk cache provided by SGI.

A 36-node SGI scientific processing cluster with a 1 PB high-performance DDN file system.

Website

http://www.metoffice.gov.uk/

UREAD

University of Reading

The Department of Meteorology at the University of Reading (UREAD) is the largest in Europe with over 20 teaching staff, 80 research staff and around 50 PhD students. It has received the highest research rating of 5* in all UK Research Assessment Exercises, indicating an international reputation in all aspects of research. It is a member of Reading’s Walker Institute for Climate System Research, established to promote integrative research across the University. This is reflected in the long-standing presence of staff from the UK Met Office, and the presence of the Natural Environment Research Council (NERC) funded National Centre for Atmospheric Science (NCAS) and the National Centre for Earth Observation (NCEO). The department also works closely with the European Centre for Medium-Range Weather Forecasts (ECMWF), which is located close to the University.

The Department hosts the Computational Modelling Services group of the UK’s National Centre for Atmospheric Science (NCAS-CMS). The NCAS-CMS group at Reading provides modelling support to the UK academic community for a wide range of climate and Earth-system areas on several supercomputer platforms. The scope of support provided is wide ranging, covering areas as diverse as code management; model performance optimization; and access to and management of high-performance compute and data services. It currently consists of 12 scientists, several of whom are on grants from NERC and other organizations, with a core of staff funded by NCAS. NCAS-CMS has strong links with the UK Met Office and works closely with them on many aspects of model infrastructure development and deployment. In addition to providing support, NCAS-CMS is actively developing software for data processing, analysis, and visualization.

Role in the project

NCAS-CMS will contribute to the design of the specification to define a framework in which several modelling systems can be run seamlessly and will implement the specification for use in the 3rd E2SCMS Summer School.

Names of the colleagues involved

Dr. Grenville Lister, Prof. Pier Luigi Vidale

Relevant infrastructure and services available for climate & weather

NCAS-CMS manages much of the HPC atmospheric modelling infrastructure and resource allocation on the National HPC Service (ARCHER) for the UK academic community, through model installation, support, optimization, and the provision of end-to-end numerical modelling project services. We develop and maintain software for use in data processing, data analysis, and data visualization.

NCAS-CMS delivers bi-annual training for the UK Met Office Unified Model and related software tools and utilities.

Website

http://cms.ncas.ac.uk/

SMHI

Sveriges meteorologiska och hydrologiska institut (SE)

SMHI (http://www.smhi.se) is a government agency under the Swedish Ministry of Environment. SMHI offers products and services that provide organisations with important environmental information to support decision-making. The main fields include weather and climate forecasts/projections, industry-specific services, simulations and analyses. SMHI has a strong R&D focus, with climate research involving all six of its research sections, including the Rossby Centre, which is responsible for the development and application of regional and global climate models. In particular, the Rossby Centre is active in the development of EC-Earth, being responsible for the development and release of the most recent generation, EC-Earth 3. The Rossby Centre also has extensive experience in the development and application of advanced regional climate models.

Role in the project

SMHI will coordinate the efforts to provide community-wide access to the NEMO and EC-Earth models. This will include user support facilities as well as improvements to the scientific software development process in ESiWACE. SMHI will contribute to the development of climate model performance metrics and provide performance benchmark results for the EC-Earth model. SMHI will also assess performance optimisations for the EC-Earth model. Finally, SMHI will analyse performance enhancements and maintain new developments in forthcoming EC-Earth releases.

More specifically:

WP1, T1.4: Contributor to Roadmap for HPC for ESM, providing input from the EC-Earth development community

WP2, T2.1: Task lead. Coordinating the monitoring of the level of support, training and integration for EC-Earth, NEMO, OASIS, and XIOS. Focal point for the integration of OpenIFS in EC-Earth, as well as a substantial part of the implementation.

WP2, T2.2: Liaison with IS-ENES2 WP9 related to performance metrics for ESM. Performance analysis for EC-Earth, particularly coupling and I/O.

WP2, T2.3: Coordination of performance optimisation for EC-Earth within the EC-Earth development community. Contribution to actual optimisation implementation and testing.

WP2, T2.4: EC-Earth development community contact point for ECMWF, coordinating the testing and integration of knowledge compression features.

Names of the colleagues involved

Uwe Fladrich, Martin Evaldsson (scientific software developer), Klaus Wyser (climate scientist)

Website

http://www.smhi.se

For references about EC-Earth, see http://www.ec-earth.org.

ICHEC

National University of Ireland Galway - Irish Centre for High End Computing (IE)

ICHEC, founded in 2005, is Ireland's national high performance computing centre. Its mission is to provide High-Performance Computing (HPC) resources, support, education and training for researchers in third-level institutions and, through technology transfer and enablement, to support Irish industries large and small, contributing to the development of the Irish economy.

ICHEC works on code optimisation and development of climate and weather codes with academia and public organisations, in particular the EC-Earth climate model, where it is a consortium member, and the ’Harmonie’ weather model with Met Éireann in the HIRLAM consortium.

ICHEC has experience providing operational services for Met Éireann, the national weather service, since 2007. This involves redundant compute and computational scientist support as part of a scientific collaboration, where ICHEC scientists optimise and develop weather and climate codes on next-generation systems. This has recently expanded to include emergency dispersion modelling for the EPA (Environmental Protection Agency), the RPII (radiation) and the Dept. of Agriculture (foot and mouth disease dispersion); Met Éireann and ICHEC have also demonstrated flood forecasting for the Irish Office of Public Works.

ICHEC manages an Earth System Grid Federation (ESGF) portal for climate model data on behalf of the EC-Earth consortium, publishing data on behalf of 14 organisations; we have developed processing workflows and data management systems for this.

Role in the project

ICHEC’s contribution to WP2 is dedicated to tasks 2.3 and 2.4. ICHEC will work on the integration of GRIB2 format file output into the XIOS (XML I/O Server) library from IPSL. I/O is a major bottleneck for climate and weather codes, and ICHEC has worked with IPSL on integrating the XIOS library into the EC-Earth climate model, adding memory caching for scaling, and GRIB format writing.

ICHEC plans to add two components. First, it will use current and planned changes by IPSL to the memory layout of XIOS to enable GRIB writing of large files. Currently GRIB output is limited by the need to do an in-memory transpose, requiring all of a dataset to be in memory simultaneously. Planned changes by IPSL in the server layer will make it possible to complete the transpose over multiple nodes, enabling high-resolution GRIB writing. ICHEC, as the original author of the GRIB code, will adapt it for this.
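
As a conceptual sketch of what completing the transpose over multiple nodes can look like (this is not XIOS source code, and every name in it is invented for illustration), the C/MPI fragment below assembles each complete field on a different rank in round-robin fashion, so that no single node has to hold every field at once before GRIB encoding:

/* Conceptual sketch only: spreading the field transpose needed for
 * GRIB encoding over all ranks, so that no single node has to hold
 * every complete field at once.  All names are illustrative. */
#include <mpi.h>
#include <stdlib.h>

#define NFIELDS    8      /* number of 2-D fields to be written      */
#define LOCAL_NPTS 1024   /* grid points owned locally by each rank  */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, nranks;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    /* each rank owns a slice of every field (field-major layout) */
    double *local = malloc(NFIELDS * LOCAL_NPTS * sizeof(double));
    for (int f = 0; f < NFIELDS; f++)
        for (int i = 0; i < LOCAL_NPTS; i++)
            local[f * LOCAL_NPTS + i] = rank + 0.001 * f;

    /* field f is assembled on rank f % nranks, so the memory needed
     * to hold complete fields is distributed round-robin instead of
     * being concentrated on a single writer node */
    double *full = malloc((size_t)nranks * LOCAL_NPTS * sizeof(double));
    for (int f = 0; f < NFIELDS; f++) {
        int root = f % nranks;
        MPI_Gather(local + f * LOCAL_NPTS, LOCAL_NPTS, MPI_DOUBLE,
                   full, LOCAL_NPTS, MPI_DOUBLE, root, MPI_COMM_WORLD);
        if (rank == root) {
            /* here the complete field in 'full' would be handed to a
             * GRIB encoder; the encoding call is omitted in this sketch */
        }
    }

    free(full);
    free(local);
    MPI_Finalize();
    return 0;
}

The point of the pattern is simply that each writer only needs memory for its own share of the complete fields, which is what removes the single-node memory limit on high-resolution GRIB output.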

Secondly, the current GRIB code is limited to lat-long and simple Gaussian outputs. There is no GRIB equivalent of the NetCDF unstructured grid outputs. We will work with partners (in particular ECMWF) to standardize GRIB output for unstructured grids, and provide GRIB output on unstructured and icosahedral grids.

Names of the colleagues involved

Dr Alastair McKinstry

Relevant infrastructure and services available for climate & weather

  • ICHEC’s primary HPC facility is “Fionn”, a 7,680-core SGI ICE X / SGI UV 2000 system (147 Tflops peak) with additional accelerator and high-memory regions. This has 560 TB of formatted Lustre storage and multiple login nodes, including dedicated nodes for NWP and emergency service use
  • ICHEC is also part of the eINIS collaboration, managing an Earth System Grid Federation (ESGF) node managing climate model data on 1 PB of storage based at DIAS in Dublin.

Website

http://www.ichec.ie/

CMCC

Fondazione Centro Euro-Mediterraneo sui Cambiamenti Climatici (IT)

The Euro-Mediterranean Center on Climate Change (CMCC; http://www.cmcc.it/) is a foundation that aims at furthering knowledge in the field of climatic variability, including its causes and consequences, through the development of high resolution simulations and impact models. CMCC gathers the know-how of its founding institutions (Istituto Nazionale di Geofisica e Vulcanologia, Università del Salento, Centro Italiano di Ricerche Aerospaziali, Università Ca’ Foscari Venezia, Fondazione Eni Enrico Mattei, Università di Sassari, Università della Tuscia, Università degli Studi del Sannio), focusing on climate change issues and applications for environmental management. The mission of CMCC is also to encourage and foster collaboration among universities, national and international research institutions, local institutions and industrial sectors. CMCC represents, at the national and international scale, an institutional point of reference for decision makers, public institutions, as well as private and public companies seeking technical-scientific support. CMCC brings together highly qualified experts from different climate research areas in a single unique institution.

The following eight research Divisions work together in an interdisciplinary manner: ASC (Advanced Scientific Computing), CSP (Climate Simulations and Predictions), ECIP (Economic Analysis of Impact and Policy), IAFES (Impacts on Agriculture, Forests and Ecosystems Services), ODA (Ocean modeling and Data Assimilation), OPA (Ocean Predictions and Applications), RAS (Risk Assesment and Adaptation Strategies), REMHI (Regional Models and Hydrogeological Impacts).

Moreover, CMCC hosts the Supercomputing Centre in Lecce (180 Tflops of computing power, 1.2 PB of on-line storage and 3 PB of archiving capacity).

The Advanced Scientific Computing (ASC) Division of CMCC carries out research and development activities on computational sciences applied to climate change. In particular, the Division works both on the scalability, optimization and parallelization of numerical models for climate change simulations and on the design and implementation of open source solutions addressing efficient access, analysis, and mining of large volumes of scientific data in the climate change domain. In this regard, CMCC provides a big data analytics framework (Ophidia) targeting parallel data analysis on large volumes of scientific/multidimensional data (e.g. datasets of the order of multiple terabytes). The Ocean modeling and Data Assimilation (ODA) Division focuses on the development of numerical models, methods of data assimilation, the production of reanalyses and datasets for global marine forecasts, and the study of the interactions between the physical and biogeochemical processes of oceans and the cryosphere in climate variability. The Climate Simulations and Predictions (CSP) Division deals with the development of models of the Earth system, the production of climate predictions and the realization of projections of climate change on scales which range from seasonal to centennial.

CMCC is a member of the NEMO Consortium, contributing to its System Team, whose main goals include the optimization of NEMO on the computers available within the Consortium.

Moreover, CMCC participates in the NEMO HPC working group.

CMCC is also a member of the European Network for Earth System Modelling (ENES) and a partner of the Earth System Grid Federation (ESGF), providing access to 100 TB of CMIP5 data through its data node deployed at the CMCC Supercomputing Centre. Finally, CMCC is a partner in several EU FP7/H2020 projects and other national and international projects, working on the development of high resolution global and regional climate models, their parallel optimization on many-core clusters, and the development of efficient solutions for data management.

Role in the project

CMCC’s contribution to ESiWACE is dedicated to WP1, WP2, WP3 and WP4.

In particular, CMCC will participate in the following activities: enhancing community capacity in HPC; performance analysis and inter-comparisons, with the definition of standard efficiency metrics for parallel performance analysis and the benchmarking of I/O servers and coupling technologies; efficiency enhancement of models and tools through model optimisation; ESM scheduling; and the implementation of a new storage layout for Earth system data, with the investigation of suitable memory and storage back-ends able to support horizontally scalable management of multidimensional scientific data.

Names of the colleagues involved

Dr. Antonio Navarra, Prof. Giovanni Aloisio, Dr. Sandro Fiore, Dr. Silvia Mocavero, Dr. Simona Masina, Dr. Silvio Gualdi

Relevant infrastructure and services available for climate & weather

Infrastructure: The HPC infrastructure managed at the CMCC Supercomputing Centre is composed of a 960-core IBM Power6 cluster (peak performance 18 Tflops) and an 8,000-core Intel Xeon Sandy Bridge cluster (peak performance 160 Tflops). Part of this infrastructure will be used for running data management/analysis services and training activities.

Datasets: CMCC publishes about 100 TB of climate simulation datasets from CMCC models in the CMIP5 federated data archive.

Software: CMCC provides the Ophidia software, a cross-domain big data analytics framework for the analysis of scientific, multi-dimensional datasets. This framework exploits a declarative, server side approach with parallel data analytics operators.

Website

http://www.cmcc.it/

DWD

Deutscher Wetterdienst (DE)

The Deutscher Wetterdienst (DWD), founded in 1952, is, as the National Meteorological Service of the Federal Republic of Germany, responsible for providing services for the protection of life and property in the form of weather and climate information. This is the core task of the DWD and includes the meteorological safeguarding of aviation and marine shipping and the warning of meteorological events that could endanger public safety and order. The DWD, however, also has other important tasks such as the provision of services to Federal and Regional governmental authorities and the institutions administering justice, as well as the fulfilment of international commitments entered into by the Federal Republic of Germany. The DWD thus co-ordinates the meteorological interests of Germany on a national level in close agreement with the Federal Government and represents the Government in intergovernmental and international organisations such as the World Meteorological Organization (WMO). Currently DWD has a total staff of about 2300 employees at more than 130 locations all over Germany. DWD’s spectrum of activity is very wide and comprises:

  • Weather observation and forecasting around the clock,
  • Climate Monitoring and modelling at local, regional and global scale,
  • Development of precautionary measures to avoid weather-related disasters and to provide support for disaster control
  • Advice and information on meteorology and climatology to customers,
  • National and international co-operation in meteorological and climatological activities,
  • Outlooks on possible future climatic conditions at local, regional and global scale,
  • Research and development.

Role in the project

Participation in workshop organizations and benchmarks.

Names of the colleagues involved

Dr. Florian Prill

Relevant infrastructure and services available for climate & weather

A redundantly installed HPC system, consisting of a Cray XC40 and a Megware Miriquid Linux Cluster with joint batch system and global filesystems, is used for the computationally intensive numerical weather prediction and modeling as well as for data processing.

The Cray XC40 systems (peak performance: 2x560.3 TeraFlops) are provided for the time critical weather forecast and non-time critical meteorological research and development, respectively. The main applications employed on these systems are the massively parallel forecast models COSMO (local and regional area model, ensemble prediction) and ICON (global model) based on Fortran90 and MPI/OpenMP. The Megware MiriQuid systems are available for the supervision of the complete numerical weather prediction operations and for common software development tasks.

All systems have access to 3,990 TiB of global file system disk space.

Website

www.dwd.de

Seagate

Seagate Systems UK Limited (UK)

Seagate is the world’s leading provider of data storage devices, equipment and services. The organisation is a worldwide multi-national registered in Ireland (Seagate Technology plc) with more than 56,000 employees; the division of the organisation responsible for this project is Seagate Systems UK Ltd. Seagate operates two primary divisions within its corporate operations: Seagate Technology develops and produces data storage devices, including disk drives, solid state drives and solid state storage, for use in applications from consumer to extreme-performance HPC; a large facility is located in Northern Ireland. The newer division, Seagate Systems, was created following the acquisition of Xyratex Technology in April 2014, combining this organisation with EVault, Dot Hill (more recently) and other internal Seagate systems activities to create a high-capability storage systems supply organisation.

A key product line acquired from Xyratex and continuing with Seagate is the ClusterStor range of products; these are fully engineered data storage systems with all hardware, file system software and system management provided. Systems are provided through our OEM or business partnerships. These systems support some of the world’s most powerful supercomputers.

The systems are installed or planned at a number of sites in Europe, including the UK Met Office, EPCC, DKRZ and ECMWF, with capacities of up to 45 petabytes and more than 1.4 TB/s performance in future deployments.

The Seagate Systems (ex-Xyratex) group has around 500 engineers employed in creating hardware and supporting software for enterprise and high performance computing applications. Seagate owns the Lustre trademark, and several of its engineers were involved in the original Lustre architecture and design. Within Seagate Systems (UK), the Emerging Technology Group manages collaborative research activities within Europe and will work in concert with development engineering groups based in the UK.

Role in the project

Seagate is happy to take part in ESiWACE as we see significant opportunity for business growth based around its success.

Seagate’s skills have been harnessed to create a next-generation object storage technology with capabilities well beyond any similar solution on the market. Seagate will provide instances of this storage technology and develop native support software that enables the NetCDF and GRIB data formats to be efficiently stored and accessed, providing multiple ‘views’ of the stored data. Seagate will also contribute its deep skills and knowledge of current and future data storage technologies to assist the community’s study of optimized data management.

Seagate Systems is a key supplier of data storage systems (in partnership with original equipment manufacturers), with installations of its equipment at a number of partner sites within this proposal. We are keen to work much more closely with these partners, understanding the user needs and specific opportunities to create or tune systems to maximize the effectiveness of our systems in these user environments.

Names of the colleagues involved

Andy Nimmo (male): Andy is a Principal Engineer in Systems Design and Systems Integration for Seagate. Andy holds a BEng (Hons) in Software Engineering and joined Seagate from Adaptive Computing in January 2014. He has 10 years’ experience in both the private and public sides of the HPC sector and has extensive experience from systems administrator level all the way up to system architecture, consultancy and security. After initially working as a QA Engineer focusing on networking and kernel comms on the ASCI Q project in 2003, he spent some time as a senior software engineer before moving to system management and workload scheduling back in the HPC space. Since joining Seagate, Andy has been chiefly involved with the next-generation High Availability project, but more recently has been put in charge of systems integration for Seagate's next-gen systems product and is heavily involved with both the scoping and the architecture of many aspects of this project.

Dr. Sai Narasimhamurthy (male): Sai is currently Staff Engineer, Seagate Research (formerly Lead Researcher, Emerging Tech, Xyratex), working on research and development for next-generation storage systems (2010-). He has also actively led and contributed to many European HPC and Cloud research initiatives on behalf of Xyratex (2010-). Previously (2005-2009), Sai was CTO and co-founder at 4Blox, Inc., a venture-capital-backed storage infrastructure software company in California addressing IP SAN (Storage Area Network) performance issues as a software-only solution. During the course of his doctoral dissertation at Arizona State University (2001-2005), Sai worked on IP SAN protocol issues from the early days of iSCSI (2001). Sai also worked with Intel R&D and was a contributing participant in the first stages of the RDMA consortium (put together by IBM, Cisco and Intel) for IP storage and 10GbE (2002). Earlier in his career, Sai worked as a Systems Engineer with Nortel Networks through Wipro, India, focusing on broadband networking solutions (2000-2001).

Malcolm Muggeridge (male): Malcolm is Senior Director of Engineering responsible for collaborative research at Seagate Systems UK. He joined Seagate through its acquisition of Xyratex in 2014 and was with Xyratex at its creation as a management buyout from IBM in 1994.

Malcolm has more than 38 years’ experience through his employment with IBM and Xyratex in the technology, manufacturing, quality and reliability of disk drives and networked data storage systems, and in recent years in HPC data storage, architecting and managing designs and new technologies across many products. More recently he has been focused on strategic innovation, business development, and research & technology. He is a steering board member of ETP4HPC, defining research objectives for the future of HPC within Europe, and is active in the partnership board of the cPPP on HPC. He is a member of the UK eInfrastructure board with a special interest in HPC. Malcolm has a B.Eng degree in Electronics from Liverpool University.

Dr Nikita Danilov (male): Nikita Danilov is a Consultant Software Architect at Seagate. His work on storage started in 2001, when he joined Namesys to develop the reiserfs file system for Linux. From 2004 he worked on Lustre at ClusterFS, later acquired by Sun. In 2009 he followed the original Lustre architect, Peter Braam, to the latter's new company ClusterStor to design and implement an exascale storage system; this technology was acquired by Xyratex and forms the basis of the NEXT system. He received a PhD in mathematical cybernetics from the Moscow Institute of Physics and Technology.

Giuseppe Congiu (male): Giuseppe Congiu is a Research Engineer (formerly Research Software Engineer at Xyratex) working for Seagate on collaborative European projects. Giuseppe has worked on the Dynamic Exascale Entry Platform – Extended Reach (DEEP-ER) project since 2013, studying and developing parallel I/O solutions for the DEEP-ER I/O stack. He joined Xyratex in 2011 as a Marie Curie ITN fellow. Previously, he worked with IBM and CRS4 (2009-2010) on the development of a computer diagnostic tool for medical image analysis and classification. Giuseppe has a BSc and an MSc in Electrical and Electronic Engineering from the University of Cagliari (IT).

Relevant infrastructure and services available for climate & weather

Seagate Systems has within its development operations some medium-scale storage systems linked to small-scale computational capabilities for the evaluation and testing of new storage hardware and software. For this project this facility will be utilised to explore the characteristics of I/O with new storage techniques.

Website

http://www.seagate.com

BULL/ATOS

BULL/ATOS (FR)

About Bull, Atos technologies for the digital transformation: Bull is the Atos brand for its technology products and software, which are today distributed in over 50 countries worldwide. With a rich heritage of over 80 years of technological innovation, 2000 patents and a 700 strong R&D team supported by the Atos Scientific Community, it offers products and value-added software to assist clients in their digital transformation, specifically in the areas of Big Data and Cybersecurity. 

Bull is the European leader in HPC and its products include bullx, the energy-efficient supercomputer; bullion, one of the most powerful x86 servers in the world, developed to meet the challenges of Big Data; Evidian, the software security solution for identity and access management; Trustway, the hardware security module; and Hoox, the ultra-secure smartphone. Bull is part of Atos.

About Atos: Atos SE (Societas Europaea) is a leader in digital services with revenue of €10 billion and 86,000 employees in 66 countries. Serving a global client base, the Group provides Consulting & Systems Integration services, Managed Services & BPO, Cloud operations, Big Data & Security solutions, as well as transactional services through Worldline, the European leader in the payments and transactional services industry. With its deep technology expertise and industry knowledge, the Group works with clients across different business sectors: Defence, Financial Services, Health, Manufacturing, Media & Utilities, Public Sector, Retail, Telecommunications and Transportation. Atos is focused on business technology that powers progress and helps organizations to create their firm of the future. The Group is the Worldwide Information Technology Partner for the Olympic & Paralympic Games and is listed on the Euronext Paris market. Atos operates under the brands Atos, Bull, Canopy, Worldline, Atos Consulting and Atos Worldgrid.

Role in the project

BULL will take part in WP1 and WP2. The people involved in this project are located in the CEPP (Center for Excellence in Parallel Programming). This center is based in Grenoble and its activity consists in porting, profiling and optimizing applications for their efficient use of parallel computers. Their everyday work requires deep expertise in computer architectures, the trends in their evolution, and their impact on software. Today, the constraints of the hardware are no longer transparent to the software: to benefit from upcoming architectures, the software has to be studied in depth and modified, and co-design, without being strictly mandatory, becomes highly necessary. Besides their strong skills in parallel programming, most of the CEPP experts hold a Ph.D. in a scientific domain: molecular dynamics, chemistry, oceanography, fluid dynamics, astrophysics, … Thanks to these two pillars (HPC and science), the CEPP experts are able to understand both the behavior of an application and the goals and needs of the science. The experts involved in this project will be selected on the basis of these two backgrounds. Experts with an oceanography background will bring the most direct benefit, but others may have specific knowledge that is needed (co-processors, IO, network, …). Thus the work will not be implemented by a single expert, but probably by several, according to the needs; an overall effort envelope can be agreed, and the best-suited expert will be assigned in each case.

Atos is focused on business technology that powers progress and helps organizations to create their firm of the future. The Group is the Worldwide Information Technology Partner for the Olympic & Paralympic Games and is listed on the Euronext Paris market. Atos operates under the brands Atos, Bull, Canopy, Worldline, Atos Consulting and Atos Worldgrid.

Names of the colleagues involved

Dr Xavier Vigourou, Dr Cyril Mazauric, Franck Vigilan, Enguerrand Petit

Relevant infrastructure and services available for climate & weather

Different supercomputers are part of the CEPP; they are selected according to the different needs: reproducibility, ability to be modified. In the CHANCE context, the experiments will be done on targeted hardware provided by the project. The CEPP is accustomed to integrating, modifying and giving access to such hardware, and can host it within a reliable infrastructure (login nodes, storage, …).

For instance, there are pure CPU nodes as well as nodes with the latest GPUs and accelerators. In terms of interconnect, storage and software stack, configurations can be built with a great deal of flexibility. This gives the project the ability to rely on a flexible infrastructure administered by professional system administrators.

The size of the machines evolves continuously, but there are generally around one hundred CEPP nodes up and running.

Website

http://www.atos.net

http://www.bull.com

UKRI

UK Research and Innovation (UK)

UKRI is the UK's national funding agency investing in science and research. It brings together seven research councils and other public bodies; one of the councils is the Science and Technology Facilities Council (STFC). STFC runs two national laboratories which provide supercomputing and data-archival facilities for users nationwide. In addition to these facilities, STFC employs around 150 computational scientists who work to develop and optimise the full range of scientific applications, as well as providing training in software engineering and optimisation.

Role in the project

In ESiWACE2, STFC will co-lead WP2 (Scalability) and will contribute to Work Packages 4, 6 and 7.

Names of the colleagues involved

Rupert Ford, Andrew Porter and Neil Massey.

Relevant infrastructure and services available for climate & weather

STFC's Hartree Centre operates "Scafell Pike", an Atos Sequana X1000 system consisting of 864 dual Intel Xeon nodes and 840 Xeon Phi nodes with a Mellanox EDR interconnect.

The STFC Rutherford Appleton Laboratory hosts the JASMIN "super-data-cluster" on behalf of UKRI's Natural Environment Research Council. This machine supports the data-analysis requirements of the UK and European climate and earth-system modelling community. It consists of over 44 petabytes of fast storage, co-located with computing facilities (11,500 cores) for data analysis, with dedicated light paths to various key facilities and institutes within the UK.

ETH Zurich / CSCS

Eidgenössische Technische Hochschule Zürich with its Supercomputing Centre (Centro Svizzero di Calcolo Scientifico), CH

Founded in 1991, CSCS, the Swiss National Supercomputing Centre, develops and provides the key supercomputing capabilities required to solve important problems for science and society. The centre enables world-class research with a scientific user lab that is available to domestic and international researchers through a transparent, peer-reviewed allocation process. CSCS's resources are open to academia, and are also available to users from industry and the business sector. The centre is operated by ETH Zurich and is located in Lugano with additional offices in Zurich.

Role in the project

ETH Zurich / CSCS is involved in WP2, WP3 and WP6, providing expertise and knowledge transfer in container technology. ETH Zurich / CSCS will assist other ESiWACE2 partners to 'containerise' their models, in order to allow seamless porting to a wide variety of platforms. To this end, ETH Zurich / CSCS will co-organize educational events: the WP6 training on Docker containerisation, to first get developers up to speed on containers, and the WP2 container hackathon, in which teams will containerise their models using the ETH Zurich / CSCS computing infrastructure. In addition, ETH Zurich / CSCS will organize a course on advanced software-engineering skills using C++, specifically targeted at HPC.
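As a purely illustrative sketch (our assumption, not an ESiWACE2 deliverable), the snippet below shows how a model that has been containerised in this way could be launched programmatically with the Docker SDK for Python; the image name esiwace/toy-model and the model command are hypothetical placeholders.

# Minimal sketch: running a hypothetical containerised model with the
# Docker SDK for Python (the "docker" package). Image name and command
# are placeholders, not actual ESiWACE2 artefacts.
import docker

client = docker.from_env()  # connect to the local Docker daemon

# Run the container to completion and collect its log output;
# remove=True deletes the container once the run has finished.
logs = client.containers.run(
    image="esiwace/toy-model:latest",           # hypothetical image
    command="./run_model --config config.nml",  # hypothetical command
    remove=True,
)
print(logs.decode())

Because the model and its entire software stack travel inside the image, the same run can be repeated unchanged on any system that provides a compatible container runtime, which is the portability benefit the containerisation training and hackathon aim at.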

Names of the colleagues involved

Dr. Lucas Benedicic, Dr. William Sawyer, Katarzyna Pawlikowska, Fourth Person TBD

Relevant infrastructure and services available for climate & weather

The centre operates the very latest supercomputers and works with the world’s leading computing centers and hardware manufacturers. This enables the centre to be the driving force behind innovation in computational research in Switzerland: the very latest computer architecture helps to ensure that users’ codes run quickly so they can focus more on their scientific results.

Among these computing resources, Piz Daint, named after a prominent peak in the Grisons that overlooks the Fuorn pass, is the flagship system of the national HPC service. It is a hybrid Cray XC40/XC50 system and contains 5320 nodes, each with an Intel Xeon E5-2690 processor and an NVIDIA P100 GPU, as well as 1813 nodes with two Intel Xeon E5-2695 sockets each. Piz Daint is a general-purpose scientific platform, but also serves the climate and numerical weather prediction communities for research purposes.

The two Cray CS-Storm cabinets for MeteoSwiss numerical weather prediction are named "Kesch" and "Es-cha", the names in German (Piz Kesch) and Romansh (Piz d'Es-cha) of a peak in the Albula Alps of the Rhaetian Alps in Switzerland. The two cabinets at ETH Zurich / CSCS are tightly packed: each consists of 12 hybrid computing nodes, for a total of 96 graphics cards or 192 graphics processors (GPUs) and 24 conventional CPUs. Kesch/Es-cha is reserved entirely for MeteoSwiss production forecasts, as well as its internal model development.

In addition, ETH Zurich / CSCS provides computational test platforms for emerging technologies (ARM, Intel Xeon Phi) for advanced development. The climate and weather communities also make use of these platforms.

Website

www.cscs.ch

UniMan

University of Manchester

The University of Manchester is one of the UK's top research-led universities and can lay claim to 25 Nobel Prize winners amongst its current and former staff and students, including 4 current Nobel laureates. The School of Computer Science plays important roles in the two EU FET flagship projects (Graphene and the Human Brain Project) and collaborates with the Square Kilometre Array (SKA) experiment headquartered at the university's Jodrell Bank Observatory.

The School has a long and distinguished research record, including the development of the first stored-program computer in the late '40s and the development of virtual memory, among a range of innovations, in the Atlas computer of the early '60s (the UK's first supercomputer). The Advanced Processor Technologies (APT) group continues this excellent record in high-performance, low-power computer systems, and encompasses a range of research activities addressing the formidable complexity of both software and hardware for the many-core systems of the future. The APT group brings together more than 60 researchers (faculty, fellows, PhD students) and is one of the few centres of excellence able to design complex silicon, as demonstrated by SpiNNaker, a massively parallel architecture with one million ARM cores. APT has strengthened the EU's competitive position with commercialization examples such as the ICL Goldrush Database server, Amulet processors (low-power architectures) bought by ARM Ltd., Transitive Corporation (virtualization and binary translation) bought by IBM, and Silistix Ltd (networks-on-chip).

Role in the project

In ESiWACE2, UNIMAN contributes expertise in the development and optimisation of weather and climate applications on high-performance systems, with contributions to tasks on Domain Specific Languages and technology watch in WP2 and on post-processing and analytics in WP5.

Names of the colleagues involved

Dr. Graham Riley, Dr. Mike Ashworth

Website

https://www.manchester.ac.uk

NLeSC

Netherlands eScience Center

The Netherlands eScience Center is the Dutch national center of excellence for the development and application of research software to advance academic research. The eScience Research Engineers at the eScience Center work together with researchers in academia, enabling them to address compute-intensive and data-driven problems within their research. The eScience Research Engineers are researchers who typically hold a PhD and have expertise in state-of-the-art computational technologies, as well as a keen interest in developing research software. The Netherlands eScience Center is involved in more than 90 collaborative research projects, spanning many different research disciplines and application domains, of which 11 current projects are in the climate sciences.

Role in the project

In ESiWACE2, the Netherlands eScience Center leads WP3 (HPC services), in particular coordinating the HPC services offered as part of WP3 to prepare the community for pre-exascale systems. The Netherlands eScience Center will also be involved in providing services on model portability and refactoring (WP3, T2).

Names of the colleagues involved

Dr. Ben van Werkhoven

Relevant infrastructure and services available for climate & weather

The Netherlands eScience Center does not own computing facilities. Instead, NLeSC is a participant in DAS-5, a six-cluster wide-area distributed system that employs a large variety of HPC accelerators, including GPUs and FPGAs. DAS-5 is a development platform for parallel and distributed applications and is not intended for long production runs.

In terms of services, the eScience Center is involved in many projects related to weather & climate; for a full list see: https://www.esciencecenter.nl/projects

Website

https://www.esciencecenter.nl

MeteoSwiss

Federal Office of Meteorology and Climatology

The Federal Office of Meteorology and Climatology MeteoSwiss is by federal mandate the national provider of weather and climate services in Switzerland. In this role, it serves the general public, authorities, research and industry. MeteoSwiss monitors the atmosphere over Switzerland and operates the corresponding networks; it issues weather forecasts, warns the authorities and the general public of dangerous weather conditions, and also monitors the Swiss climate. Its legal duties include the provision of climate information and climatological services for the benefit of the general public. MeteoSwiss provides generic and tailor-made datasets and services for customers, and conducts research on themes from weather and climate to high-performance computing. Weather and climate in the Alpine region is one of its core competences. MeteoSwiss hosts the national GCOS office, is the official representative of Switzerland in various international organisations (WMO, ECMWF, EUMETSAT, EUMETNET, etc.) and is a member of the Swiss Centre for Climate Systems Modelling (C2SM). In its research, MeteoSwiss collaborates with academia (e.g. ETH Zurich and the Swiss National Supercomputing Centre CSCS), with other governmental offices (e.g. hydrology) and with the private sector (e.g. reinsurance). In the framework of the Swiss HP2C and PASC initiatives, MeteoSwiss has led the adaptation of the regional weather and climate model (COSMO) to hybrid high-performance computing systems and has spearheaded the application of domain-specific languages in operational atmospheric codes. On the basis of its numerical weather forecasting models, MeteoSwiss has issued weather and climate forecasts to commercial customers for over ten years and also has profound experience in communicating such forecasts to the public and media.

Role in the project

MeteoSwiss will contribute to WP2 "Establish and watch new technologies for the community".

Names of the colleagues involved

Oliver Fuhrer, Xavier Lapillone, Carlos Osuna, Tobias Wicky

Relevant infrastructure and services available for climate & weather

  • Dedicated resources for R&D on the GPU-based machine "Kesch".
  • Easy access to computing and support resources provided by CSCS, namely to "Piz Daint".

Website

https://www.meteoswiss.admin.ch

DDN

Data Direct Networks France

For almost 20 years, DDN (http://www.ddn.com/) has designed, developed, deployed and optimized systems, software and solutions that enable enterprises, service providers, universities and government agencies to generate more value and accelerate time to insight from their data and information, on premise and in the cloud. DDN systems now power more than 70% of the TOP500 sites.

With around 2/3 of its workforce in R&D, DDN is a company leading innovation in the field of high-performance storage. DDN has recently acquired the HPC File System department from Intel and an enterprise-oriented virtualization company named Tintri.

These two moves illustrate the convergence between HPC and enterprise computing, the latest success being the launch of the DDN A3I, an NVIDIA DGX reference architecture solution to help businesses deploy AI infrastructure quickly, with predictable performance and easier manageability for IT.

Over the past 3 years, DDN Storage has established an R&D center in Meudon, close to Paris, France. This facility hosts the core development team of the Software Defined Storage group, with more than 20 engineers, a large fraction of them holding a Ph.D. DDN is active in the European R&D ecosystem, with participation in ETP4HPC and involvement in teaching HPC I/O at European universities (Versailles, Trieste, Evry). DDN Storage is an associated member of the Energy oriented Center of Excellence (EoCoE). DDN Storage owns several facilities with hardware prototypes in Europe, one in Paris and one in Düsseldorf. DDN Storage is an active open-source actor, as illustrated by its support of the open-source Lustre High Performance File System.

MO

Mercator Ocean International

Positioned between observation infrastructures and users, Mercator Ocean is a non-profit company employing a team of 60 people which ensures the continuity from research to operational oceanographic services. Mercator Océan has nine research and operational governmental shareholders: the Centre National de la Recherche Scientifique (CNRS), the Institut de Recherche pour le Développement (IRD), Météo-France (the French meteorological office), the Naval Hydrographic and Oceanographic Service (SHOM), the Institut Français de Recherche pour l'Exploitation de la MER (IFREMER), the Euro-Mediterranean Center on Climate Change (CMCC s.r.l.), the Met Office (the UK meteorological office), the Nansen Environmental and Remote Sensing Center (NERSC), and Puertos del Estado. Over the last 15 years, Mercator Ocean has played a leading role in operational oceanography at European and international level. After having successfully coordinated the European MyOcean projects since 2009, Mercator Ocean was officially appointed by the European Commission in November 2014 to define, manage, implement and operate the "Copernicus Marine Environment Monitoring Service" (CMEMS), part of the European Earth observation programme Copernicus, for the current multi-annual financial framework 2014-2020. Mercator Ocean also defines and manages the service evolution and user uptake of the CMEMS activities.

Role in the project

In ESiWACE, MOI is in charge, together with CMCC, of developing and testing the performance of a new global ocean configuration at very high resolution (1/36°).

Names of the colleagues involved

Romain Bourdallé-Badie, Clément Bricaud, Yann Drillet

Relevant infrastructure and services available for climate & weather

MOI has access to the following high-performance computing centres:

  • Météo-France supercomputers: 2 Bull clusters, each composed of more than 1800 nodes, with 40 processors and 64 GB of memory per node.

  • ECMWF supercomputer: a Cray XC30 with 3480 nodes of 24 processors each.

  • Internal computing facilities: a Bull computer with 66 nodes of 24 processors and 64 GB of memory per node.

Website

https://www.mercator-ocean.fr
