When

Nov 09, 2023 to Nov 10, 2023
(America/Denver / UTC-7)

Where

1850 Table Mesa Dr, Boulder

WORKSHOP

Model simulations are essential tools for understanding weather and climate. As we adapt to our changing climate, simulation codes inform both our understanding and policy decisions. These complex software artifacts are often the result of multiple decades of development, and they remain under near-constant revision as scientific capabilities advance and high-performance computing (HPC) technologies evolve.

Given the societal importance of these codes, maintaining confidence and preserving code quality and reliability is critical. Yet scientific computing applications are often developed without the use of extensive software verification tools and techniques. Instead, development practices are typically dominated by short-term concerns about performance, resources, and project timelines. Technical challenges in running and evaluating climate and weather models further complicate code verification efforts. Given the scale of these models, a thorough correctness evaluation may be prohibitively expensive. It is also customary to require regression tests to yield bitwise identical results, a requirement that often goes unmet due to the chaotic nature of climate and weather models and the large variety of hardware/software environments in which they run. When bitwise identical results cannot be obtained, field experts must evaluate model results in a time-consuming and subjective manner.
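As a concrete illustration of this trade-off, the sketch below contrasts a strict bitwise regression check with a simple ensemble-based statistical alternative. It is a minimal example in Python (NumPy/SciPy), not a tool endorsed by the workshop: it assumes model output fields are available as NumPy arrays, and the function names and the use of global means as the summary statistic are illustrative choices.

    import numpy as np
    from scipy import stats

    def bitwise_identical(baseline, candidate):
        # Strict regression check: shape, dtype, and every byte must match.
        return (baseline.shape == candidate.shape
                and baseline.dtype == candidate.dtype
                and baseline.tobytes() == candidate.tobytes())

    def statistically_consistent(baseline_ensemble, candidate_ensemble, alpha=0.05):
        # Looser check for when bitwise identity is unattainable: compare the
        # distributions of a summary statistic (here, the global mean of each
        # ensemble member) with a two-sample Kolmogorov-Smirnov test.
        base_means = [member.mean() for member in baseline_ensemble]
        cand_means = [member.mean() for member in candidate_ensemble]
        _, p_value = stats.ks_2samp(base_means, cand_means)
        return p_value >= alpha  # cannot reject "same distribution"

    # Synthetic usage: two 10-member ensembles drawn from the same distribution
    # should fail the bitwise check but (typically) pass the statistical one.
    rng = np.random.default_rng(0)
    ens_a = [rng.normal(size=(64, 128)) for _ in range(10)]
    ens_b = [rng.normal(size=(64, 128)) for _ in range(10)]
    print(bitwise_identical(ens_a[0], ens_b[0]))
    print(statistically_consistent(ens_a, ens_b))

Production consistency-testing frameworks use far richer statistics and careful ensemble design, but the same pass/fail logic applies when bitwise identity cannot be required.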

In short, the climate and weather modeling communities need practical and feasible means of ensuring correctness and reproducibility. For example, we are interested in ways to easily assess whether changes to a model code produce output that is systematically different or introduce artifacts that could influence scientific conclusions. Such changes may include differences in the hardware or software stack, replacing parts of the model with machine learning (ML) routines, or applying data compression to the output data. In this workshop, we aim to provide a venue for climate and weather modelers, HPC community members, and industry partners to discuss challenges, opportunities, and recent advances in ensuring software correctness and reproducibility.

SCOPE

Topics of interest include, but are not limited to:

  • Tools and approaches for software testing, debugging, quality assurance, and continuous integration.
  • Statistical and ensemble-based approaches for evaluating model consistency and software correctness.
  • Software design approaches and development practices for streamlining correctness and reproducibility efforts.
  • Formal methods, abstraction, and logical proof techniques for rigorous verification.
  • Verifying and validating large-scale applications running on HPC clusters, cloud computing systems, heterogeneous systems, GPUs, etc.
  • Other software correctness and reproducibility approaches for facilitating verification and validation.

Submissions may include technical results, approaches, experiences, and opinions involving one or more of the above topics applied to:

  • Climate and weather simulation codes such as drivers, couplers, frameworks, and model components.
  • External libraries and packages used in climate and weather simulation applications.
  • Artificial intelligence techniques, such as machine learning and deep learning, applied to climate and weather software.
  • Diagnostics, post-processing, visualization tools, and libraries.
  • Packaging, environment management, version control, and porting techniques for facilitating reproducibility.
  • Other software development tools and approaches extensively used within the climate and weather simulation context.

REGISTRATION

  • In-person fee: $50; virtual fee: $25.

DATES

  • Abstract submissions due: August 1, 2023.
  • Notification of acceptance: August 31, 2023.
  • Registration deadline: October 20, 2023 (in person); November 3, 2023 (virtual).
  • Workshop date: November 9-10, 2023.