When

Jul 07, 2024 to Jul 12, 2024
(Europe/Berlin / UTC+2)

Where

Kobe, Japan

Details about the 2024 International HPC Summer School

Graduate students and postdoctoral scholars from institutions in Australia, Europe, Japan, and the United States are invited to apply for the 13th International High Performance Computing (HPC) Summer School, to be held July 7–12, 2024 in Kobe, Japan, hosted by the RIKEN Center for Computational Science. Applications to participate in the summer school will be accepted until 23:59 AoE (Anywhere on Earth) on January 31, 2024.

The summer school is sponsored by the RIKEN Center for Computational Science (R-CCS), the EuroHPC Joint Undertaking (EuroHPC JU), the Pawsey Supercomputing Research Centre (Pawsey), and the ACCESS program. Additional sponsors, who will conduct separate, internal selection processes, include EPCC (U.K.) and NICIS CHPC (South Africa). Note that some places for the 2024 school are still offered on a preliminary basis and will be confirmed subject to funding availability.

In a nutshell

The summer school will familiarize the best students in computational sciences with major state-of-the-art aspects of HPC and Big Data analytics across a variety of scientific disciplines, catalyze the formation of networks, provide advanced mentoring, facilitate international exchange, and open up further career options.

Leading computational scientists and HPC technologists from partner regions will offer instruction in parallel sessions on a variety of topics such as:

  • HPC and Big Data challenges in major scientific disciplines: You will receive short, high-level introductions to a variety of science areas, with a focus on HPC-related simulation approaches and algorithmic challenges in the respective fields. 
  • Shared-memory programming: Using OpenMP, you will learn how to exploit the multiple cores present in modern processors, along with related pitfalls and optimizations.
  • Distributed-memory programming: For those who already know the basics of programming with the Message Passing Interface (MPI), you will learn how to optimize performance based on how the MPI library works internally, as well as more advanced MPI functionality.
  • GPU programming: Building on the OpenMP techniques taught earlier, you will learn how to program graphics processing units (GPUs), which are an important enabler of modern scientific computing and machine learning.
  • Performance analysis and optimization on modern CPUs and GPUs: You will learn the basics of performance engineering, how to collect profiles and traces, and how to identify potential performance bottlenecks based on the collected profiles and traces.
  • Software engineering: You will learn state-of-the-art technical approaches and best practices in developing and maintaining scientific software.
  • Numerical libraries: You will learn how to take advantage of already-implemented algorithms in your code (see the sketch after this list).
  • Big Data analytics: You will learn how to use the powerful and popular Spark framework to analyse very large data sets and integrate this with machine learning techniques.
  • Deep learning: You will extend the machine learning techniques learned earlier to the leading edge with deep learning (also known as neural networks), using the standard TensorFlow framework.
  • Scientific visualization: You will learn how to use 3D visualization tools for large scientific data sets.
  • Canadian, European, Japanese, Australian, and U.S. HPC infrastructures: You will learn about resources available in your part of the world, as well as how to gain access to these resources.
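
As a taste of the numerical-libraries topic, here is a minimal C sketch (our own illustration, not course material, assuming a CBLAS implementation such as OpenBLAS is installed and linked with e.g. -lopenblas): rather than hand-coding a triple loop, the matrix product C = A * B is delegated to the tuned dgemm routine.

    #include <cblas.h>
    #include <stdio.h>

    int main(void) {
        /* Row-major 2x3 times 3x2 gives a 2x2 product. */
        double A[6] = {1, 2, 3,
                       4, 5, 6};
        double B[6] = {7,  8,
                       9, 10,
                      11, 12};
        double C[4] = {0};

        /* C = 1.0 * A*B + 0.0 * C; the leading dimensions (lda, ldb,
         * ldc) are the row-major row lengths, i.e. column counts. */
        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    2, 2, 3, 1.0, A, 3, B, 2, 0.0, C, 2);

        printf("%g %g\n%g %g\n", C[0], C[1], C[2], C[3]);  /* 58 64 / 139 154 */
        return 0;
    }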

The expense-paid program will benefit scholars from Australian, European, Japanese, and U.S. institutions who use advanced computing in their research. The ideal candidate will have many of the following qualities; however, this list is not meant to be a “checklist” that applicants must satisfy in full:

  • Familiarity with HPC: not necessarily an HPC expert, but a scholar who could benefit from incorporating advanced computing tools and methods into their existing computational work
  • A graduate student with a strong research plan, or a postdoctoral fellow in the early stages of their research career
  • Regular practice with, or interest in, parallel programming
  • Applicants from any research discipline are welcome, provided their research activities include computational work.
  • Please note the specific track prerequisites below.

Choose your track!

The first two days of the program comprise two tracks that run concurrently. You need to choose your preferred track in your application. Please find more information about each track below.

Whilst all applicants can pick Track 1, Track 2 is available only to applicants satisfying all prerequisites specified in the corresponding section below. The rest of the program comprises additional topics, including machine learning, big data analytics, scientific visualisation, performance analysis, software engineering and numerical libraries, that are available to all participants.

Track 1: An introduction to shared-memory parallelism and accelerator programming

This track focuses on single-node programming and provides techniques to parallelize codes over multiple CPUs, as well as to offload processing to GPUs. It also teaches techniques to tackle challenges commonly faced in shared-memory programming and GPU offloading, such as load balancing, protection against data races, and hiding GPU-offload latency. Track 1 offers a first experience with parallel programming and is therefore most valuable for students who are new to it.

This track will be taught using OpenMP, a technology that has been the major solution for shared-memory programming in HPC for the better part of the last three decades. In addition to being highly optimized, this directive-based approach lets you parallelize codes incrementally, which makes it widely popular.
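
To give a flavour of this incremental style, here is a minimal C sketch (our own illustration, not course material): adding a single directive parallelizes the serial loop below, and the reduction clause avoids a data race on the accumulator. Without an OpenMP flag (e.g. gcc -fopenmp) the pragmas are simply ignored and the code remains valid serial C; the second loop hints at how the same style extends to GPU offloading, assuming an offload-capable compiler.

    #include <stdio.h>

    int main(void) {
        const int n = 1000000;
        double sum = 0.0;

        /* With the pragma, the loop iterations are shared among a team
         * of threads and the partial sums are combined safely at the
         * end; without it, this is plain serial C. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < n; i++)
            sum += 1.0 / (i + 1.0);

        /* The same directive style extends to GPU offloading via the
         * `target` constructs (OpenMP 4.0+, offload-capable compiler). */
        double gpu_sum = 0.0;
        #pragma omp target teams distribute parallel for \
            map(tofrom: gpu_sum) reduction(+:gpu_sum)
        for (int i = 0; i < n; i++)
            gpu_sum += 1.0 / (i + 1.0);

        printf("CPU: %f  GPU: %f\n", sum, gpu_sum);
        return 0;
    }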

Prerequisites: this track has no prerequisites other than familiarity with a compiled programming language, as the teaching material is available in C and Fortran.

Track 2: Advanced distributed-memory programming

This track focuses on multi-node programming with the MPI library, leveraging the processing power, memory bandwidth, and filesystem I/O available across distributed-memory architectures. Track 2 is the opportunity for students with existing MPI experience to learn to write efficient, scalable programs for large HPC systems by understanding more about the internals of the MPI library, advanced use of collective operations, and MPI derived datatypes.
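
As an example of the last of these topics, here is a minimal C sketch (our own illustration, not course material) of an MPI derived datatype: an MPI_Type_vector describes one strided column of a row-major matrix, so the column can be sent as a single typed message instead of being packed by hand. Compile with mpicc and run with, e.g., mpirun -np 2.

    #include <mpi.h>
    #include <stdio.h>

    #define ROWS 4
    #define COLS 5

    int main(int argc, char **argv) {
        int rank;
        double a[ROWS][COLS];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* ROWS blocks of 1 double, each COLS doubles apart: one column
         * of the row-major matrix. */
        MPI_Datatype column;
        MPI_Type_vector(ROWS, 1, COLS, MPI_DOUBLE, &column);
        MPI_Type_commit(&column);

        if (rank == 0) {
            for (int i = 0; i < ROWS; i++)
                for (int j = 0; j < COLS; j++)
                    a[i][j] = i * COLS + j;
            /* Send column 2 of the matrix in a single message. */
            MPI_Send(&a[0][2], 1, column, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            double col[ROWS];  /* received as contiguous doubles */
            MPI_Recv(col, ROWS, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            for (int i = 0; i < ROWS; i++)
                printf("col[%d] = %g\n", i, col[i]);
        }

        MPI_Type_free(&column);
        MPI_Finalize();
        return 0;
    }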

Prerequisites: due to the more advanced nature of this track, participants are required to have existing knowledge of basic MPI programming. Participants must already be able to compile and run an MPI program, construct simple point-to-point communications, use basic collective operations such as broadcast and reduce, use non-blocking operations, and construct basic derived datatypes. As in Track 1, the teaching content in this track will be provided in C, C++, or Fortran.

Diversity

The IHPCSS is committed to diversity and inclusion in high performance computing.  We welcome applications from all students regardless of race, color, sex, religion, sexual orientation, gender identity, or disability status.

If you have any questions regarding your eligibility or how this program may benefit you or your research group, please do not hesitate to contact the individual associated with your region below. 

Health & Safety

The school is currently being organised as an in-person event (no remote participation) in order to create the best experience for all attendees. Selected participants are expected to abide by local COVID-19 rules and measures, if any, which will be posted on this website prior to the school.

Costs

School fees, meals and housing will be covered for all accepted applicants to the summer school. Reasonable flight costs will also be covered for those travelling to/from the school.