Dec 31, 2023 from 09:00 AM to 01:30 PM
(Europe/Berlin / UTC+1)






Why does a Deep Learning model make a certain prediction? Which part of the input is relevant? What would have to change about the input to change the model's output? Such questions seem difficult to answer, as ML models are black boxes whose inner workings are hard to inspect. However, with techniques from the field of Explainable AI, it is possible to analyze models and unveil human-interpretable explanations of their behavior. In this course, you will learn the concepts of Explainable AI, with a focus on Deep Learning.
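To give a flavor of the kind of question these techniques answer ("which part of the input matters?"), here is a minimal, self-contained sketch of a saliency-style sensitivity analysis. It is not course material: the model is a toy logistic-regression scorer with hypothetical weights, and the gradient is approximated by finite differences purely for illustration.

```python
import math

# Toy "model": logistic regression with fixed, hypothetical weights.
# The fourth weight is 0, so feature 3 should be irrelevant.
W = [2.0, -1.0, 0.5, 0.0]

def model(x):
    z = sum(wi * xi for wi, xi in zip(W, x))
    return 1.0 / (1.0 + math.exp(-z))  # predicted probability

x = [1.0, 1.0, 1.0, 1.0]
p = model(x)

# Saliency via finite differences: how sensitive is the output
# to a small perturbation of each input feature?
eps = 1e-6
saliency = []
for i in range(len(x)):
    x_pert = list(x)
    x_pert[i] += eps
    saliency.append(abs(model(x_pert) - p) / eps)

print(saliency)
```

For this toy model the ranking of the saliency values mirrors the magnitude of the weights: feature 0 is the most relevant, and feature 3 (weight 0) has essentially zero saliency. Real Explainable AI methods for deep networks (e.g. gradient-based attribution) follow the same idea with exact gradients through the full model.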

Contents level                 in hours   in %

Beginner's contents:           4.5 h      30 %
Intermediate contents:         10.5 h     70 %
Advanced contents:             0 h        0 %
Community-targeted contents:   0 h        0 %


We assume that participants are familiar with general concepts of machine learning and/or deep learning, such as widely used models, losses, regularization, and basic model training and testing. Many excellent self-training resources are available.

Hands-on experience with an ML/DL framework is required; prior experience with HPC systems is helpful.

Target audience:

Master students, PhD students and Postdocs with interest in Machine Learning

Learning outcome:

After the course, you will be familiar with elementary concepts of Explainable AI and will have gained first hands-on experience in applying them.


This course is given in English.


5 half days


The course is expected to take place during one week in November 2023, 09:00 - 13:00 each day.



Number of Participants:

maximum 40


Dr. Sabrina Benassou