New Emmy Noether group investigates explainable AI for image and video analysis

Extending XAI to dense visual tasks

2024/03/04

The German Research Foundation has accepted Dr. Simone Schaub-Meyer into its Emmy Noether Programme. Together with her new junior research group, Dr. Schaub-Meyer aims to research and develop methods that increase the understanding of widely used artificial intelligence (AI) models in image and video analysis and improve their robustness. The funding for the first three years amounts to around 1.1 million euros and includes funds for two doctoral positions as well as eight high-performance graphics processing units.

Dr. Simone Schaub-Meyer's research focuses on the development of efficient, robust and comprehensible methods and algorithms for image and video analysis.

New methods in artificial intelligence, especially deep learning, have brought tremendous progress to the field of computer vision. Computer vision makes it possible, for instance, to recognise objects in driver assistance systems or to detect diseased tissue in medical images. In such safety-critical and legally relevant areas of application, it is particularly important that the processes and models are trustworthy. As a rule, the performance of a method is measured and compared on defined benchmark data sets. But what happens when rare scenarios occur, or scenarios that deviate from this test data? In most cases, the behaviour of the AI then becomes difficult to understand, if it can be understood at all, and is almost impossible to predict.

Dr. Simone Schaub-Meyer wants to change that with her research in the field of explainable artificial intelligence (XAI). She investigates the behaviour and decision-making processes of artificial neural networks in order to draw conclusions about their robustness and generalisability. Her focus is on what are known as dense visual tasks, in which every pixel in an image is classified – e.g. as “car”, “street”, “pavement” or “person”. In semantic segmentation, for instance, the algorithm not only registers whether there is a car in the picture, but also where it is.
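
To make “dense visual task” concrete: a semantic segmentation model outputs one class score per pixel rather than a single label for the whole image. The following sketch runs a pre-trained segmentation model from torchvision and takes the per-pixel argmax; the model choice, the input file name and the class list belonging to the pre-trained weights are illustrative assumptions, not part of the XIVA project.

```python
# Illustrative sketch of a dense visual task (semantic segmentation):
# every pixel receives a class label. Not code from the project itself.
import torch
from torchvision.models.segmentation import fcn_resnet50, FCN_ResNet50_Weights
from PIL import Image

weights = FCN_ResNet50_Weights.DEFAULT              # pre-trained weights shipped with torchvision
model = fcn_resnet50(weights=weights).eval()
preprocess = weights.transforms()                   # resizing + normalisation matching the weights

image = Image.open("street_scene.jpg").convert("RGB")    # hypothetical input image
batch = preprocess(image).unsqueeze(0)              # shape: (1, 3, H, W)

with torch.no_grad():
    logits = model(batch)["out"]                    # dense output, shape: (1, num_classes, H, W)

# One class index per pixel -- this dense output is what explanation methods
# for segmentation have to account for, instead of a single image-level label.
per_pixel_labels = logits.argmax(dim=1)             # shape: (1, H, W)
print(per_pixel_labels.shape, weights.meta["categories"][:5])
```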

The extension of XAI to dense visual tasks is an essential and necessary step in increasing the understanding of widely used models in image and video analysis and improving their robustness.

Dr. Simone Schaub-Meyer

Developing interpretable explanation methods

The FunnyBirds framework developed by Schaub-Meyer can be used to evaluate explainable AI methods. By removing individual parts of a bird and measuring the change in the model's prediction, the researchers can approximate the ground-truth importance of the individual parts. In this example, the beak is more important than the feet.
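
The part-removal protocol described in the caption can be sketched generically: delete one part from the input, re-run the model, and record the drop in the score for the true class. The snippet below is an illustrative reconstruction of that idea under assumed placeholders (`model`, `remove_part`, `target`); it is not the actual FunnyBirds code.

```python
# Generic part-deletion importance estimate (illustrative, not FunnyBirds code).
# Assumptions: `remove_part(image, part)` returns the image with one bird part
# rendered absent; `model` maps a (1, 3, H, W) tensor to class logits; `target`
# is the index of the bird's true class.
import torch

def part_importance(model, image, parts, remove_part, target):
    """Approximate how much each part contributes to the model's confidence."""
    model.eval()
    importance = {}
    with torch.no_grad():
        full_score = model(image.unsqueeze(0)).softmax(dim=1)[0, target]
        for part in parts:
            occluded = remove_part(image, part)               # image without this part
            score = model(occluded.unsqueeze(0)).softmax(dim=1)[0, target]
            importance[part] = (full_score - score).item()    # drop in confidence
    return importance

# A large drop for "beak" and a small drop for "feet" would reproduce the
# caption's observation that the beak matters more for the classification.
```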

In her project XIVA – eXplainable Image and Video Analysis – funded by the Emmy Noether Programme, Dr. Schaub-Meyer will apply XAI methods specifically to image and video analysis. Her aim is to develop interpretable explanation methods for spatial and spatio-temporal visual tasks such as image and video segmentation and motion estimation. The insights gained should, in turn, help to improve the models themselves and their robustness.

To this end, she will first analyse the predictive performance of existing models using novel measures that can be interpreted by humans. The models’ global weaknesses and strengths should become visible in direct comparison.

Another aim is the development of local attribution methods that can process and visualise spatial and spatio-temporal decision-making processes.
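
As a rough illustration of what a local attribution for a dense prediction involves, the sketch below computes a plain gradient-based saliency map for a single output pixel of a segmentation network. It is a generic vanilla-gradient example under assumed placeholders (a `model` that maps an image tensor to per-pixel class logits), not one of the attribution methods to be developed in the project; for video, the same idea would extend over an additional time dimension.

```python
# Generic gradient-based attribution for one pixel of a dense prediction
# (illustrative sketch; not a method from the XIVA project).
import torch

def pixel_saliency(model, image, y, x, class_idx):
    """Heat map showing which input pixels influence the prediction at (y, x)."""
    inp = image.detach().clone().unsqueeze(0).requires_grad_(True)   # (1, 3, H, W)
    logits = model(inp)                                # dense output: (1, C, H, W)
    logits[0, class_idx, y, x].backward()              # scalar: one pixel, one class
    # Aggregate gradient magnitude over colour channels -> (H, W) saliency map.
    return inp.grad.abs().sum(dim=1)[0]
```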

Dr. Schaub-Meyer specifically focuses on self-interpreting models for dense prediction tasks. These are inherently better suited to providing explanations and increasing robustness.
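
One way to picture such a self-interpreting model is a prediction head whose per-pixel decision is simply the similarity of that pixel's features to learned class prototypes, so that the decision and its explanation coincide. The sketch below is a hypothetical illustration of this idea, not the project's actual architecture.

```python
# Hypothetical self-interpretable head for dense prediction (illustration only).
import torch
import torch.nn as nn

class PrototypeSegHead(nn.Module):
    """Per-pixel classification by cosine similarity to learned class prototypes.

    The similarity maps are the logits, so they double as the explanation:
    they show where in the image each class prototype responds.
    """
    def __init__(self, feat_dim, num_classes):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, features):                        # features: (B, feat_dim, H, W)
        f = nn.functional.normalize(features, dim=1)
        p = nn.functional.normalize(self.prototypes, dim=1)
        # Cosine similarity of every pixel feature to every class prototype.
        return torch.einsum("bchw,kc->bkhw", f, p)      # (B, num_classes, H, W)
```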

In the final step, she will evaluate the developed approaches with suitable novel data sets and benchmarks in order to assess their explainability and robustness.

“The Emmy Noether Programme will enable me to develop further personally and to expand my group. I am therefore looking for two more Ph.D. candidates starting this summer. I am looking forward to tackling these challenging but important research questions with my group in the inspiring environment of TU Darmstadt and the Hessian Centre for Artificial Intelligence.”

About the person

Simone Schaub-Meyer is an independent research group leader at the Technical University of Darmstadt and a member of the Hessian Centre for Artificial Intelligence – hessian.AI.

There, she also heads a DEPTH research group funded by HMWK as part of the cluster project The Third Wave of Artificial Intelligence (3AI). The focus of her research is on the development of efficient, robust and comprehensible methods and algorithms for image and video analysis. Before founding her own group, she worked as a post-doctoral researcher in Prof. Stefan Roth’s Visual Inference Lab.

Prior to that, she worked as a postdoctoral researcher in the field of augmented reality at the Media Technology Lab at ETH Zurich. She completed her doctorate in collaboration with Disney Research Zurich under the supervision of Prof. Dr. Markus Gross at ETH Zurich. In her dissertation, which was awarded the ETH Medal, she developed novel methods for motion estimation and video frame interpolation.

About the programme

With the Emmy Noether Programme, the German Research Foundation (DFG) enables particularly qualified early-career researchers to prepare for a professorship. Leading a research group independently at a university or research institution, together with the associated teaching duties, offers the opportunity to acquire and demonstrate the skills required for appointment to a professorship. The prerequisites include a doctorate completed with outstanding results and high-quality publications.