WhiteBox – explainable models for human and artificial intelligence

The LOEWE Research Cluster WhiteBox aims to develop methods at the intersection of Cognitive Science and AI to make human and artificial intelligence more understandable.

Project Introduction

Until a few years ago, intelligent systems such as robots and digital voice assistants had to be tailored to narrow, specific tasks and contexts; such systems needed to be programmed and fine-tuned by experts. Recent developments in artificial intelligence, however, have led to a paradigm shift: instead of explicitly representing knowledge about all information processing steps at the time of development, machines are endowed with the ability to learn. Machine learning makes it possible to leverage large amounts of data, with the hope that the learned patterns transfer to new situations. Groundbreaking performance has been achieved in recent years with deep neural networks, whose functionality is inspired by the structure of the human brain: a large number of interconnected artificial neurons, organized in layers, process input data at great computational cost. Although experts understand the inner workings of such systems, having designed the learning algorithms, they are often unable to explain or predict a system's intelligent behavior due to its complexity. Such systems end up as blackboxes, raising the question of how their decisions can be understood and trusted.

Our basic hypothesis is that explaining an artificial intelligence system may not be fundamentally different from the task of explaining intelligent, goal-directed behavior in humans. The behavior of a biological agent is likewise based on information processing by a large number of neurons in the brain and on acquired experience. Yet an explanation based on a complete wiring diagram of the brain and all its interactions with the environment may not be understandable. Instead, explanations of intelligent behavior need to reside at a computationally more abstract level: they need to be cognitive explanations, as developed in computational cognitive science. Thus, WhiteBox aims at transforming blackbox models into whitebox models through cognitive explanations that are interpretable and understandable.

Following our basic assumption, we will systematically develop and compare whitebox and blackbox models for artificial intelligence and human behavior. To quantify the differences between these models, we will not only develop novel blackbox and whitebox models but also methods for their quantitative and interpretable comparison. In particular, we will develop new methodologies to generate explanations automatically by means of AI. As an example, blackbox models comprise deep neural networks, whereas whitebox models can be probabilistic generative models with explicit, interpretable latent variables. Applying these techniques to intelligent, goal-directed human behavior will provide better computational explanations of human intelligence and allow human-level behavior to be transferred to machines.
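To make the blackbox/whitebox contrast concrete, here is a minimal, hypothetical sketch (not the project's actual methodology or code): a small neural network stands in for a blackbox model, a logistic regression with directly readable weights stands in for an interpretable whitebox model, and both are fitted to the same simulated choice data and compared quantitatively on held-out trials. The data, the choice of models, and the comparison metrics are illustrative assumptions only.

    # Hypothetical illustration: comparing a blackbox and a whitebox model
    # on the same simulated behavioral data (not the project's actual code).
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.neural_network import MLPClassifier
    from sklearn.metrics import log_loss
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Simulated two-alternative choice data: choices depend on two stimulus
    # features (e.g., reward difference and risk difference) via a noisy linear rule.
    n_trials = 2000
    X = rng.normal(size=(n_trials, 2))
    true_weights = np.array([2.0, -1.0])          # interpretable generative parameters
    p_choose = 1.0 / (1.0 + np.exp(-X @ true_weights))
    y = rng.binomial(1, p_choose)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Blackbox model: a small multilayer perceptron.
    blackbox = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
    blackbox.fit(X_train, y_train)

    # Whitebox stand-in: logistic regression whose weights can be read directly
    # as the decision maker's sensitivity to each feature.
    whitebox = LogisticRegression()
    whitebox.fit(X_train, y_train)

    # Quantitative comparison: held-out predictive log-loss and agreement.
    for name, model in [("blackbox", blackbox), ("whitebox", whitebox)]:
        ll = log_loss(y_test, model.predict_proba(X_test)[:, 1])
        print(f"{name}: held-out log-loss = {ll:.3f}")

    agreement = np.mean(blackbox.predict(X_test) == whitebox.predict(X_test))
    print(f"prediction agreement between models: {agreement:.2%}")
    print(f"whitebox interpretable weights: {whitebox.coef_.ravel().round(2)}")

In the project itself, the whitebox side would be a richer probabilistic generative model of behavior with explicit latent variables; the sketch only illustrates the general workflow of fitting both model classes to the same data and comparing them with a common, quantitative metric.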

News

Newsticker

January 2024 – In a publication in the renowned journal “Nature Human Behaviour”, WhiteBox researchers investigate the properties of behavioral economic theories learned automatically by artificial intelligence. The study highlights that cognitive science still cannot easily be automated by artificial intelligence, and that a careful combination of theoretical reasoning, machine learning, and data analysis is needed to understand and explain why human decisions are the way they are and why they deviate from the mathematical optimum. Read more (in German)

December 7, 2023 – In the ProLOEWE discussion format “Hessen's cutting-edge research in 45 minutes”, WhiteBox researchers discussed with interested members of the Hessian state parliament. The topic was: “AI – more human than expected?”. Read more

September 2023 – Joseph German joins the project – welcome to the team!

August 2023 – Claire Ott, Inga Ibs and Morteza Khosrotabar join the project – welcome to the team!

July 2023 – The WhiteBox team of researchers met in Kleinwalsertal from July 19 to 25 for the second physical retreat. The main topics were interdisciplinary scientific exchange, discussion of the current status of the work and planning of further work.

July 13, 2023 – WhiteBox hosted an inspiring invited talk by Michael Wibral, “Information theory for the age of neural networks”, and offered a tour of the Systems Neurophysiology Lab.

July 4, 2023 – Hessen schafft Wissen published a new animated video on project WhiteBox.

March 28, 2023 – Milestone 1 event, day 2: After a Research Data Management workshop, the project members met in person for the internal project evaluation and the first post-COVID project Steering Committee and Plenum.

March 27, 2023 – Milestone 1 event, day 1: WhiteBox and the Centre for Cognitive Science hosted the “Symposium on Explainability”. The results of the WhiteBox project so far were presented and different facets of explainability were discussed with experts and guests. Learn more

February 2, 2023 – The WhiteBox team met for a robot juggling seminar at the Intelligent Autonomous Systems robot lab.

January 2023 – Ute Korn now officially joins the project – once again, welcome to the team!

December 2022 – A symposium on “Explaining adaptive vision” was held on December 8, 2022, with an international line-up of speakers and with the support and participation of WhiteBox and its members. Learn more

October 2022 – ProLOEWE celebrates its birthday, WhiteBox congratulates!
On the occasion of ProLOEWE's 10th anniversary, a special edition of ProLOEWE News has been published. A feature on the WhiteBox project can be found on pages 26 and 27. Learn more

August 2022 – The WhiteBox team of researchers met in Dahn from August 8 to 12 for the first physical retreat. The main topics were interdisciplinary exchange, discussion of the current status of the work, and planning of further work, especially the development of new scientific synergies.

March 2022 – Meike Kietzmann joins the project – welcome to the team!

February 2022 – Asghar Mahmoudi Khomami joins the project – welcome to the team!

2021 – Ute Korn has been supporting the project as an associated researcher since the end of 2021 – welcome to the team!

December 2021 – Due to the COVID-19 pandemic, the HMWK (Hessian State Ministry of Higher Education, Research and the Arts) extended the WhiteBox project duration until December 31, 2025.

November 2021 – Rabea Turon & Sven Schultze join the project – welcome to the team!

October 2021 – Two new introductory videos about project WhiteBox were released in collaboration with ProLOEWE: Watch (1), Watch (2) (in German)


Project Details

  • Project: WhiteBox – explainable models for human and artificial intelligence (Erklärbare Modelle für menschliche und künstliche Intelligenz)
  • Project partners: Technical University of Darmstadt (TU Darmstadt)
  • Project duration: January 2021 – December 2025
  • Project funding: 4.7 million EUR
  • Funded by: Hessian State Ministry of Higher Education, Research and the Arts
  • Funding Line: LOEWE Research Cluster, Funding Round 13