Symposium on Explainability 2023 Abstracts
Monday, 27.03.2023

Below you can find the abstracts for the talks of the WhiteBox Symposium on Explainability 2023.

Time: 10:00

Speaker:

Carlos Zednik
Eindhoven University of Technology, Netherlands

Abstract:

The discipline of Explainable AI is plagued by an inconsistent use of terminology. Although it is widely agreed that transparency is important, it is unclear how it should actually be achieved through explanation, interpretation, and justification. In this talk, I aim to provide conceptual clarity by integrating these and related terms within a larger normative framework for explainable AI. By considering the distinct knowledge requirements of different stakeholders and how those requirements might be met by different analytic techniques, I aim to clarify what explainable AI can actually do, what it cannot do, and what role it plays in the responsible development and use of artificial intelligence.

*** Presentation cancelled due to the German traffic strike on Monday, 27 March ***

Speaker:

Ruth M.J. Byrne
Trinity College Dublin, University of Dublin, Ireland

Abstract:

Insights from cognitive science about how people understand explanations can be instructive for the use of explanations in eXplainable Artificial Intelligence (XAI). People often create counterfactual explanations for their decisions, in which they consider how an outcome would have been different if some antecedent events had been different. Counterfactual explanations are closely related to causal ones, but people tend to reason differently about the two. I discuss recent experimental discoveries that people focus on controllable actions when they generate counterfactual explanations but not causal ones, and that people tend to envisage multiple possibilities when they understand counterfactuals but not causal assertions. I also discuss current empirical findings that people tend to subjectively prefer counterfactual to causal explanations for an AI system's decision, in familiar and unfamiliar domains, but that there are few differences in people's objective accuracy in predicting an AI system's decisions whether they have been given counterfactual or causal explanations. I suggest that central to the XAI endeavour is the requirement that automated explanations provided by an AI system must make sense to human users.

Time: 15:00

Speaker:

Wojciech Samek
TU Berlin, Fraunhofer HHI

Abstract:

The emerging field of Explainable AI (XAI) aims to bring transparency to today's powerful but opaque deep learning models. This talk will present Concept Relevance Propagation (CRP), a next-generation XAI technique which explains individual predictions in terms of localized and human-understandable concepts. Unlike the related state of the art, CRP not only identifies the relevant input dimensions (e.g., pixels in an image) but also provides deep insights into the model's representation and reasoning process. This makes CRP a perfect tool for AI-supported knowledge discovery in the sciences. In the talk we will demonstrate, on multiple datasets, model architectures and application domains, that CRP-based analyses allow one to (1) gain insights into the representation and composition of concepts in the model as well as quantitatively investigate their role in prediction, (2) identify and counteract Clever Hans filters focusing on spurious correlations in the data, and (3) analyze whole concept subspaces and their contributions to fine-grained decision making. By lifting XAI to the concept level, CRP opens up a new way to analyze, debug and interact with ML models, which is of particular interest in safety-critical applications and the sciences.
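To make the idea of concept-conditional relevance more concrete, the following is a minimal sketch, not the speaker's implementation: it runs an LRP-epsilon-style backward pass through a tiny NumPy network and restricts the relevance flow to a hypothetical "concept" subset of hidden neurons, in the spirit of the conditioning that CRP performs. The network, its weights, and the concept neuron indices are all illustrative assumptions.

```python
# Minimal sketch (not the authors' implementation) of concept-conditional
# relevance propagation in the spirit of CRP, for a toy two-layer ReLU
# network. All layer sizes, weights, and the "concept" neuron set are
# hypothetical illustrations.
import numpy as np

rng = np.random.default_rng(0)

# Toy network: input (4) -> hidden (6, ReLU) -> output (3)
W1, b1 = rng.normal(size=(6, 4)), np.zeros(6)
W2, b2 = rng.normal(size=(3, 6)), np.zeros(3)

x = rng.normal(size=4)                      # example input
a1 = np.maximum(W1 @ x + b1, 0.0)           # hidden activations
out = W2 @ a1 + b2                          # output logits

target = int(np.argmax(out))                # explain the predicted class
eps = 1e-9                                  # small stabilizer (LRP-epsilon style)

# Relevance at the output: only the explained class carries relevance.
R_out = np.zeros_like(out)
R_out[target] = out[target]

# Backward pass through the second layer: distribute relevance in
# proportion to each neuron's contribution z_jk = w_jk * a_j.
z2 = W2 * a1
R_hidden = (z2 / (z2.sum(axis=1, keepdims=True) + eps) * R_out[:, None]).sum(axis=0)

# CRP-style conditioning: keep only the relevance flowing through a chosen
# "concept" (here, a hypothetical subset of hidden neurons).
concept_neurons = [1, 4]
mask = np.zeros_like(R_hidden)
mask[concept_neurons] = 1.0
R_concept = R_hidden * mask

# Backward pass through the first layer yields a concept-conditional input
# heatmap: which input dimensions support this particular concept.
z1 = W1 * x
R_input = (z1 / (z1.sum(axis=1, keepdims=True) + eps) * R_concept[:, None]).sum(axis=0)

print("concept-conditional input relevance:", R_input)
```

Repeating the last two steps for different concept neuron sets decomposes the same prediction into per-concept input heatmaps, which is the kind of concept-level analysis the abstract describes.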