Explaining Decisions: On Problem Solving and Analogical Reasoning in Human-Centered Explainable AI

Author: Claire Ott

Referees:
Prof. Dr. Frank Jäkel
Prof. Dr. Kai-Uwe Kühnberger

Defense: 23.10.2025

Abstract:

Many problems humans face require making consecutive decisions while gathering information about the problem, its states, possible actions, and goals. These sequential decisions may have to be made under uncertainty or in changing environments. Algorithms are often used to help people find good solutions, especially when the problem requires optimizing certain aspects, such as finding the cheapest, fastest, or most profitable solution. These problems are often too complex for humans to solve alone, but they also involve weak constraints or preferences that are hard to pin down, so they cannot be solved entirely by machines either; humans have to work with machine assistance to solve them. Such collaboration requires a good understanding of the suggested solutions and can be greatly improved when machines use representations and reasoning similar to those of their human coworkers and provide adequate explanations.

The first step towards generating such explanations is to analyze different problem-solving tasks. A task's states, actions, transitions, and goals determine how the task presents itself to the problem solver. We examined tasks used in problem-solving and sequential decision-making experiments and defined ten descriptive features. These features determine the structure of the search space and in turn influence how humans and algorithms can solve the task and which heuristics and representations participants use. We compared 22 sequential decision-making tasks using these ten features.

One such decision-making paradigm is the Furniture Factory, a management task we designed based on a resource allocation Mixed Integer Linear Program (MILP). We used the task in two forms: an exploration task and a more gamified version with sequential decisions that could not be reversed. Participants had to use a given amount of various resources to build four kinds of furniture items with the goal of maximizing the resulting profit. Participants generally performed well and often found satisfactory solutions, but had a hard time finding or recognizing optimal solutions. We gathered participants' concurrent reasoning and their post-hoc explanations in two qualitative studies. We formalized the heuristics participants mentioned and analyzed their complexities in order to use them as the basis for automated explanations. A third behavioral study was conducted to gather a larger dataset and compare the identified heuristics with participants' action trajectories. Here, we could match 87% of participants' decisions with at least one heuristic.
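To make the underlying problem class concrete, the following sketch sets up a small resource allocation MILP in the spirit of the Furniture Factory, using SciPy's MILP solver. The item names, profits, resource requirements, and stock levels are invented for illustration and are not the numbers from the actual task.

    # Minimal resource-allocation MILP sketch (hypothetical data, not the actual task).
    import numpy as np
    from scipy.optimize import milp, LinearConstraint, Bounds

    items = ["bookcase", "desk", "chair", "stool"]       # invented item names
    profit = np.array([30.0, 40.0, 15.0, 12.0])          # profit per item built
    # Rows: wood, metal, labour hours; columns follow the item order above.
    usage = np.array([
        [4.0, 5.0, 1.0, 2.0],
        [1.0, 2.0, 1.0, 1.0],
        [3.0, 4.0, 1.0, 1.0],
    ])
    stock = np.array([60.0, 25.0, 50.0])                 # available resources

    result = milp(
        c=-profit,                                        # milp minimizes, so negate the profit
        constraints=LinearConstraint(usage, ub=stock),    # resource usage must not exceed stock
        integrality=np.ones(len(items)),                  # only whole furniture items can be built
        bounds=Bounds(lb=0, ub=np.inf),                   # quantities cannot be negative
    )
    print(dict(zip(items, result.x)), "profit:", -result.fun)

Dropping the integrality requirement yields the corresponding Linear Program.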

The heuristics were used in a generalized form to generate a step-by-step rationale for the optimal solution of resource allocation and requirement satisfaction Linear Programs (LPs). We implemented a tool, SimplifEx, that uses various simplification and preprocessing methods to structure an LP and provides post-hoc explanations, based on the heuristics, for the different items in an optimal solution. Additionally, it produces a graph representation of the different steps and of dominance relations between variables. These relations add valuable structure to an LP and its optimal solution, so we generalized the definition of dominance to include more cases.
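As an illustration of the kind of structure such comparisons provide, the sketch below checks one simple, commonly used notion of dominance between LP variables: a variable dominates another if it yields at least as much profit while consuming no more of any resource. This is only an illustrative check on the hypothetical data above, not necessarily the generalized definition used in SimplifEx.

    # Illustrative pairwise dominance check (same hypothetical data as above).
    import numpy as np

    profit = np.array([30.0, 40.0, 15.0, 12.0])
    usage = np.array([
        [4.0, 5.0, 1.0, 2.0],
        [1.0, 2.0, 1.0, 1.0],
        [3.0, 4.0, 1.0, 1.0],
    ])

    def dominates(i, j):
        """Variable i weakly dominates variable j: at least the profit, at most the resource use."""
        return profit[i] >= profit[j] and np.all(usage[:, i] <= usage[:, j])

    n = len(profit)
    pairs = [(i, j) for i in range(n) for j in range(n) if i != j and dominates(i, j)]
    print(pairs)   # with this data the "chair" column dominates the "stool" column: [(2, 3)]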

Comparing variables based on their values can add structure and give valuable insight into how the variables in an LP relate to one another. Humans often use comparisons in their reasoning and explanations: they do not just compare single entities based on their features, but also draw sophisticated analogies between known and novel problems in order to solve the latter.

Analogies are a staple of human cognition and a common tool for conveying and explaining novel concepts. We introduced two novel approaches to formalizing domains and combining them using Category Theory (CT). The first approach uses colored directed multigraphs, while the second represents the base and the target domain of an analogy as separate categories. The CT concept of a pullback is used to generate a representation of possible matches between two domains, and pushouts are used to combine the two domains into a new blended domain.
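As a rough illustration of the blending step, the sketch below glues two small labeled graphs along a shared set of nodes, which is what a pushout amounts to when the shared domain is a discrete graph and both maps are injective. The graphs, node names, and maps are made up and far simpler than the colored directed multigraphs used in the thesis.

    # Toy pushout of two labeled graphs along shared nodes (invented example data).
    def pushout(B_edges, T_edges, f, g):
        """Glue B and T along G: node f[x] of B is identified with node g[x] of T."""
        ren_B = {f[x]: ("glued", x) for x in f}          # B-nodes that come from the shared graph
        ren_T = {g[x]: ("glued", x) for x in g}          # T-nodes that come from the shared graph
        lab_B = lambda v: ren_B.get(v, ("B", v))         # all other B-nodes keep their identity
        lab_T = lambda v: ren_T.get(v, ("T", v))         # all other T-nodes keep their identity
        return ({(lab_B(u), lab_B(v), e) for u, v, e in B_edges}
                | {(lab_T(u), lab_T(v), e) for u, v, e in T_edges})

    # Shared graph G has a single node "agent", mapped into the base and the target domain.
    base   = {("manager", "factory", "runs")}
    target = {("solver", "search_space", "explores")}
    blend = pushout(base, target, f={"agent": "manager"}, g={"agent": "solver"})
    print(blend)   # the two edges now hang off the single glued "agent" node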

In summary, this thesis combines several approaches to analyzing and comparing sequential decision-making problems and how people represent and solve them, with the goal of using this knowledge to generate cognitively adequate explanations. Such explanations can form the basis for making assistant tools more accessible and for improving the collaboration of humans and machines.