PlexPlain - Explaining Linear Programs

PlexPlain („Erklärende KI für Komplexe Lineare Programme am Beispiel intelligenter Energiesysteme“, i.e. explainable AI for complex linear programs, exemplified by intelligent energy systems) is an R&D project funded by the German Federal Ministry of Education and Research. Its goal is the automated generation of explanations for complex linear programs, with a focus on applications in the energy sector.

TU Darmstadt’s Centre for Cognitive Science participates with its research groups and PIs: Models of Higher Cognition (Frank Jäkel, Project Leader), Artificial Intelligence and Machine Learning (Kristian Kersting), and Psychology of Information Processing (Constantin Rothkopf). The application domain is represented by TU Darmstadt’s research group Energy Information Networks and Systems (Florian Steinke) and by associated partners from the energy industry: Siemens AG (Corporate Technology, Research in Energy and Electronics) in Munich and Entega AG in Darmstadt.

As technical and social systems increase in complexity, Artificial Intelligence (AI) promises to help us manage these systems by providing support for planning and decision making. However, predictions and action policies generated by AI and Machine Learning are usually not transparent, i.e. AI algorithms do not provide us with explanations for their solutions. In the applications where AI support is needed most, systems can easily involve millions of variables, and their interactions are hardly understandable even for experts. This is due to the sheer size of those systems, but it is also a result of the complexity and opaqueness of AI algorithms.

Objectives: PlexPlain will investigate how human experts understand and explain complex systems and the AI algorithms that support decision making for these systems. The goal is an (at least partially) automated generation of cognitively adequate explanations that also support non-expert users of AI. Applications will focus on examples from the energy sector, e.g. policies for the transition to renewable energy and prediction of the market price for electricity, but will extend to other problem domains over the course of the project.

Methodology: PlexPlain will conduct behavioural studies to examine how humans develop an understanding of linear programs. The observed human strategies will be used to develop algorithms that simplify linear programs, translate them into graphical models, and finally generate cognitively adequate explanations. PlexPlain will exploit the fact that linear programming, i.e. the optimization of a linear objective function under linear constraints, is a fundamental and widely used AI method for optimization and planning in complex systems. In addition, a variety of other current methods in Machine Learning and AI, e.g. neural networks or reinforcement learning, can be analysed with linear programming as well. Linear programs therefore represent a large and relevant class of problems for which AI should provide not just solutions but also cognitively adequate explanations.
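To make the problem class concrete, here is a minimal sketch of such a linear program: a toy dispatch model in which two generators must cover a fixed demand at minimum cost. All names, costs, and capacities are illustrative assumptions, not project data; the model is solved with SciPy’s linprog.

```python
# Minimal illustrative linear program (hypothetical numbers, not project data):
# dispatch two generators to cover a fixed demand at minimum cost,
#     min c^T x   s.t.   A_eq x = b_eq,   0 <= x <= capacity.
from scipy.optimize import linprog

c = [60.0, 10.0]              # generation cost in EUR/MWh: gas, solar
A_eq = [[1.0, 1.0]]           # gas + solar ...
b_eq = [100.0]                # ... must exactly meet the demand of 100 MWh
bounds = [(0, 80), (0, 40)]   # capacities: gas <= 80 MWh, solar <= 40 MWh

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
print("dispatch:", res.x)     # [60. 40.]: cheap solar at its limit, gas covers the rest
print("total cost:", res.fun) # 4000.0 EUR
print("demand dual:", res.eqlin.marginals)  # shadow price of demand, set by gas at 60 EUR/MWh
```

Even this tiny model carries the raw material for explanations: the solution says what to do, the binding capacity bound says why solar cannot cover more, and the dual of the demand constraint gives the marginal price set by the gas unit, exactly the kind of narrative PlexPlain aims to generate automatically for models that are orders of magnitude larger.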

Contacts:

Frank Jäkel, Florian Steinke

Conference, Journal and Magazine Articles:

[1] Jonas Hülsmann and Florian Steinke (2020): Explaining Complex Energy Systems: A Challenge. Poster presented at the Tackling Climate Change with Machine Learning workshop at NeurIPS, December 11, 2020.

[2] Matej Zečević, Devendra Singh Dhami, Athresh Karanam, Sriraam Natarajan, Kristian Kersting (2021): Interventional Sum-Product Networks: Causal Inference with Tractable Probabilistic Models. In Advances in Neural Information Processing Systems 34 (NeurIPS 2021).

[3] Jonas Hülsmann, Lennart J. Sieben, Mohsen Mesgar, Florian Steinke (2021): A Natural Language Interface for an Energy System Model. In 2021 IEEE PES Innovative Smart Grid Technologies Europe (ISGT Europe), pp. 1–5. doi: 10.1109/ISGTEurope52324.2021.9640196.

[4] Matej Zečević, Devendra Singh Dhami, Kristian Kersting (2022). Intriguing Parameters of Structural Causal Models. arXiv preprint arXiv:2105.12697, under review at IJCAI 2022.

[5] Matej Zečević, Devendra Singh Dhami, Constantin Rothkopf, Kristian Kersting (2022). Structural Causal Interpretation Theorem. arXiv preprint arXiv:2110.02395, under review at ICML 2022.

[6] David Steinmann, Matej Zečević, Devendra Singh Dhami, Kristian Kersting (2022). Machines Explaining Linear Programs. Under review at ICML 2022.

[7] Florian Peter Busch, Matej Zečević, Devendra Singh Dhami, Kristian Kersting (2022). Attributions Beyond Neural Networks: The Linear Program Case. Under review at ICML 2022.

[8] Matej Zečević, Devendra Singh Dhami, Kristian Kersting (2023). Interventions in Graph Neural Networks Lead to New Neural Causal Models. arXiv preprint arXiv:2109.04173, under review at TMLR.

[9] Claire Ott, Inga Ibs, Constantin Rothkopf, Frank Jäkel (2022). Leveraging Human Optimization Strategies for Explainable AI. Talk at the Workshop on Human Behavioral Aspects of (X)AI, 2022.

[10] Jonas Hülsmann, Julia Barbosa, Florian Steinke (2023). Local Interpretable Explanations of Energy System Designs. Energies 16, no. 5: 2161. https://doi.org/10.3390/en16052161

[11] Matej Zečević, Florian Peter Busch, Devendra Singh Dhami, Kristian Kersting (2022). Finding Structure and Causality in Linear Programs. In the International Conference on Learning Representations (ICLR) Workshop on “Objects, Structure and Causality”.

Theses:

[A1] Frodl, E. (2021): The Furniture Company: Building Games to Measure Human Performance in Optimization Problems. Bachelor’s Thesis, Advisors: F. Jäkel, C. Ott. Technische Universität Darmstadt, 2021.

[A2] Sieben, L. (2021): Natural Language Interface for an Energy System Design Tool. Master’s Thesis, Advisors: F. Steinke, J. Hülsmann. Technische Universität Darmstadt, 2021.

[A3] Seng, J. (2021): Causal Discovery in Energy System Models. Master’s Thesis, Advisors: F. Steinke, K. Kersting. Technische Universität Darmstadt, 2021.

[A4] Busch, F. P. (2022): Explaining Neural Network Representations of Linear Programs. Master’s Thesis, Advisors: K. Kersting, M. Zečević. Technische Universität Darmstadt, 2022.

[A5] Steinmann, D. (2022): Explaining Linear Programs via Neural Attribution Methods. Master’s Thesis, Advisors: K. Kersting, M. Zečević. Technische Universität Darmstadt, 2022.

[A6] Dotterer, S. (2022): Investigating the Influence of Different Cost-Profit Ratios on Human Performance and Strategies in Optimization Problems in an Eye Tracking Experiment. Bachelor’s Thesis, Advisors: C. A. Rothkopf, I. Ibs. Technische Universität Darmstadt, 2022.

[A7] Uetz, P. (2022): Investigation of the Vulnerability of Energy System Models to Adversarial Attacks. Master’s Thesis, Advisors: F. Steinke, J. Hülsmann. Technische Universität Darmstadt, 2022.

[A8] Pohl, A. (2023): Die Heidelberger Struktur-Lege-Technik als Werkzeug zur Analyse subjektiver Theorien im Kontext des Planspiels Energiewende [The Heidelberg Structure Formation Technique as a tool for analysing subjective theories in the context of the energy transition simulation game]. Bachelor’s Thesis, Advisors: F. Jäkel, C. Ott. Technische Universität Darmstadt, 2023.

[A9] Rödling, S. (2023): Providing Causal Explanations Over Time: An Extension of SCE for Time-Series Data. Master’s Thesis, Advisors: K. Kersting, M. Zečević. Technische Universität Darmstadt, 2023.

[A10] Hülsmann, J. (2024): Aspects of Explanations for Optimization-Based Energy System Models. Doctoral Thesis, Referees: Steinke, F., Jäkel, F. Technische Universität Darmstadt, 2024.

Preprints and Accepted Articles:

[p1] Inga Ibs, Claire Ott, Frank Jäkel, Constantin Rothkopf (under review). From human explanations to explainable AI: Insights from constrained optimization.

[p2] Claire Ott and Frank Jäkel (under review). SimplifEx: Simplifying and explaining linear programs.

[p3] Claire Ott, Inga Ibs, Constantin Rothkopf, Frank Jäkel (in preparation). Unveiling the relationship between tasks: Optimization as a case for taxonomic analysis.

[p4] Matej Zečević, Devendra Singh Dhami, Constantin Rothkopf, Kristian Kersting (2022). Causal Explanations of Structural Causal Models. arXiv preprint arXiv:2110.02395.

[p5] David Steinmann, Matej Zečević, Devendra Singh Dhami, Kristian Kersting (2022). Machines Explaining Linear Programs. arXiv preprint arXiv:2206.07194.

[p6] Florian Peter Busch, Matej Zečević, Devendra Singh Dhami, Kristian Kersting (2022). Attributions Beyond Neural Networks: The Linear Program Case. arXiv preprint arXiv:2206.07203.

[p7] Matej Zečević, Devendra Singh Dhami, Kristian Kersting (2023). Interventions in Graph Neural Networks Lead to New Neural Causal Models. arXiv preprint arXiv:2109.04173.

Project Details

Project: PlexPlain – Erklärende KI für Komplexe Lineare Programme am Beispiel intelligenter Energiesysteme
Project partners: Technical University of Darmstadt (TU Darmstadt)
Project duration: April 2020 – July 2023
Project funding: EUR 1.23 million
Funded by: German Federal Ministry of Education and Research (BMBF)
Grant no.: 01IS19081
Website: https://www.softwaresysteme.pt-dlr.de/de/ki-erkl-rbarkeit-und-transparenz.php
Final Report Download