Until a few years ago, intelligent systems such as robots and digital voice assistants had to be tailored to narrow, well-defined tasks and contexts, and needed to be programmed and fine-tuned by experts. Recent developments in artificial intelligence have led to a paradigm shift: instead of explicitly encoding knowledge about every information processing step at development time, machines are endowed with the ability to learn. Machine learning makes it possible to exploit large numbers of data samples, extracting patterns that, ideally, generalize to new situations. In recent years, groundbreaking performance gains have been achieved with deep neural networks, whose functionality is inspired by the structure of the human brain: a large number of interconnected artificial neurons, organized in layers, process input data at considerable computational cost. Although experts understand the inner workings of such systems in principle, having designed the learning algorithms themselves, they are often unable to explain or predict a system's intelligent behavior because of its complexity. Such systems end up as black boxes, raising the question of how their decisions can be understood and trusted.
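To make the layered structure concrete, a standard feed-forward network can be sketched as a composition of simple functions; the notation below ($W^{(l)}$, $\mathbf{b}^{(l)}$, $\sigma$) is conventional rather than taken from any particular system:

\[
\mathbf{h}^{(l)} = \sigma\!\left(W^{(l)} \mathbf{h}^{(l-1)} + \mathbf{b}^{(l)}\right), \qquad l = 1, \dots, L,
\]

where $\mathbf{h}^{(0)}$ denotes the input, $W^{(l)}$ and $\mathbf{b}^{(l)}$ are the learned weights and biases of layer $l$, and $\sigma$ is a nonlinear activation function. The output $\mathbf{h}^{(L)}$ is thus a deep composition of potentially millions of such elementary operations, which is precisely why the overall decision process is so hard to inspect.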