With this project we want to experimentally investigate and quantify human behavior in sequential tasks that require human subjects to detect spatial or temporal visual targets. Importantly, we will develop normative computational models of these tasks to understand the observed human behavior. Such models, particularly those based on partially observable Markov decision processes (POMDPs), have the property that the different forms of uncertainty inherent in such tasks, as well as behavioral goals quantified through rewards, can be represented in interpretable and explainable forms. Here, perception and action are inseparably intertwined, because sequential actions affect both the belief about the state of the world and the expected future reward. The relevance of this research extends well beyond cognitive science, psychology, neuroscience, and applied fields such as usability, as the human capability of handling multiple sources of uncertainty is not well understood and still unmatched by artificial systems.
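To make the POMDP idea concrete, the following is a minimal illustrative sketch, not the project's actual model: a two-state POMDP belief update, where all numbers (transition and observation probabilities) are hypothetical. The belief b(s) summarizes the agent's uncertainty about the hidden world state and is recomputed after every observation, which is precisely why actions that change what will be observed also change future beliefs and expected rewards.

```python
import numpy as np

# Hypothetical two-state example (numbers chosen for illustration only).
# T[s, s'] : transition model P(s' | s) under a fixed action.
# O[s', o] : observation model P(o | s'); note it is ambiguous -- both
#            states can generate both observations, just with different
#            probabilities.
T = np.array([[0.9, 0.1],
              [0.2, 0.8]])
O = np.array([[0.7, 0.3],
              [0.4, 0.6]])

def belief_update(b, o):
    """One Bayes-filter step: predict with T, then weight by P(o | s')."""
    predicted = b @ T                # prediction: P(s') = sum_s b(s) T[s, s']
    posterior = predicted * O[:, o]  # multiply by observation likelihood
    return posterior / posterior.sum()

b = np.array([0.5, 0.5])   # start maximally uncertain
b = belief_update(b, o=0)  # after observing o = 0, belief shifts toward state 0
```

The same recursion underlies any POMDP-based model of sequential perception: the belief is a sufficient statistic of the observation history, so optimal policies can be defined over beliefs rather than raw observations.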
A fundamental component of interacting with the world consists in acquiring task-relevant information to achieve our goals. What makes this a difficult problem is the inescapable probabilistic nature of our world. Uncertainty about the true state of the world arises because sensory measurements are often ambiguous: different states of the world can result in the same sensory measurement, and one state of the world can result in different sensory measurements. Another source of uncertainty is the variability of internal responses, often characterized as internal noise. A further source of uncertainty is that the consequences of our actions do not always play out as intended, for both internal and external reasons. Similarly, the consequences of our actions with regard to the achievement of our goals, which can be delayed, are also usually subject to uncertainty. Finally, we are often confronted with situations in which we do not know the appropriate model describing the relationship between sensory measurements and the state of the world, and this uncertainty needs to be reduced through learning. Thus, uncertainty is a fundamental and pervasive factor governing our interactions with the world.
Gaze selection is the epitome of a behavior in which these uncertainties come together. To reduce the uncertainty about our surroundings, we sequentially move our eyes towards different parts of the scene. Based on the acquired information and internal states, we decide where to direct gaze next. Thus, sensing, deciding, acting, and learning are fundamentally intertwined. Accordingly, in an uncertain and dynamic world, it is particularly important to adopt strategies that handle these uncertainties in such a way that we can successfully achieve our goals.
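One common way to formalize such uncertainty-reducing gaze strategies is to pick the fixation that maximizes expected information gain, i.e. minimizes the expected posterior entropy of the belief over target locations. The sketch below uses entirely hypothetical numbers (three candidate locations, a binary detection with assumed hit rate `d` and false-alarm rate `f`) and is only one simple instantiation of the idea, not the project's model.

```python
import numpy as np

def entropy(p):
    """Shannon entropy (nats) of a probability vector."""
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def expected_posterior_entropy(b, fix, d=0.9, f=0.1):
    """Expected entropy of the belief after fixating location `fix`.
    Fixation yields a binary detection: hit rate d if the target is at
    the fixated location, false-alarm rate f otherwise (assumed values)."""
    n = len(b)
    total = 0.0
    for lik in (np.where(np.arange(n) == fix, d, f),          # outcome: "seen"
                np.where(np.arange(n) == fix, 1 - d, 1 - f)):  # outcome: "not seen"
        joint = b * lik
        p_o = joint.sum()                  # probability of this outcome
        if p_o > 0:
            total += p_o * entropy(joint / p_o)
    return total

b = np.array([0.6, 0.3, 0.1])  # prior belief over 3 candidate locations
best = min(range(3), key=lambda i: expected_posterior_entropy(b, i))
```

With this prior, fixating the most probable location reduces expected uncertainty the most, but with other priors and detection parameters the greedy information-gain choice can differ from simply looking at the current maximum-probability location.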
Project: Active vision: control of eye movements and probabilistic planning
Project partners: Technical University of Darmstadt, Prof. Constantin Rothkopf, PhD
Project duration: 2019–2022 (36 months)
Project funding: EUR 166,000
Funded by: DFG – Deutsche Forschungsgemeinschaft (German Research Foundation)