A Generative Model of Visual Foraging
Alasdair Clarke


Date: Wednesday, 08.02.2023 15:20 CET

Location: Building S1|03 Room 223

Abstract:

A standard approach to analysing data from cognitive psychology experiments is to use summary statistics such as accuracy and response time. While this allows us to analyse our data easily with tools such as ANOVA, it typically limits our description of human behaviour to differences between group averages. An increasingly popular alternative is to develop a generative model that can account for participants' behaviour during a trial.

In this talk I will demonstrate how this approach can be applied to the visual foraging paradigm, a visual search task in which participants must find multiple targets sequentially. Studies typically include two different target types that participants must find, as well as distractors to be ignored. A key result is that for relatively easy target discriminations (feature search), participants switch frequently between target types; however, if the targets are more difficult to distinguish from the distractors (conjunction search), the majority of participants tend to forage in ‘runs’ of the same target type. Each trial is usually characterised by the maximum run length and the number of runs, and these measures are used for analysis. A limitation of this approach is that the measures are interdependent and are influenced by the number of targets present in the scene.

We present an alternative strategy: modelling the process as generative sampling without replacement, implemented in a Bayesian multilevel model. This allows us to break behaviour down into a number of independent biases that influence target selection, including the proximity of targets, a bias for selecting targets in runs, and a bias for a particular target type, in a way that does not depend on the number of targets present. Our method therefore facilitates direct comparison between studies using different parameters. We demonstrate the use of our model with simulation examples and re-analysis of existing data. A key finding from our approach is that spatial features such as proximity and direction are among the most predictive of behaviour, yet are often overlooked in the published literature in this area. We believe our model will provide deeper insights into visual foraging data, providing a foundation for further modelling work in this area.
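The sampling-without-replacement idea can be sketched in a few lines of code. This is a minimal illustration, not the model presented in the talk: the specific feature weights, the softmax selection rule, and all parameter values below are assumptions chosen for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def forage_trial(positions, types, b_type=0.5, b_stick=1.0, b_prox=2.0):
    """Simulate one foraging trial by repeatedly selecting the next target,
    without replacement, with probability proportional to the exponential of
    a weighted sum of biases (hypothetical weights: b_type = preference for
    one target type, b_stick = bias to continue a run of the same type,
    b_prox = penalty on distance from the last-selected target)."""
    n = len(types)
    remaining = list(range(n))
    # Start from a uniformly random target.
    current = remaining.pop(rng.integers(n))
    sequence = [current]
    while remaining:
        feats = []
        for j in remaining:
            same_type = 1.0 if types[j] == types[current] else 0.0
            preferred = 1.0 if types[j] == 0 else 0.0  # arbitrary preferred type
            dist = np.linalg.norm(positions[j] - positions[current])
            feats.append(b_type * preferred + b_stick * same_type - b_prox * dist)
        # Softmax over the remaining targets (subtract max for stability).
        w = np.exp(np.array(feats) - max(feats))
        p = w / w.sum()
        idx = rng.choice(len(remaining), p=p)
        current = remaining.pop(idx)
        sequence.append(current)
    return sequence
```

Because selection probabilities are defined over whichever targets remain, the biases themselves do not depend on how many targets the display contains, which is the property that makes comparisons across studies with different set sizes possible.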