Understanding the integration of knowledge and working memory with computational models
Brad Wyble


Date: Wednesday, 09.11.22 15:20 CET

Abstract:

Understanding the structure of memory representations remains one of the great unanswered questions of cognitive science. Our ability to create memories from visual experience on the fly allows information to bridge from one moment to the next even when the physical stimulus is no longer available to the senses. The remarkable aspect of this form of working memory is its flexibility: we create memories that are pertinent to our ongoing goals and that exploit our knowledge of the visual world, which makes familiar shapes easier to remember than unfamiliar ones. This talk will discuss a new computational model of working memory (WM) called Memory for Latent Representations (MLR; Hedayati, O'Donnell, & Wyble, 2022, Nature Human Behaviour), which stores copies of one or more visual shapes as active representations. Rather than treating WM and long-term memory as two separate stores, MLR implements a tight integration between these representations. A deep learning model called a variational autoencoder provides a hierarchy of latent spaces analogous to the ventral visual stream. Working memory selects information from these latent spaces and stores it in a binding pool (Swan & Wyble, 2014, Attention, Perception, & Psychophysics). MLR builds flexible memories that emphasize different levels of abstraction depending on task requirements. Furthermore, the reconstructive aspect of the autoencoder allows the model to produce output in the same format as its input, providing a form of visual imagery. We simulate basic empirical findings on visual WM and discuss the model's implications for our understanding of compression, categorization, compositionality, and visual imagery.
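To make the storage mechanism concrete, the following is a minimal sketch of a binding pool in Python. It is not the published MLR implementation; the pool size, the sparse token gates, and the fixed random projection are illustrative assumptions. Vectors standing in for VAE latents (here simply random vectors) are superimposed into a shared pool of units, each gated by an item-specific token, and retrieved by reading back out through the same gate.

```python
import numpy as np

rng = np.random.default_rng(0)

POOL_SIZE = 2000   # number of shared binding-pool units (assumed)
LATENT_DIM = 64    # dimensionality of one latent vector (assumed)

# Fixed random projection from a latent space into the pool.
W = rng.standard_normal((POOL_SIZE, LATENT_DIM)) / np.sqrt(LATENT_DIM)

def make_token(active_frac=0.1):
    """An item token: a random sparse gate over the pool units."""
    return (rng.random(POOL_SIZE) < active_frac).astype(float)

def store(pool, latent, token):
    """Superimpose one token-gated latent into the shared pool."""
    return pool + token * (W @ latent)

def retrieve(pool, token):
    """Read an approximate latent back out through the same token gate."""
    return W.T @ (token * pool)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Stand-ins for latent vectors taken from one level of a VAE hierarchy.
latents = [rng.standard_normal(LATENT_DIM) for _ in range(3)]
tokens = [make_token() for _ in latents]

pool = np.zeros(POOL_SIZE)
for z, t in zip(latents, tokens):
    pool = store(pool, z, t)

for i, (z, t) in enumerate(zip(latents, tokens)):
    print(f"item {i}: cosine(stored, retrieved) = {cosine(z, retrieve(pool, t)):.2f}")
```

Because storage here is superposition in a shared resource, each additional item adds interference to every reconstruction rather than hitting a fixed slot limit, which is one way such a pool can trade memory precision against load.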