Learning to see Stuff
Kate Storrs (Justus Liebig University Giessen)


Date: Wednesday, 27.10.21 15:20-17:00 CET

Signup: If you would like to attend the talk, please register to receive a Zoom link.

Abstract:

Materials with complex appearances, like textiles and foodstuffs, pose challenges for conventional theories of vision. How does the brain learn to see properties of the world, like the glossiness of a surface, that cannot be measured by any other sense? Recent advances in unsupervised deep learning may help shed light on material perception. I will show how an unsupervised deep neural network, trained on an artificial environment of surfaces with different shapes, materials, and lighting, spontaneously comes to encode those factors in its internal representations. Most strikingly, the model makes errors in its perception of material that follow, on an image-by-image basis, the errors made by human observers. Unsupervised deep learning may provide a coherent framework for how many perceptual dimensions form, in material perception and beyond.