Neural Circuits for Reinforcement Learning and Mental Simulation
Kenji Doya

Date: Wednesday, 19.07.2023 11:00 CET

Location: Building S1|15 Room 133

Abstract:

In standard "model-free" reinforcement learning, an agent learns an action policy solely from experienced state-action-reward sequences. In the "model-based" framework, an agent first learns an internal model of state transitions (state, action → next state) and uses it either to plan action sequences toward a goal or to estimate the current state from past states and actions while accounting for sensory uncertainty. Numerous studies indicate that the basal ganglia play a crucial role in model-free reinforcement learning. However, the neural mechanism of model-based reinforcement learning through mental simulation of imaginary states is less clear and remains an important topic of research. Our functional brain imaging study aims to elucidate the whole-brain circuit connecting the cerebellum, basal ganglia, and cerebral cortex for mental simulation. We will also provide updates on our ongoing work imaging the local circuit dynamics underlying mental simulation.
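To make the model-free vs. model-based distinction in the abstract concrete, here is a minimal, hypothetical sketch (not from the talk): a tiny deterministic chain MDP solved once by tabular Q-learning, which learns only from sampled state-action-reward experience, and once by value iteration, which plans using a known transition model. All environment details (4 states, 2 actions, goal reward) are illustrative assumptions.

```python
import random

# Hypothetical chain MDP for illustration: states 0..3, actions 0 (left) /
# 1 (right); reward 1.0 on reaching the absorbing goal state 3.
N_STATES, N_ACTIONS, GOAL = 4, 2, 3

def step(s, a):
    """Deterministic transition model: (state, action) -> (next state, reward)."""
    s2 = min(s + 1, GOAL) if a == 1 else max(s - 1, 0)
    return s2, (1.0 if s2 == GOAL else 0.0)

# --- Model-free: tabular Q-learning from state-action-reward experience ---
def q_learning(episodes=500, alpha=0.5, gamma=0.9, eps=0.1):
    Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
    for _ in range(episodes):
        s = 0
        while s != GOAL:
            # epsilon-greedy action selection
            if random.random() < eps:
                a = random.randrange(N_ACTIONS)
            else:
                a = max(range(N_ACTIONS), key=lambda a: Q[s][a])
            s2, r = step(s, a)
            # TD update toward the sampled one-step return
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

# --- Model-based: plan with the known transition model (value iteration) ---
def value_iteration(gamma=0.9, iters=50):
    V = [0.0] * N_STATES
    for _ in range(iters):
        for s in range(N_STATES):
            if s == GOAL:
                continue
            V[s] = max(r + gamma * V[s2]
                       for s2, r in (step(s, a) for a in range(N_ACTIONS)))
    return V

random.seed(0)
Q = q_learning()
V = value_iteration()
# Both approaches should prefer moving right (action 1) toward the goal.
policy_mf = [max(range(N_ACTIONS), key=lambda a: Q[s][a]) for s in range(GOAL)]
print(policy_mf)
print([round(v, 2) for v in V])
```

The model-free learner needs many sampled episodes to arrive at the same greedy policy that the model-based planner computes directly from the transition model, which is the trade-off the abstract alludes to.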