A simple model of visual localization across saccades
Research Unit 1, SCIoI Project 23
Intelligent sensorimotor systems must routinely decide whether a sensory change is caused by an external change in the world or is self-induced by their own movement relative to the world, a capability that gives rise to the perception of a stable world. Whereas much is known about how visual information is processed in a hierarchy of visual areas, we still lack a mechanistic understanding of how visual information is combined with other sources of information to achieve such stability. In this project, we aim to better understand the interplay of motor, visual, and non-visual sensory information at an algorithmic level. To this end, we will devise and implement closed-loop control models that exhibit the characteristics of biological perceptual and motor systems.
We will study the most frequent of all human actions, saccadic eye movements, as a model behavior. These visual actions are simple motor acts, yet they have massive immediate consequences for the sensory signals (e.g., a rapid smearing of the visual input as well as large displacements of stationary objects across the retina) that stand in stark contrast to the perceptual stability that defines visual experience. By modeling the oculomotor system as a closed-loop system, we will address two major questions:
(1) How do perception and action mutually inform each other, that is, how does the visual system utilize and weight information provided by efferent, visual, and proprioceptive sources to give rise to perceptual stability (sensor-fusion problem)?
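One standard way to make this weighting question concrete is reliability-weighted (inverse-variance) cue combination, in which each source contributes in proportion to its precision. The sketch below is illustrative only: the cue values, their variances, and the `fuse` helper are assumptions for demonstration, not the project's actual model.

```python
import numpy as np

def fuse(estimates, variances):
    """Reliability-weighted (inverse-variance) fusion of independent cues.

    Each cue is weighted by its precision (1/variance); the fused
    estimate has lower variance than any single cue.
    """
    w = 1.0 / np.asarray(variances, dtype=float)
    est = np.asarray(estimates, dtype=float)
    fused = float(np.sum(w * est) / np.sum(w))
    fused_var = float(1.0 / np.sum(w))
    return fused, fused_var

# Hypothetical post-saccadic estimates of eye position (degrees),
# from efference copy, a retinal landmark, and proprioception:
estimates = [10.0, 10.6, 9.5]
variances = [0.5, 0.2, 1.0]
pos, var = fuse(estimates, variances)
```

Under this scheme, the fused estimate lies closest to the most reliable cue (here the retinal landmark), and its variance is smaller than that of any individual source, which is one quantitative signature the models can be tested against.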
(2) How are motor plans and sensorimotor contingencies adjusted following discrepancies between expected and actual sensory signals to enable accurate actions upon the world (control problem)?
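A minimal illustration of such error-driven adjustment is a delta-rule model of saccadic gain adaptation, in which an intra-saccadic target step produces a consistent post-saccadic error that gradually recalibrates saccade amplitude. The parameters below (target eccentricity, step size, learning rate) are arbitrary assumptions for the sketch, not values proposed by the project.

```python
def simulate_adaptation(target=10.0, step=-2.0, lr=0.05, trials=200):
    """Delta-rule sketch of saccadic gain adaptation.

    Each trial: the saccade lands at gain * target; the target is
    displaced by `step` during the saccade, so the post-saccadic
    error between expected and actual target position drives a
    small gain update (error normalized by saccade amplitude).
    """
    gain = 1.0
    gains = []
    for _ in range(trials):
        landing = gain * target
        error = (target + step) - landing  # discrepancy after the saccade
        gain += lr * error / target        # gradual recalibration
        gains.append(gain)
    return gains

gains = simulate_adaptation()
```

With these assumed values the gain converges toward (target + step) / target = 0.8, mirroring the gradual amplitude reduction seen in backward-step adaptation paradigms.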