Colloquia & Guest Speakers

Seeing Space Through Time: How Humans Process Visual Space

Professor Michele Rucci, Brain and Cognitive Sciences and Center for Visual Science, University of Rochester

Monday, September 26, 2022
3:30 p.m.

In-person in Goergen 101 & Zoom



This talk focuses on how the human visual system establishes spatial representations. Unlike other sensory modalities, in which spatial information needs to be extracted from the incoming signals, visual perception starts with a sophisticated imaging system, the eye, that explicitly preserves spatial information on the retina. This may lead to the assumption that human vision is predominantly a passive spatial process: all that is needed is to transmit the retinal image to the cortex, like uploading a digital photograph, to establish a map of the scene. However, this deceptively simple analogy does not consider the strong temporal sensitivity of visual neurons and is inconsistent with theoretical models and experiments that study visual perception in the context of normal motor behavior. Here, I will review recent evidence in support of active space-time encoding, the idea that—as with other senses—vision relies heavily on motor strategies to encode spatial information in the temporal domain. I will describe some of the systems developed for testing these ideas and discuss implications for prosthetic devices and virtual reality displays.


Professor Michele Rucci

Michele Rucci is a professor of Brain and Cognitive Sciences at the University of Rochester and a member of the Center for Visual Science. He received Laurea (MA) and PhD degrees in biomedical engineering from the University of Florence and the Scuola Superiore S. Anna in Pisa, respectively. He was then a fellow in computational neuroscience at the Neurosciences Institute in San Diego and a faculty member at Boston University, where he was appointed professor in psychological and brain sciences. His research integrates experimental and theoretical approaches to elucidate the computational and biological mechanisms of visual perception. Research in his laboratory has revealed novel contributions of eye movements to spatial vision, raised specific hypotheses on how eye movements influence the neural encoding of visual information and visual development, produced new methods for eye tracking and real-time control of retinal stimulation, and led to robots directly controlled by models of neural pathways.