Speaker Erin Goddard

Erin Goddard, Ph.D.

Department of Psychology, UNSW, Sydney

Friday, May 26, 2023 | 11 a.m. | Reynolds School of Journalism Room 101


Visual feature binding and color constancy: similar processes?

Different regions of the visual cortex show specialization for encoding different visual features, raising the question of how these features are 'bound together' to create a unitary percept of each object. I will show recent work based on classification of magnetoencephalography (MEG) data, in which we found that information about color and shape separately preceded information about their conjunction in the occipital cortex. This may indicate that feedback to occipital regions is required for feature binding, which would be consistent with a role for attention, as suggested by behavioral work (e.g., visual search). While this 'binding problem' has been investigated for at least 30 years, I have more recently asked whether a process similar to feature binding might be involved in the separation of surface and illuminant properties in color constancy. I will present behavioral work in which we tested this idea and found that, like feature binding, the perceptual separation of surface and illuminant properties appears to rely on a slower, limited-capacity process.

Speaker Alice O'Toole

Alice O'Toole, Ph.D.

Department of Psychology, The University of Texas at Dallas

Friday, April 28, 2023 | 11 a.m. | Reynolds School of Journalism Room 101


Dissecting face representations in deep convolutional neural networks: A study in closing the gap between single-unit and neural population codes

Deep learning models currently achieve human levels of performance on real-world face recognition tasks. The face representations created by these networks have surprising properties that can offer insight into how the visual system might represent facial identity. My talk is organized around three fundamental advances in our understanding of how deep networks achieve a "neurally grounded" solution to the problem of face recognition. First, deep networks trained for face identification generate a representation that retains structured information about the face (e.g., identity, demographics, appearance, social traits, expression) and the input image (e.g., viewpoint, illumination). This forces us to rethink the universe of possible solutions to the problem of inverse optics in vision. Second, deep learning models indicate that high-level visual representations of faces cannot be understood in terms of interpretable features. This has implications for understanding neural tuning and population coding in the high-level visual cortex. Third, the "activation level" of neurons at the "top" of deep neural networks separates images by identity. Activation level, therefore, cannot be considered an indication of feature detection. This suggests that high response rates in networks (and, by analogy, in the high-level visual cortex) may provide little or no information about the function of the neuron. In combination, these results suggest a reevaluation of fundamental assumptions in visual neuroscience.

Speaker Rain Bosworth

Rain Bosworth, Ph.D.

Department of Liberal Studies, Rochester Institute of Technology/National Technical Institute for the Deaf

November 18, 2022 | 1 p.m., as part of the 12th Annual Meeting of the Sierra Nevada Chapter of the Society for Neuroscience (12:30-6:15 p.m.) | Pennington Health Sciences Building Auditorium 102


Language, gesture, and other communicative signals are biologically privileged starting in infancy and continue to have profound effects on human cognition later in life. Yet most of what we know about how language experience impacts cognition comes from work with spoken language. I will present findings from my lab, where we measure the gaze patterns of infants, children, and adults who are either English speakers or signers of American Sign Language as they watch a range of signals, including fingerspelling, isolated signs, signed narratives, gestures, and distorted nonsense signs. These findings provide empirical evidence for a potential for communication in the visual (rather than acoustic) modality that arises very early in life and is observed even in hearing non-signing infants. Our results also pinpoint perceptually salient cues, present in both spoken and signed languages, that transcend sensory modality.

Speaker Mark Greenlee

Mark W. Greenlee, Ph.D.

Institute for Psychology, University of Regensburg

November 18, 2022 | 4:40 p.m., as part of the 12th Annual Meeting of the Sierra Nevada Chapter of the Society for Neuroscience (12:30-6:15 p.m.) | Pennington Health Sciences Building Auditorium 102


The functional architecture of human visual cortex

Self-motion perception involves a network of cortical vestibular and visual brain regions, including the parieto-insular vestibular cortex (PIVC) and the posterior insular cortex (PIC) in the lateral cortex. In the medial cortex, the cingulate sulcus visual (CSv) area has been found to process visual-vestibular cues. Here, we report evidence suggesting that the visual-vestibular network of the medial cortex extends beyond area CSv. We examined brain activation in the medial cortex of 36 healthy right-handed participants by means of functional magnetic resonance imaging (fMRI) during stimulation with visual motion, caloric vestibular, and thermal (i.e., stimulation of the pinna) cues. We found that areas CSv and V6 responded to both visual and vestibular cues but not to thermal cues. In addition, we found a region inferior to CSv within the pericallosal sulcus (in the vicinity of the anterior retrosplenial cortex) that responded primarily to caloric vestibular cues and whose location was distinct from other known areas of the medial vestibular cortex. This 'pericallosal' vestibular region responded to neither visual nor thermal cues. It was also distinct from another retrosplenial region that responded to visual motion cues. Together, our results suggest that the visual-vestibular network in the medial cortex includes not only areas CSv and V6 but also two additional brain regions adjacent to the callosum. In their locations and in their responses to visual and vestibular cues, these two regions resemble homologous regions recently described in non-human primates.

Acknowledgements: This work was conducted in collaboration with Anton L. Beer, Markus Becker, and Sebastian M. Frank at the Institut für Psychologie, Universität Regensburg.