NeuroLecture Speaker Series
Steven Shevell, University of Chicago (Psychology)
Feb 26 • 11:30 am • Reynolds School of Journalism 101
Our ability to see in the natural world depends on the neural representations of objects. Signals sent from the eye to the brain are the basis for what we see, but these signals must be transformed from the image-based representation of light in the eye to an object-based representation of edges and surfaces. A challenge for understanding this transformation is the ambiguous nature of the image-based representation from the eye. Textbook examples demonstrate this ambiguity using a constant retinal image that causes perception to fluctuate between two different bi-stable percepts (as in the face-or-vase illusion, or a Necker cube that switches between two orientations). Bi-stable colors can also be experienced with ambiguous chromatic neural representations. Recent experiments (1) generate ambiguous chromatic neural representations that result in perceptual bi-stability alternating between two colors, (2) reveal that two or more distinct objects in view, each with its own ambiguous chromatic representation, often have the same color, which shows that grouping is a key aspect of resolving chromatic ambiguity, and (3) show that grouping survives even with unequal temporal properties among the separate ambiguous representations, as predicted by a model of binocularly integrated visual competition.
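Bi-stable alternation of the kind described above is commonly modeled with two mutually inhibiting neural populations that slowly adapt, so that the currently dominant representation eventually yields to its rival. A minimal sketch of that generic mechanism (not the specific binocularly integrated model from the talk; all parameters here are invented for illustration):

```python
def relu(x):
    """Half-wave rectification of net input."""
    return max(0.0, x)

def simulate(steps=20000, dt=0.001):
    """Two competing representations with mutual inhibition and slow
    adaptation; returns which unit is dominant at each time step."""
    r1, r2 = 0.6, 0.4        # firing rates of the two representations
    a1, a2 = 0.0, 0.0        # slow adaptation variables
    beta, phi = 2.0, 2.5     # inhibition / adaptation strength (invented)
    tau, tau_a = 0.02, 1.0   # fast rate vs. slow adaptation time constants (s)
    dominant = []
    for _ in range(steps):
        # Compute all derivatives before updating (simultaneous Euler step).
        dr1 = (-r1 + relu(1.0 - beta * r2 - phi * a1)) / tau
        dr2 = (-r2 + relu(1.0 - beta * r1 - phi * a2)) / tau
        da1 = (-a1 + r1) / tau_a
        da2 = (-a2 + r2) / tau_a
        r1 += dt * dr1
        r2 += dt * dr2
        a1 += dt * da1
        a2 += dt * da2
        dominant.append(1 if r1 > r2 else 2)
    return dominant

dominance = simulate()
switches = sum(1 for i in range(1, len(dominance))
               if dominance[i] != dominance[i - 1])
print(switches)  # number of alternations between the two percepts
```

With these toy parameters the dominant representation adapts, its inhibition of the rival weakens, and dominance switches repeatedly, which is the signature behavior the constant-stimulus, fluctuating-percept demonstrations rely on.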
Understanding Person Recognition: Psychological, Computational, & Neural Perspectives
Alice O'Toole, University of Texas at Dallas (School of Behavioral and Brain Sciences)
Feb 19 • 11:30 am • Reynolds School of Journalism 101
Over the past decade, face recognition algorithms have shown impressive gains in performance, operating under increasingly unconstrained imaging conditions. It is now commonplace to benchmark the performance of face recognition algorithms against humans and to find conditions under which the machines perform more accurately than humans. I will present a synopsis of human-machine comparisons that we have conducted over the past decade, in conjunction with U.S. Government-sponsored competitions for computer-based face recognition systems. From these comparisons, we have learned much about human face recognition, and even more about person recognition. These experiments have led us to examine the neural responses in face- and body-selective cortical areas during person recognition in natural viewing conditions. I will describe the neuroimaging findings and conclude that human expertise for "face recognition" is better understood in the context of the whole person in motion, where the body and gait provide valuable identity information that supplements the face in poor viewing conditions.
Color Naming, Color Communication and the Evolution of Basic Color Terms
Delwin Lindsey, Ohio State University (Psychology)
Feb 19 • 12:30 pm • Reynolds School of Journalism 101
The study of the language of color is implicitly based on the existence of a shared mental representation of color within a culture. Berlin & Kay (1969) proposed that the great cross-cultural diversity in color naming occurs because different languages are at different stages along a constrained trajectory of color term evolution. However, most pre-industrial societies show striking individual differences in color naming (Lindsey & Brown, 2006, 2009). We argue that within-language diversity is not entirely lexical noise. Rather, it suggests a fundamental mechanism for color lexicon change. First, the diverse color categories observed within one society (including some that do not conform to classical universal categories) are often similar to those seen in people living in distant societies, on different continents, and speaking completely unrelated languages. Second, within-culture consensus is often low, due either to synonymy or to variation in the number and/or structure of color categories. Third, we introduce an information-theoretic analysis based on mutual information, and analyze within-culture communication efficiency across cultures. Color communication in Hadzane, Somali, and English provides insight into the structure of the lexical signals and noise in world languages (Lindsey et al., 2015). These three lines of evidence suggest a new view of color term evolution. We argue that modern lexicons evolved, under the guidance of universal perceptual constraints, from initially sparse (Levinson, 2000), distributed representations that mediate color communication poorly, to more complete, high-consensus color naming systems capable of mediating better color communication within the language community.
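The mutual-information idea can be sketched in a few lines: treat each naming event as a (color chip, color term) pair pooled across informants, and estimate the mutual information between chips and terms; a high-consensus lexicon transmits more bits per naming event than a fragmented one. A minimal sketch with invented toy data (the chip names, terms, and the two "lexicons" below are illustrative, not data or code from Lindsey et al., 2015):

```python
from collections import Counter
from math import log2

def mutual_information(pairs):
    """Estimate I(chip; term) in bits from a list of (chip, term) events."""
    n = len(pairs)
    joint = Counter(pairs)                 # p(chip, term)
    chips = Counter(c for c, _ in pairs)   # p(chip)
    terms = Counter(t for _, t in pairs)   # p(term)
    mi = 0.0
    for (c, t), k in joint.items():
        p_ct = k / n
        mi += p_ct * log2(p_ct / ((chips[c] / n) * (terms[t] / n)))
    return mi

# Invented toy data: the same four chips named under two hypothetical lexicons.
high_consensus = [("red_chip", "red"), ("red_chip", "red"),
                  ("blue_chip", "blue"), ("blue_chip", "blue"),
                  ("green_chip", "green"), ("green_chip", "green"),
                  ("yellow_chip", "yellow"), ("yellow_chip", "yellow")]

low_consensus = [("red_chip", "warm"), ("red_chip", "red"),
                 ("blue_chip", "cool"), ("blue_chip", "grue"),
                 ("green_chip", "grue"), ("green_chip", "cool"),
                 ("yellow_chip", "warm"), ("yellow_chip", "light")]

print(mutual_information(high_consensus))  # 2.0 bits: each chip fully identified
print(mutual_information(low_consensus))   # fewer bits: terms identify chips poorly
```

In the high-consensus case every term picks out exactly one of the four equally likely chips, so the lexicon transmits the full 2 bits; in the low-consensus case overlapping and synonymous terms reduce the information a listener recovers, which is the sense in which sparse, low-consensus lexicons mediate color communication poorly.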
Critical Immaturities Limiting Infant Visual Sensitivity
Angela Brown, Ohio State University (Optometry)
Feb 19 • 1:00 pm • Reynolds School of Journalism 101
The vision of the human infant is remarkably immature: visual sensitivity to light is low, contrast sensitivity is poor, visual acuity is poor, color vision is poor, vernier acuity is poor, and stereopsis is probably not possible until the infant is several months old. The visual system of the human infant is known to be biologically immature as well: the photoreceptors, especially the foveal cones, are morphologically immature, and myelination of the ascending visual pathway is not complete at birth. Also, the infant is cognitively immature; for example, the infant's attention span is short. In this talk, I will unite these immaturities into a single picture of the infant visual system: the main critical immaturity that limits infant visual performance on these psychophysical tasks is a large amount of contrast-like noise that is added linearly to the visual signal, after the sites of visual light adaptation, but before the sites of visual contrast adaptation, and likely in the retina or ascending visual pathway.
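The additive-noise account has a compact signal-detection form: if contrast-like internal noise adds linearly to the signal, then reaching a criterion detectability d' requires a contrast proportional to the total noise, so a large internal noise directly raises the contrast threshold. A minimal sketch of this standard equivalent-input-noise formulation (the noise magnitudes below are invented for illustration, not estimates from the talk, and the sketch captures only the linear-additivity claim, not the proposed retinal locus):

```python
import math

def contrast_threshold(sigma_internal, sigma_external=0.0, d_prime=1.0):
    """Contrast needed to reach criterion d' when contrast-like internal
    noise adds linearly to the signal: d' = c / sqrt(s_ext^2 + s_int^2)."""
    return d_prime * math.sqrt(sigma_external**2 + sigma_internal**2)

# Invented noise levels for a mature vs. an immature visual system.
adult = contrast_threshold(sigma_internal=0.01)   # low internal noise
infant = contrast_threshold(sigma_internal=0.50)  # large contrast-like noise
print(infant / adult)  # 50.0: thresholds scale with internal noise
```

Because the noise enters after light adaptation, its effect in this formulation is the same at every mean luminance, which is why a single contrast-like noise term can unite poor infant performance across such different psychophysical tasks.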