
NeuroLecture Speaker Series

FALL 2015

Pablo de Gracia

Optimizing monovision and multifocal corrections
Pablo de Gracia, Barrow Neurological Institute
Dec 4 • 11:30 am • Reynolds School of Journalism room 101

In this talk we will explain how multiple-zone multifocal designs can be used to further optimize the optical performance of modified monovision corrections. Combinations of bifocal and trifocal designs lead to higher values of optical quality (5%) and through-focus performance (35%) than designs with spherical aberration. For any given amount of optical disparity that the presbyopic patient feels comfortable with, there is a combination of a monofocal and a bi/trifocal design that offers better optical performance than a design with spherical aberration. Conventional monovision can be improved by using the bifocal and trifocal designs that can be implemented in laser in situ keratomileusis (LASIK) equipment and will soon be available to the practitioner in the form of new multifocal contact and intraocular lenses.

David Peterzell  

Discovering Sensory Processes Using Individual Differences: A Review and Factor Analytic Manifesto
David Peterzell, John F. Kennedy University (College of Graduate and Professional Studies - Clinical Psychology)
Nov 20 • 11:30 am • Reynolds School of Journalism room 101

In the last century, many vision scientists have considered individual variability in data to be "error," thus overlooking a trove of systematic variability that reveals sensory, cognitive, neural and genetic processes. This "manifesto" coincides with both long-neglected and recent prescriptions of a covariance-based methodology for vision (Thurstone, 1944; Pickford, 1951; Peterzell, Werner & Kaplan, 1993; Peterzell & Teller, 1996; Kosslyn et al. 2002; Wilmer, 2008; Wilmer et al. 2012; de-Wit & Wagemans, 2015). But the emphasis here is on using small samples to both discover and confirm characteristics of visual processes, and on reanalyzing archival data. This presentation reviews 220 years of sporadic and often neglected research on normal individual variability in vision (including 25+ years of my own research). It reviews how others and I have harvested covariance to a) develop computational models of structures and processes underlying human and animal vision, b) analyze and delineate the developing visual system, c) compare typical and abnormal visual systems, d) relate visual behavior, anatomy, physiology and molecular biology, e) interrelate sensory processes and cognitive performance, and f) develop efficient (non-redundant) tests. Some examples are from my factor-analytic research on spatiotemporal, chromatic, stereoscopic, and attentional processing.

Jack Gallant
Mapping, Modeling and Decoding the Human Brain Under Naturalistic Conditions
Jack Gallant, University of California, Berkeley (Helen Wills Neuroscience Institute)
Nov 13 • 3:00 pm • Jot Travis Building room 100

One important goal of Psychology and Neuroscience is to understand the mental and neural basis of natural behavior. This is a challenging problem because natural behavior is difficult to parameterize and measure. Furthermore, natural behavior often involves many different perceptual, motor and cognitive systems that are distributed broadly across the brain. Over the past 10 years my laboratory has developed a new approach to functional brain mapping that recovers detailed information about the cortical maps mediating natural behavior. We first use functional MRI to measure brain activity while participants perform natural tasks such as watching movies or listening to stories. We then model brain activity using quantitative computational models derived from computational neuroscience or machine learning. Interpretation of the fit models reveals how many different kinds of sensory and cognitive information are represented in systematic maps distributed across the cerebral cortex. Our results show that even simple natural behaviors involve dozens or hundreds of distinct functional gradients and areas; that these are organized similarly in the brains of different individuals; and that top-down mechanisms such as attention can change these maps on a very short time scale. These statistical modeling tools provide powerful new methods for mapping the representation of many different perceptual and cognitive processes across the human brain, and for decoding brain activity.
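The encoding-model workflow described above (fit a quantitative model from stimulus features to measured brain activity, then evaluate the fit model on held-out data) can be illustrated with a toy regularized regression on simulated data. This is a generic sketch of the approach, not the Gallant lab's actual pipeline; all dimensions, noise levels, and variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: time points x stimulus features, time points x voxels
n_time, n_feat, n_vox = 200, 50, 10
X = rng.standard_normal((n_time, n_feat))            # stimulus feature matrix
true_w = rng.standard_normal((n_feat, n_vox))        # ground-truth weights
Y = X @ true_w + 0.5 * rng.standard_normal((n_time, n_vox))  # simulated responses

# Ridge regression, w = (X'X + aI)^-1 X'Y, fit on the first half of the data
alpha = 1.0
X_tr, Y_tr = X[:100], Y[:100]
w = np.linalg.solve(X_tr.T @ X_tr + alpha * np.eye(n_feat), X_tr.T @ Y_tr)

# Evaluate by correlating predicted and actual responses on held-out data
X_te, Y_te = X[100:], Y[100:]
pred = X_te @ w
r = [np.corrcoef(pred[:, v], Y_te[:, v])[0, 1] for v in range(n_vox)]
print(np.round(np.mean(r), 2))
```

In practice the features would come from a computational model of the movie or story stimulus and the responses from fMRI, with regularization strength chosen by cross-validation.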

G. Christopher Stecker

Spatial hearing and the brain
Assembling binaural information to understand auditory space
G. Christopher Stecker, Vanderbilt University School of Medicine (Department of Hearing and Speech Sciences)
Sept 25 • 11:00 am • Jot Travis Building room 100

Spatial hearing by human listeners requires access to auditory spatial cues, including interaural time differences (ITD) and interaural level differences (ILD), in the sound arriving at the two ears. For real sounds, these cues are distributed across time and frequency, and often distorted in complex ways by echoes and reverberation. Nevertheless, young normal-hearing listeners are remarkably good at localizing sounds and understanding the auditory scene, even in acoustically complex environments. In this talk, we will discuss (1) how listeners weight and combine auditory spatial cues across cue type, time, and frequency; (2) how that ability relates to the consequences of reverberation, hearing loss, and hearing-aid technology on spatial hearing; and (3) what neuroimaging with fMRI can tell us about the neural mechanisms that process auditory spatial cues and represent the auditory scene.
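As a toy illustration of one of the cues discussed above, an interaural time difference can be estimated by cross-correlating the signals at the two ears and finding the lag of the correlation peak. The sample rate, delay, and noise-burst stimulus below are illustrative assumptions, not materials from the talk.

```python
import numpy as np

fs = 44100          # sample rate in Hz (illustrative)
itd_samples = 20    # simulated interaural delay (~0.45 ms at this rate)

# A noise burst arrives at the left ear; a delayed copy arrives at the right
rng = np.random.default_rng(1)
burst = rng.standard_normal(1024)
left = np.concatenate([burst, np.zeros(itd_samples)])
right = np.concatenate([np.zeros(itd_samples), burst])

# Cross-correlate the two ear signals; the lag of the peak estimates the ITD.
# With numpy's convention, a negative peak lag means the right-ear signal
# lags the left, i.e. the sound reached the left ear first.
lags = np.arange(-len(left) + 1, len(left))
xcorr = np.correlate(left, right, mode="full")
est = lags[np.argmax(xcorr)]
print(est, abs(est) / fs * 1e3)  # lag in samples, magnitude in milliseconds
```

Real listening adds the complications the abstract names: the cues are spread across time and frequency and smeared by reverberation, which is why cue weighting across time and frequency is an empirical question.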


April Schweinhart

Changing what you see changes how you see
Analyzing the plasticity of broadband orientation perception
April Schweinhart, University of Louisville (Psychological and Brain Sciences)
Feb 25 • 11:00 am • Reynolds School of Journalism 101

Schweinhart's work using augmented reality shows that changing the way certain features are presented in an observer's environment triggers predictable changes in subsequent perception. Traditionally, vision science examined the perception of stimulus features in isolation. More recently, researchers have begun to investigate the perception of such features in context. Consider, for example, the perception of oriented structure: incoming visual signals are processed by neurons tuned in both size and orientation at the earliest cortical levels of the visual hierarchy. Interestingly, the distribution of orientations in the environment is anti-correlated with human visual perception. Though this correspondence between typical natural scene content and visual processing is compelling, until recently the relationships between visual encoding and natural scene regularities were necessarily limited to being static and correlational. This work takes into account the recent experience of the observer to determine the plasticity of perceptual biases related to environmental regularities.

John Serences

Attentional gain versus efficient selection
Evidence from human electroencephalography  
John Serences, PhD, UC San Diego (Psychology)
March 5 • 4:00 pm • Ansari Business 106

Selective attention has been postulated to speed perceptual decision-making via one of three mechanisms: enhancing early sensory responses, reducing sensory noise, or improving the efficiency with which sensory information is read out by sensorimotor and decision mechanisms (efficient selection). Here we use a combination of visual psychophysics and electroencephalography (EEG) to test these competing accounts. We show that focused attention primarily enhances the response gain of early and late stimulus-evoked potentials that peak in the contralateral posterior-occipital and central posterior electrodes, respectively. In contrast with previous reports that used fMRI, a simple model demonstrates that response enhancement alone is sufficient to account for attention-induced changes in behavior even in the absence of efficient selection. These results suggest that spatial attention facilitates perceptual decision-making primarily by increasing the response gain of stimulus-evoked responses.

Martha Merrow

The times of their lives
Developmental and circadian timing in C. elegans
Martha Merrow, PhD, Ludwig Maximilians University Munich (Institute of Medical Psychology)
March 10 • 4:00 pm • Davidson Math and Science 102

Living organisms have developed a multitude of biological time-keeping mechanisms, from developmental to circadian (daily) clocks. Martha Merrow has been at the forefront of understanding the basic properties and molecular aspects of how the circadian clock synchronizes with environmental cues, from worms to yeast to fungi to humans. In addition to circadian clocks, she has been studying developmental clocks in worms and recently developed a new method to measure the timing of larval development, which could be used to measure sleep-like properties in worms. She started working on biological clocks as a Post-Doctoral Fellow at Dartmouth Medical School, and is currently a Full Professor and Teaching Chair in the Institute of Medical Psychology at the Ludwig-Maximilians-Universität in Munich, Germany. Beyond her teaching and research, Martha also works on developing scientific networks for chronobiologists and for women in science.

Lara Krisst

Introspections about Visual Sensory Memory During the Classic Sperling Iconic Memory Task
Lara Krisst, San Francisco State University (Mind, Brain and Behavior Program)
March 12 • 10:00 am • Reynolds School of Journalism 101

Visual sensory memory (or ‘iconic memory') is a fleeting form of memory that has been investigated with the classic Sperling (1960) iconic memory task. Sperling demonstrated that ‘more is seen than can be remembered,' or that more information is available to observers than they can normally report. Sperling established the distinction between ‘whole report' (response to a stimulus set of 12 letters) and what subjects report when cued to a row of letters in the set, or ‘partial report.' In the whole report condition participants were able to report only between three and five of the 12 letters presented; however, participants' high accuracies across partial report trials revealed that, on a given trial, information about the complete stimulus set is held momentarily in a sensory store. This finding demonstrates that subjects were able to perceive more than they were originally able to report. In a new variant of the paradigm, we investigated participants' trial-by-trial introspections about what they are, and are not, conscious of regarding these fleeting memories. Consistent with Sperling's findings, the data suggest that participants believe they could report, identify, or remember only a subset of items (~4 items). Further investigation with this paradigm, including examination of the neural correlates of the introspective process, may shed light on the neural correlates of visual consciousness.

Talia Retter

At face value
An introduction to fast periodic visual stimulation
Talia Retter, Catholic University of Louvain, Belgium (Psychological Sciences Research Institute)
March 12 • 1:00 pm • Reynolds School of Journalism 101

Fast periodic visual stimulation (FPVS) is a technique in which the presentation of stimuli at a constant rate elicits a neural response at that frequency, typically recorded with electroencephalography (EEG). A Fourier Transform is applied to the EEG data to objectively characterize this response at a pre-determined frequency of interest. Although this technique has traditionally been applied to study low-level vision, it has recently been developed to implicitly measure high-level processes in the field of face perception. In the Face Categorization Lab at the University of Louvain, FPVS has been used to study individualization of facial identities (e.g., Liu-Shuang et al., 2014) and the discrimination of faces from other object categories (e.g., Rossion et al., 2015). During my time in this lab, I have conducted experiments using FPVS examining: 1) category-selective responses to natural face and non-face images; 2) the spatio-temporal dynamics of face-selective responses; and 3) adaptation to a specific facial identity. The results of these studies will be discussed both in light of their implications for our understanding of face perception and, more generally, as examples of the richness of this methodology for understanding high-level vision in humans.
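The frequency-domain analysis at the heart of FPVS (a Fourier transform of the EEG, read out at a pre-determined frequency of interest) can be sketched on simulated data. The sampling rate, stimulation frequency, signal amplitude, and neighboring-bin noise estimate below are illustrative assumptions, not parameters from these studies.

```python
import numpy as np

# Simulate an EEG trace containing a response at the 6 Hz stimulation rate
fs = 512           # sampling rate in Hz (illustrative)
stim_freq = 6.0    # periodic stimulation frequency in Hz
dur = 20           # seconds of recording (integer number of stimulation cycles)
t = np.arange(fs * dur) / fs
rng = np.random.default_rng(0)
eeg = 2.0 * np.sin(2 * np.pi * stim_freq * t) + rng.standard_normal(t.size)

# Fourier transform; with an integer number of cycles in the record, the
# periodic response falls exactly in one frequency bin
spectrum = np.abs(np.fft.rfft(eeg)) / t.size * 2   # amplitude spectrum
freqs = np.fft.rfftfreq(t.size, 1 / fs)
bin_idx = np.argmin(np.abs(freqs - stim_freq))

# Signal-to-noise ratio: amplitude at the stimulation frequency relative to
# the mean amplitude of surrounding bins (skipping the immediate neighbors)
neighbors = np.r_[spectrum[bin_idx - 12:bin_idx - 1],
                  spectrum[bin_idx + 2:bin_idx + 13]]
snr = spectrum[bin_idx] / neighbors.mean()
print(round(spectrum[bin_idx], 2), round(snr, 1))
```

Because broadband EEG noise spreads across all frequency bins while the periodic response concentrates in one, this readout yields the very high signal-to-noise ratios the abstract mentions.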

Libby Huber

Auditory perception and cortical plasticity after long-term blindness
Libby Huber, University of Washington (Vision and Cognition Group)
March 24 • 1:00 pm • Reynolds School of Journalism 101

Early onset blindness is associated with enhanced auditory abilities, as well as plasticity within auditory and occipital cortex. In particular, pitch discrimination is found to be superior among early-blind individuals, although the neural basis of this enhancement is unclear. In this talk, I will present recent work suggesting that blindness results in an increased representation of behaviorally relevant acoustic frequencies within both auditory and occipital cortex. Moreover, we find that individual differences in pitch discrimination performance can be predicted from the cortical data. The functional significance of group and individual level differences in frequency representation will be discussed, along with the relative importance of auditory and occipital cortical responses for acoustic frequency discrimination after long-term blindness.

Nancy Xu

New Tools for Real-time Imaging of Single Live Cells 
Nancy Xu, PhD, Old Dominion University (Chemistry and Biochemistry)
April 30 • 1:00 pm • Davidson Math and Science 105

Current technologies cannot detect, image, and study multiple types of molecules in single live cells in real time, with sufficient spatial and temporal resolution, over an extended period of time. To better understand cellular function in real time, we have developed several new ultrasensitive nanobiotechnologies, including far-field photostable-optical-nanoscopy (PHOTON), photostable single-molecule-nanoparticle-optical-biosensors (SMNOBS), and single-nanoparticle spectroscopy, for mapping dynamic cascades of membrane transport and signal transduction pathways in single live cells in real time at single-molecule and nanometer resolution. We have demonstrated that these powerful new tools can be used to quantitatively image single molecules, to study their functions in single live cells with superior temporal and spatial resolution, and to address a wide range of crucial biochemical and biomedical questions. The research results and experimental designs will be discussed in this seminar.

Noelle L'Etoile

Endogenous RNAi and behavior in C. elegans
Noelle L’Etoile, PhD, UCSF (Department of Cell and Tissue Biology)
April 30 • 4:00 pm • Ansari Business 106

My group's goal is to understand how molecules, cells, circuits and the physiology of an intact organism work together to produce learned and inherited behaviors. We combine the powerful genetics and accessible cell biology with the robust behaviors of the nematode C. elegans to approach this question. I will discuss our findings that within the sensory neuron small endogenous RNAs (endo-siRNAs) provide some of the plasticity of the olfactory response. The biogenesis of these small RNAs is as mysterious as their regulation by experience and I will describe our attempts to understand each process. Within the circuit, I will touch upon how we are examining synaptic remodeling in development and in the adult animal as it adapts to novel stimuli and metabolic stress. The optical transparency of C. elegans provides a unique window into the real time dynamics of circuits. To take advantage of this, we are developing visual reporters for simultaneous imaging of several aspects of neuronal physiology: calcium transients, pH fluctuations, cGMP and cAMP fluxes and chromatin dynamics within the entire nervous system of the living, behaving animal. I will also present some of our recent findings that may link experience to inherited behaviors.

Charlotte DiStefano

Understanding kids who don’t talk
Using EEG to measure language in minimally verbal children with ASD
Charlotte DiStefano, PhD, UCLA (Center for Autism Research and Treatment)
May 8 • 4:00 pm • Mathewson-IGT Knowledge Center 107

Approximately 30% of children with autism spectrum disorder (ASD) remain minimally verbal past early childhood. These children may have no language at all, or may use a small set of words and fixed phrases in limited contexts. Although very impaired in expressive language, minimally verbal children with ASD may present with significant heterogeneity in receptive language and other cognitive skills. Accurately measuring these skills presents a challenge, due to limitations in how well these children are able to understand and comply with assessment instructions. Recently, there has been increased interest in using passive, or implicit, measures when studying such populations, since they do not require the child to make overt responses or even understand the task. One such measure is electroencephalography (EEG), which records electrical activity within the brain and provides information about processing in real time. EEG recordings can also be used to evaluate event-related potentials (ERPs), which are measurements of the brain's electrical activity in response to a specific stimulus (such as a word or a picture). We can then use this information to understand more about an individual's cognitive development, improving our ability to develop targeted interventions. We have so far collected EEG and ERP measures in minimally verbal children with ASD across a variety of domains, including resting state, visual statistical learning, face processing, word segmentation, and lexical processing. These data, along with careful behavioral assessments, have led us to a greater understanding of the heterogeneity within the minimally verbal group, as well as how these children differ from verbal children with ASD and typically developing children.

Olivier Collignon

Brain plasticity underlying sight deprivation and restoration: A complex interplay
Olivier Collignon, PhD
University of Trento, Italy (Center for Mind/Brain Sciences)
May 22 • 11:00 am • Reynolds School of Journalism 101

Neuroimaging studies involving blind individuals have the potential to shed new light on the old ‘nature versus nurture' debate on brain development: while the recruitment of occipital (visual) regions by non-visual inputs in blind individuals highlights the ability of the brain to remodel itself through experience (nurture), the observation of specialized cognitive modules in the reorganized occipital cortex of blind individuals, similar to those observed in the sighted, highlights the intrinsic constraints imposed on such plasticity (nature). In the first part of my talk, I will present novel findings demonstrating how early blindness induces a large-scale imbalance between the sensory systems involved in the processing of auditory motion.

These reorganizations in the occipital cortex of blind individuals raise crucial challenges for sight restoration. Recently, we had the unique opportunity to track the behavioral and neurophysiological changes taking place in the occipital cortex of an early and severely visually impaired patient before, as well as 1.5 and 7 months after, sight restoration. An in-depth study of this exceptional patient highlighted the dynamic nature of the occipital cortex facing visual deprivation and restoration. Finally, I will present data demonstrating that even a short, transient period of visual deprivation (only a few weeks) during the early sensitive period of brain development leads to enduring large-scale crossmodal reorganization of the brain circuitry typically dedicated to vision, even years after visual input is restored.

Bruno Rossion

Understanding face perception with fast periodic visual stimulation
Bruno Rossion, PhD
Catholic University of Louvain, Belgium (Psychological Sciences Research Institute)
May 26 • 1:00 pm • Reynolds School of Journalism 101

When the human brain is stimulated at a rapid periodic rate, it synchronizes its activity exactly to this frequency, leading to periodic responses recorded by the electroencephalogram (EEG). In vision, periodic stimulation has been used essentially to investigate low-level processes and attention, and has recently been extended to understand high-level visual processes, in particular face perception (Rossion & Boremanse, 2011). In this presentation, I will summarize a series of studies carried out over the last few years that illustrate the strengths of this approach: the objective (i.e., exactly at the experimentally-defined frequency rate) definition of neural activity related to face perception, the very high signal-to-noise ratio, the independence from explicit behavioral responses, and the identification of perceptual integration markers. Overall, fast periodic visual stimulation is a highly valuable approach for understanding the sensitivity to visual features of complex visual stimuli and their integration, in particular for individual faces, and in populations presenting lower sensitivity of their brain responses and/or requiring rapid, objective assessment without explicit behavioral responses (e.g., infants and children, clinical populations, animals).
