
Past NeuroLecture Speakers

Spring 2016

  1. Impact of Distorted Optics on Spatial and Depth Vision - Lessons from Human Disease Models
    Shrikant Bharadwaj (LV Prasad Eye Institute)

  2. Scotopic Vision and Aging
    Megan Tillman (UC Davis)

  3. Synchronization of Circadian Clocks to Daily Environmental Cycles
    Patrick Emery (University of Massachusetts Medical School)

  4. Roles of Cortical Single- and Double-Opponent Cells in Color Vision
    Robert Shapley (New York University)

  5. Pulse Trains to Percepts: The Challenge of Creating a Perceptually Intelligible World with Sight Recovery Techniques
    Ione Fine (University of Washington)

  6. Color Vision in the Peripheral Retina
    Vicki Volbrecht (Colorado State University)

  7. Transcriptional Regulation of Heart Development and Chromatin Structure
    Benoit Bruneau (UCSF)

  8. Health Law Implications of Advances in Neuroscience, Including Neuroimaging
    Stacey Tovino (UNLV)

  9. Perceptual Resolution of Color with Ambiguous Chromatic Neural Representations
    Steven Shevell (University of Chicago)

  10. Understanding Person Recognition: Psychological, Computational, & Neural Perspectives
    Alice O'Toole (University of Texas at Dallas)

  11. Color Naming, Color Communication and the Evolution of Basic Color Terms
    Delwin Lindsey (Ohio State University)

  12. Critical Immaturities Limiting Infant Visual Sensitivity
    Angela Brown (Ohio State University)

Fall 2015

  1. Neural Mechanisms of Distractor Suppression
    Steve Luck (UC Davis)

  2. Interplay between posttranslational modifications regulates the animal circadian clock
    Joanna Chiu (UC Davis)

  3. Optimizing monovision and multifocal corrections
    Pablo de Gracia (Barrow Neurological Institute)

  4. Discovering Sensory Processes Using Individual Differences: A Review and Factor Analytic Manifesto
    David Peterzell (John F. Kennedy University)

  5. Mapping, Modeling and Decoding the Human Brain Under Naturalistic Conditions
    Jack Gallant (UC Berkeley)

  6. Spatial hearing and the brain: Assembling binaural information to understand auditory space
    G. Christopher Stecker (Vanderbilt University)

Spring 2015

  1. Understanding face perception with fast periodic visual stimulation
    Bruno Rossion (Catholic University of Louvain)

  2. Brain plasticity underlying sight deprivation and restoration: A complex interplay
    Olivier Collignon (University of Trento)

  3. Understanding kids who don’t talk: Using EEG to measure language in minimally verbal children with ASD
    Charlotte DiStefano (UCLA)

  4. Endogenous RNAi and behavior in C. elegans
    Noelle L'Etoile (UCSF)

  5. New Tools for Real-time Imaging of Single Live Cells
    Nancy Xu (Old Dominion University)

  6. Auditory perception and cortical plasticity after long-term blindness
    Libby Huber (University of Washington)

  7. At face value: An introduction to fast periodic visual stimulation
    Talia Retter (Catholic University of Louvain)

  8. Introspections about Visual Sensory Memory During the Classic Sperling Iconic Memory Task
    Lara Krisst (San Francisco State)

  9. The times of their lives: Developmental and circadian timing in C. elegans
    Martha Merrow (Ludwig Maximilians University Munich)

  10. Attentional gain versus efficient selection
    John Serences (UC San Diego)

  11. Changing what you see changes how you see: Analyzing the plasticity of broadband orientation perception
    April Schweinhart (University of Louisville)

Fall 2014

  1. Optical deconstruction of fully-assembled biological systems
    Karl Deisseroth (Stanford)

  2. Cuttlefish Camouflage
    Charlie Chubb (UC Irvine)

  3. Cultural Neuroscience: Current Evidence and Future Prospect
    Shinobu Kitayama (University of Michigan)

  4. Human echolocation: How the blind use tongue-clicks to navigate the world
    Mel Goodale (The Brain and Mind Institute) and Brian Bushway (World Access for the Blind)

  5. Chunking of visual features in space and time: Behavioral and neuronal mechanisms
    Peter Tse (Dartmouth)

  6. Building a Vision: Shared Multimodal Pediatric fNIRS Brain Imaging Facility at the University of Michigan
    Ioulia Kovelman (University of Michigan)

  7. Using the worm to catch Z's: somnogen discovery in C. elegans
    David Raizen (University of Pennsylvania)

Fall 2013

  1. Introduction to Functional Near-Infrared Spectroscopy (fNIRS)
    Theodore Huppert (University of Pittsburgh)

  2. Illuminating the Mind: Applications and Challenges for fNIRS

  3. Understanding Migraine: Genetics, Epigenetics and Receptor Sensitivity
    John Rothrock (Renown Institute for Neurosciences)

  4. Cell cycle genes repurposed as sleep factors
    Dragana Rogulja (Harvard Medical School)

  5. HD-EEG Analysis Workshop
    Alison Harris (Claremont McKenna College)

  6. Event-related brain dynamics of value and decision-making

Spring 2016

Shrikant Bharadwaj

Impact of Distorted Optics on Spatial and Depth Vision - Lessons from Human Disease Models
Shrikant Bharadwaj, LV Prasad Eye Institute (Optometry and Vision Sciences)
May 6 • 12:30 pm • Reynolds School of Journalism 101

The process of "seeing" involves the capture of light photons by the anatomical substrate called the "eye" and the processing of these photons by a black box called the "brain". The output of this black box manifests itself as perception, including form, depth, motion and color, and as motor actions, including eye movements of various sorts. As vision scientists, our goal is to understand the functioning of the black box by systematically studying perception and motor action in response to known experimental manipulations. As clinicians, we use the output of the black box as a measure of how well the individual is seeing and interacting with the world around them. Given that the "eye" forms the substrate for perception by the "brain", it is conceivable that properties of the "eye" - optics, in this case - will have a significant impact on the quality of perception. The clinic presents several interesting scenarios where the optics of the eye are distorted but their impact on perception has not yet been systematically studied. In my talk, I will use two examples to illustrate this issue. In both cases, the eye is heavily distorted, and there may also be significant interocular differences in distortion depending on disease presentation. The focus of my presentation will be the impact these distortions have on spatial and depth perception, and how modifying the optics with rigid contact lenses changes perception.


Megan Tillman

Scotopic Vision and Aging
Megan Tillman, UC Davis (Neuroscience)
May 4 • 3:00 pm • Reynolds School of Journalism 101

Poor night vision is a common complaint amongst the elderly; however, the cause of this impairment is not straightforward. Many factors are thought to contribute to the age-related decline in scotopic sensitivity, including reduced pupil size, increased optical density of the ocular media, rod photoreceptor death, and delayed photopigment kinetics. The goal of the current research was to measure the retinal activity of normal younger and older adults using the electroretinogram (ERG) while controlling for optical changes (i.e., pupil size and ocular media density), so as to isolate a neural contribution to the age-related scotopic vision loss. Both full-field and multifocal ERGs were recorded in order to characterize the global and topographical changes, respectively, in the rod-mediated retina of older adults.


Patrick Emery

Synchronization of Circadian Clocks to Daily Environmental Cycles
Patrick Emery, University of Massachusetts Medical School (Neurobiology)
April 21 • 4:00 pm • Davidson Math & Science 105

Circadian clocks play the critical role of aligning most bodily functions - from basic metabolism to complex behaviors - with the time of day. These timekeepers need to be synchronized with the environment to be helpful. Therefore, they are able to respond to multiple inputs, such as light and temperature. Interestingly, in Drosophila, these two environmental cues can be detected in a cell-autonomous manner. The seminar will focus on these cell-autonomous photic and thermal sensing mechanisms, and how they converge on a single pacemaker protein, TIMELESS, to entrain circadian clocks.



Robert Shapley

Roles of Cortical Single- and Double-Opponent Cells in Color Vision
Robert Shapley, New York University (Center for Neural Science)
April 15 • 3:00 pm • William J. Raggio Building 2030

Color and form interact in visual perception. We will consider the neural mechanisms in the visual cortex that are the basis for color-form interactions.


Ione Fine

Pulse Trains to Percepts: The Challenge of Creating a Perceptually Intelligible World with Sight Recovery Techniques
Ione Fine, University of Washington (Psychology)
April 8 • 11:30 am • Reynolds School of Journalism 101

An extraordinary variety of sight recovery therapies are either about to begin clinical trials, have begun clinical trials, or are currently being implanted in patients. However, as yet we have little insight into the perceptual experience likely to be produced by these implants. This review focuses on methodologies, such as optogenetics, small molecule photoswitches and electrical prostheses, which use artificial stimulation of the retina to elicit percepts. For each of these technologies, the interplay between the stimulating technology and the underlying neurophysiology is likely to result in distortions of the perceptual experience. Here, we simulate some of these potential distortions and discuss how they might be minimized either through changes in the encoding model or through cortical plasticity.


Vicki Volbrecht

Color Vision in the Peripheral Retina
Vicki Volbrecht, Colorado State University (Psychology)
April 1 • 3:00 pm • William J. Raggio Building 2030

The study of peripheral color vision presents challenges due to the presence of both rods and cones in the peripheral retina. As many studies have shown, color perception in the peripheral retina differs from color perception in the fovea; and color perception also varies across the peripheral retina with retinal eccentricity and location. These differences are not surprising due to differences in the retinal mosaic with retinal eccentricity and location. Despite these differences, though, when viewing objects in everyday life that cover both the foveal and peripheral retina, we perceive a uniform color percept. Similarly, when an object is viewed binocularly along the horizontal meridian in the peripheral retina, the color percept is uniform even though the object falls on the temporal retina of one eye and the nasal retina of the other eye; viewed monocularly, the color of the object may differ if it falls on the temporal or nasal retina. Recently, our laboratory has been investigating some of these issues as it relates to peripheral color vision and uniform color percepts across the differing retinal mosaic.


Benoit Bruneau

Transcriptional Regulation of Heart Development and Chromatin Structure
Benoit Bruneau, UCSF (Gladstone Institute of Cardiovascular Disease)
March 10 • 4:00 pm • William J. Raggio Building 3005

Complex networks of transcription factors regulate cardiac cell fate and morphogenesis, and dominant mutations in transcription factor genes lead to most instances of inherited congenital heart defects (CHDs). The mechanisms underlying CHDs that result from these mutations are not known, but regulation of gene expression within a relatively narrow developmental window is clearly essential for normal cardiac morphogenesis. We have detailed the interactions between CHD-associated transcription factors, their interdependence in regulating cardiac gene expression and morphogenesis, and their function in establishing early cardiac lineage boundaries that are disrupted in CHD. We have also delineated an essential role for CTCF in regulating genome-wide three-dimensional chromatin organization.


Stacey Tovino

Health Law Implications of Advances in Neuroscience, Including Neuroimaging
Stacey Tovino, University of Nevada, Las Vegas (William S. Boyd School of Law)
March 4 • 3:00 pm • Mathewson-IGT Knowledge Center 124

Within the overlapping fields of neurolaw and neuroethics, scholars have given significant attention to the implications of advances in neuroscience for issues in criminal law, criminal procedure, constitutional law, law and religion, tort law, evidence law, confidentiality and privacy law, protection of human subjects, and even the regulation of neuroscience-based technologies. Less attention has been paid, however, to the implications of advances in neuroscience for more traditional civil and regulatory health law issues. In this presentation, I will examine the ways in which neuroscience impacts four different areas within civil and regulatory health law, including mental health parity law and mandatory mental health and substance use disorder law, public and private disability benefit law, disability discrimination law, and professional discipline. In some areas, especially mental health parity law and mandatory mental health and substance use disorder benefit law, advances in neuroscience have positively impacted health insurance coverage. In other areas, including disability discrimination law, the impact has not been as significant.


Steven Shevell

Perceptual Resolution of Color with Ambiguous Chromatic Neural Representations
Steven Shevell, University of Chicago (Psychology)
Feb 26 • 11:30 am • Reynolds School of Journalism 101

Our ability to see in the natural world depends on the neural representations of objects. Signals sent from the eye to the brain are the basis for what we see, but these signals must be transformed from the image-based representation of light in the eye to an object-based representation of edges and surfaces. A challenge for understanding this transformation is the ambiguous nature of the image-based representation from the eye. Textbook examples demonstrate this ambiguity using a constant retinal image that causes fluctuation between two different bi-stable percepts (as in the face-or-vase illusion, or a Necker cube that switches between two orientations). Bi-stable colors also can be experienced with ambiguous chromatic neural representations. Recent experiments (1) generate ambiguous chromatic neural representations that result in perceptual bi-stability alternating between two colors, (2) reveal that two or more distinct objects in view, each with its own ambiguous chromatic representation, often have the same color, which shows that grouping is a key aspect of resolving chromatic ambiguity, and (3) show that grouping survives even with unequal temporal properties among the separate ambiguous representations, as predicted by a model of binocularly integrated visual competition.


Alice O'Toole

Understanding Person Recognition: Psychological, Computational, & Neural Perspectives
Alice O'Toole, University of Texas at Dallas (School of Behavioral and Brain Sciences)
Feb 19 • 11:30 am • Reynolds School of Journalism 101

Over the past decade, face recognition algorithms have shown impressive gains in performance, operating under increasingly unconstrained imaging conditions. It is now commonplace to benchmark the performance of face recognition algorithms against humans and to find conditions under which the machines perform more accurately than humans. I will present a synopsis of human-machine comparisons that we have conducted over the past decade, in conjunction with U.S. Government-sponsored competitions for computer-based face recognition systems. From these comparisons, we have learned much about human face recognition, and even more about person recognition. These experiments have led us to examine the neural responses in face- and body-selective cortical areas during person recognition in natural viewing conditions. I will describe the neuroimaging findings and conclude that human expertise for "face recognition" is better understood in the context of the whole person in motion, where the body and gait provide valuable identity information that supplements the face in poor viewing conditions.


Delwin Lindsey

Color Naming, Color Communication and the Evolution of Basic Color Terms
Delwin Lindsey, Ohio State University (Psychology)
Feb 19 • 12:30 pm • Reynolds School of Journalism 101

The study of the language of color is implicitly based on the existence of a shared mental representation of color within a culture. Berlin & Kay (1969) proposed that the great cross-cultural diversity in color naming occurs because different languages are at different stages along a constrained trajectory of color term evolution. However, most pre-industrial societies show striking individual differences in color naming (Lindsey & Brown, 2006, 2009). We argue that within-language diversity is not entirely lexical noise. Rather, it suggests a fundamental mechanism for color lexicon change. First, the diverse color categories observed within one society, including some that do not conform to classical universal categories, are often similar to those seen in people living in distant societies, on different continents, and speaking completely unrelated languages. Second, within-culture consensus is often low, either due to synonymy or to variation in the number and/or structure of color categories. Next, we introduce an information-theoretic analysis based on mutual information, and analyze within-culture communication efficiency across cultures. Color communication in Hadzane, Somali, and English provides insight into the structure of the lexical signals and noise in world languages (Lindsey et al., 2015). These three lines of evidence suggest a new view of color term evolution. We argue that modern lexicons evolved, under the guidance of universal perceptual constraints, from initially sparse (Levinson, 2000), distributed representations that mediate color communication poorly, to more complete representations with high-consensus color naming systems capable of mediating better color communication within the language community.
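The communication-efficiency analysis mentioned above rests on the mutual information between the color terms speakers produce and the colors they are shown. A minimal sketch of that quantity, computed from toy naming data (the terms, chip labels, and counts below are invented for illustration, not drawn from the Hadzane, Somali, or English datasets):

```python
import math
from collections import Counter

def mutual_information(pairs):
    """Mutual information I(term; chip) in bits from (term, chip) pairs."""
    n = len(pairs)
    p_xy = Counter(pairs)                 # joint counts
    p_x = Counter(t for t, _ in pairs)    # marginal counts over terms
    p_y = Counter(c for _, c in pairs)    # marginal counts over chips
    mi = 0.0
    for (t, c), nxy in p_xy.items():
        pxy = nxy / n
        mi += pxy * math.log2(pxy * n * n / (p_x[t] * p_y[c]))
    return mi

# Toy naming data: each pair is (color term used, chip shown).
# A lexicon that maps chips to terms consistently carries more information.
consistent = [("red", 1), ("red", 1), ("blue", 2), ("blue", 2)]
noisy = [("red", 1), ("blue", 1), ("red", 2), ("blue", 2)]
print(mutual_information(consistent))  # 1.0 bit
print(mutual_information(noisy))       # 0.0 bits
```

In this toy case a consistent lexicon transmits one bit per naming act, while a lexicon whose terms are uncorrelated with the chips transmits none, which is the sense in which sparse or low-consensus lexicons mediate color communication poorly.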


Angela Brown

Critical Immaturities Limiting Infant Visual Sensitivity
Angela Brown, Ohio State University (Optometry)
Feb 19 • 1:00 pm • Reynolds School of Journalism 101

The vision of the human infant is remarkably immature: visual sensitivity to light is low, contrast sensitivity is poor, visual acuity is poor, color vision is poor, vernier acuity is poor, and stereopsis is probably not possible until the infant is several months old. The visual system of the human infant is known to be biologically immature as well: the photoreceptors, especially the foveal cones, are morphologically immature, and myelination of the ascending visual pathway is not complete at birth. Also, the infant is cognitively immature; for example, the infant's attention span is short. In this talk, I will unite these immaturities into a single picture of the infant visual system: the main critical immaturity that limits infant visual performance on these psychophysical tasks is a large amount of contrast-like noise that is added linearly to the visual signal, after the sites of visual light adaptation but before the sites of visual contrast adaptation, and likely in the retina or ascending visual pathway.


Fall 2015

Steve Luck

Neural Mechanisms of Distractor Suppression
Steve Luck (UC Davis, Center for Mind & Brain)
December 10 • 12:00 pm • Joe Crowley Student Union Theater

The natural visual input is enormously complex, and mechanisms of attention are used to focus neural processing on a subset of the input at any given time. But how does the brain decide which inputs to process and which to ignore? Some researchers have proposed that bottom-up salience is initially used to control attention, with top-down control emerging gradually over time. Others have proposed that top-down, prefrontal control mechanisms are completely responsible for the guidance of attention, with no role for bottom-up salience. In this talk, I will describe recent electrophysiological and psychophysical studies that support a hybrid theory, in which bottom-up salience signals are present but can be actively suppressed by a specialized neural mechanism before they can capture attention. This same mechanism also appears to be used to terminate the orienting of attention to an object after perceptual processing of that object is complete.


Joanna Chiu

Interplay between posttranslational modifications regulates the animal circadian clock
Joanna Chiu (UC Davis, Department of Entomology and Nematology)
December 10 • 3:00 pm • Davidson Math and Science Center 103

Circadian clocks regulate molecular oscillations that manifest into physiological and behavioral rhythms in all kingdoms of life. A long-term goal of my laboratory is to dissect the molecular network and cellular mechanisms that control the circadian oscillator in animals, and investigate how this molecular oscillator interacts with the environment and cellular metabolism to drive rhythms of physiology and behavior. Given the similarities in design principle of circadian oscillators across kingdoms, the knowledge gained from studies using Drosophila melanogaster as a model will lead to a better universal understanding of circadian oscillator function and properties. In this presentation, I will discuss the contribution of protein posttranslational modifications (PTMs) in regulating circadian rhythm by focusing on analyzing PTMs of key transcription factors such as PERIOD (PER), a key biochemical timer of clockwork. My laboratory has recently optimized the PTM profiling of circadian proteins in vivo using affinity purification and mass spectrometry. This breakthrough allows us to follow the temporal multi-site PTM program of PER and other clock proteins in vivo under physiological conditions throughout the circadian day in a high-throughput and quantitative manner, and sets the stage for understanding how the PTM programs of clock proteins, and hence clock function, are modulated by genetic, physiological, and environmental factors.


Pablo de Gracia

Optimizing monovision and multifocal corrections
Pablo de Gracia (Barrow Neurological Institute)
December 4 • 11:30 am • Reynolds School of Journalism 101

In this talk we will explain how multiple-zone multifocal designs can be used to further optimize the optical performance of modified monovision corrections. Combinations of bifocal and trifocal designs lead to higher values of optical quality (5%) and through-focus performance (35%) than designs with spherical aberration. For any given amount of optical disparity that the presbyopic patient feels comfortable with, there is a combination of a monofocal and a bi/trifocal design that offers better optical performance than a design with spherical aberration. Conventional monovision can be improved by using the bifocal and trifocal designs that can be implemented in laser in situ keratomileusis (LASIK) equipment and will soon be available to the practitioner in the form of new multifocal contact and intraocular lenses.


David Peterzell

Discovering Sensory Processes Using Individual Differences: A Review and Factor Analytic Manifesto
David Peterzell (John F. Kennedy University, College of Graduate and Professional Studies - Clinical Psychology)
November 20 • 11:30 am • Reynolds School of Journalism 101

In the last century, many vision scientists have considered individual variability in data to be "error," thus overlooking a trove of systematic variability that reveals sensory, cognitive, neural and genetic processes. This "manifesto" coincides with both long-neglected and recent prescriptions of a covariance-based methodology for vision (Thurstone, 1944; Pickford, 1951; Peterzell, Werner & Kaplan, 1993; Peterzell & Teller, 1996; Kosslyn et al. 2002; Wilmer, 2008; Wilmer et al. 2012; de-Wit & Wagemans, 2015). But the emphasis here is on using small samples to both discover and confirm characteristics of visual processes, and on reanalyzing archival data. This presentation reviews 220 years of sporadic and often neglected research on normal individual variability in vision (including 25+ years of my own research). It reviews how others and I have harvested covariance to a) develop computational models of structures and processes underlying human and animal vision, b) analyze and delineate the developing visual system, c) compare typical and abnormal visual systems, d) relate visual behavior, anatomy, physiology and molecular biology, e) interrelate sensory processes and cognitive performance, and f) develop efficient (non-redundant) tests. Some examples are from my factor-analytic research on spatiotemporal, chromatic, stereoscopic, and attentional processing.
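The factor-analytic logic behind harvesting covariance is easy to demonstrate: if individual differences across a battery of measures are driven by a small number of underlying processes, the inter-observer correlation matrix has correspondingly few large eigenvalues. A sketch on simulated data (the observer count, number of spatial frequencies, and factor loadings below are invented for illustration, not taken from the studies cited):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated log contrast-sensitivity data: 100 observers x 6 spatial
# frequencies, generated from two latent factors (low- and high-SF
# sensitivity) plus measurement noise. Loadings are made up for illustration.
low_f = rng.normal(size=(100, 1))
high_f = rng.normal(size=(100, 1))
loadings_low = np.array([[1.0, 0.9, 0.6, 0.2, 0.1, 0.0]])
loadings_high = np.array([[0.0, 0.1, 0.4, 0.8, 0.9, 1.0]])
data = low_f @ loadings_low + high_f @ loadings_high \
       + 0.3 * rng.normal(size=(100, 6))

# Factor the inter-observer correlation matrix: eigenvalues well above the
# noise floor indicate distinct underlying processes (here, two).
corr = np.corrcoef(data, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)[::-1]   # descending order
print(np.round(eigvals, 2))  # first two eigenvalues dominate
```

Note that this recovers the *number* of latent processes from a modest sample, which is the sense in which small samples can both discover and confirm characteristics of visual processes.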


Jack Gallant

Mapping, Modeling and Decoding the Human Brain Under Naturalistic Conditions
Jack Gallant (University of California, Berkeley, Helen Wills Neuroscience Institute)
November 13 • 3:00 pm • Jot Travis 100

One important goal of Psychology and Neuroscience is to understand the mental and neural basis of natural behavior. This is a challenging problem because natural behavior is difficult to parameterize and measure. Furthermore, natural behavior often involves many different perceptual, motor and cognitive systems that are distributed broadly across the brain. Over the past 10 years my laboratory has developed a new approach to functional brain mapping that recovers detailed information about the cortical maps mediating natural behavior. We first use functional MRI to measure brain activity while participants perform natural tasks such as watching movies or listening to stories. We then model brain activity using quantitative computational models derived from computational neuroscience or machine learning. Interpretation of the fit models reveals how many different kinds of sensory and cognitive information are represented in systematic maps distributed across the cerebral cortex. Our results show that even simple natural behaviors involve dozens or hundreds of distinct functional gradients and areas; that these are organized similarly in the brains of different individuals; and that top-down mechanisms such as attention can change these maps on a very short time scale. These statistical modeling tools provide powerful new methods for mapping the representation of many different perceptual and cognitive processes across the human brain, and for decoding brain activity.
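The modeling step described above can be sketched as regularized linear regression from stimulus features to voxel responses, validated on held-out data. The code below uses simulated features and responses; the dimensions, noise level, and ridge penalty are arbitrary choices for illustration, and real encoding-model pipelines add hemodynamic modeling and per-voxel regularization tuning:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated experiment: 200 timepoints of a 20-dimensional stimulus feature
# space (e.g., motion-energy or semantic features) and one voxel whose
# response is a weighted sum of the features plus noise. All numbers are
# illustrative, not from real fMRI data.
X = rng.normal(size=(200, 20))
true_w = rng.normal(size=20)
y = X @ true_w + rng.normal(scale=2.0, size=200)

# Split into training (model fitting) and held-out (validation) runs.
X_tr, X_te, y_tr, y_te = X[:150], X[150:], y[:150], y[150:]

# Ridge regression: w = (X'X + alpha*I)^-1 X'y
alpha = 10.0
w = np.linalg.solve(X_tr.T @ X_tr + alpha * np.eye(20), X_tr.T @ y_tr)

# Prediction accuracy on held-out data, the usual encoding-model criterion.
r = np.corrcoef(X_te @ w, y_te)[0, 1]
print(f"held-out correlation: {r:.2f}")
```

Fitting per voxel and then interpreting the fitted weights is what lets this approach recover which features each patch of cortex represents.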


G. Christopher Stecker

Spatial hearing and the brain: Assembling binaural information to understand auditory space
G. Christopher Stecker (Vanderbilt University School of Medicine, Department of Hearing and Speech Sciences)
September 25 • 11:00 am • Jot Travis 100

Spatial hearing by human listeners requires access to auditory spatial cues, including interaural time differences (ITD) and interaural level differences (ILD), in the sound arriving at the two ears. For real sounds, these cues are distributed across time and frequency, and often distorted in complex ways by echoes and reverberation. Nevertheless, young normal-hearing listeners are remarkably good at localizing sounds and understanding the auditory scene, even in acoustically complex environments. In this talk, we will discuss (1) how listeners weight and combine auditory spatial cues across cue type, time, and frequency; (2) how that ability relates to the consequences of reverberation, hearing loss, and hearing-aid technology on spatial hearing; and (3) what neuroimaging with fMRI can tell us about the neural mechanisms that process auditory spatial cues and represent the auditory scene.


Spring 2015

Bruno Rossion

Understanding face perception with fast periodic visual stimulation
Bruno Rossion (Catholic University of Louvain, Belgium, Psychological Sciences Research Institute)
May 26 • 1:00 pm • Reynolds School of Journalism 101

When the human brain is stimulated at a rapid periodic frequency rate, it synchronizes its activity exactly to this frequency, leading to periodic responses recorded by the electroencephalogram (EEG). In vision, periodic stimulation has been used essentially to investigate low-level processes and attention, and has been recently extended to understand high-level visual processes, in particular face perception (Rossion & Boremanse, 2011). In this presentation, I will summarize a series of studies carried out over the last few years that illustrate the strengths of this approach: the objective (i.e., exactly at the experimentally-defined frequency rate) definition of neural activity related to face perception, the very high signal-to-noise ratio, the independence from explicit behavioral responses, and the identification of perceptual integration markers. Overall, fast periodic visual stimulation is a highly valuable approach to understand the sensitivity to visual features of complex visual stimuli and their integration, in particular for individual faces, and in populations presenting a lower sensitivity of their brain responses and/or the need for rapid and objective assessment without behavioral explicit responses (e.g., infants and children, clinical populations, animals).
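The core of the frequency-tagging approach is that a response locked exactly to the stimulation frequency concentrates in a single, experimentally defined bin of the EEG amplitude spectrum, where it can be compared against neighboring noise bins. A minimal sketch on simulated data (the sampling rate, amplitudes, and neighbor-bin SNR definition are illustrative assumptions, not Rossion's exact pipeline):

```python
import numpy as np

fs, dur, f_tag = 250.0, 20.0, 6.0   # sampling rate (Hz), duration (s), tag frequency
t = np.arange(0, dur, 1 / fs)

# Simulated EEG: broadband noise plus a small response locked exactly to the
# stimulation frequency (values are illustrative, not real recordings).
rng = np.random.default_rng(2)
eeg = rng.normal(scale=1.0, size=t.size) + 0.5 * np.sin(2 * np.pi * f_tag * t)

# Amplitude spectrum; with a 20 s window the frequency resolution is 0.05 Hz,
# so the response falls in a single bin at exactly 6 Hz.
amp = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)
k = np.argmin(np.abs(freqs - f_tag))

# Signal-to-noise ratio: amplitude at the tagged bin relative to the mean of
# surrounding bins, skipping the bins immediately adjacent to the peak.
neighbors = np.r_[amp[k - 12:k - 1], amp[k + 2:k + 13]]
snr = amp[k] / neighbors.mean()
print(f"SNR at {f_tag} Hz: {snr:.1f}")
```

Because the signal is defined objectively by the stimulation frequency rather than by the participant's responses, the same analysis applies to infants, clinical populations, or animals.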


Olivier Collignon

Brain plasticity underlying sight deprivation and restoration: A complex interplay
Olivier Collignon (University of Trento, Italy, Center for Mind/Brain Sciences)
May 22 • 11:00 am • Reynolds School of Journalism 101

Neuroimaging studies involving blind individuals have the potential to shed new light on the old ‘nature versus nurture' debate on brain development: while the recruitment of occipital (visual) regions by non-visual inputs in blind individuals highlights the ability of the brain to remodel itself through experience (nurture influence), the observation of specialized cognitive modules in the reorganized occipital cortex of blind individuals, similar to those observed in the sighted, highlights the intrinsic constraints imposed on such plasticity (nature influence). In the first part of my talk, I will present novel findings demonstrating how early blindness induces a large-scale imbalance between the sensory systems involved in the processing of auditory motion.

These reorganizations in the occipital cortex of blind individuals raise crucial challenges for sight restoration. Recently, we had the unique opportunity to track the behavioral and neurophysiological changes taking place in the occipital cortex of an early and severely visually impaired patient before, as well as 1.5 and 7 months after, sight restoration. An in-depth study of this exceptional patient highlighted the dynamic nature of the occipital cortex facing visual deprivation and restoration. Finally, I will present some data demonstrating that even a short and transient period of visual deprivation (only a few weeks) during the early sensitive period of brain development leads to enduring large-scale crossmodal reorganization of the brain circuitry typically dedicated to vision, even years after visual input was restored.


Charlotte DiStefano

Understanding kids who don’t talk: Using EEG to measure language in minimally verbal children with ASD
Charlotte DiStefano (UCLA, Center for Autism Research and Treatment)
May 8 • 4:00 pm • Mathewson-IGT Knowledge Center 107

Approximately 30% of children with autism spectrum disorder (ASD) remain minimally verbal past early childhood. These children may have no language at all, or may use a small set of words and fixed phrases in limited contexts. Although very impaired in expressive language, minimally verbal children with ASD may present with significant heterogeneity in receptive language and other cognitive skills. Accurately measuring these skills presents a challenge, due to the limitations in how well these children are able to understand and comply with assessment instructions. Recently, there has been increased interest in using passive, or implicit, measures when studying such populations, since they do not require the child to make overt responses or even understand the task. One such measure is electroencephalography (EEG), which records electrical activity within the brain and provides information about processing in real time. EEG recordings can also be used to evaluate event-related potentials (ERPs), which are measurements of the brain's electrical activity in response to a specific stimulus (such as a word or a picture). We can then use this information to understand more about an individual's cognitive development, improving our ability to develop targeted interventions. We have so far collected EEG and ERP measures in minimally verbal children with ASD across a variety of domains, including resting state, visual statistical learning, face processing, word segmentation and lexical processing. These data, along with careful behavioral assessments, have led us to a greater understanding of the heterogeneity within the minimally verbal group, as well as how they differ from verbal children with ASD and typically developing children.


Noelle L'Etoile

Endogenous RNAi and behavior in C. elegans
Noelle L’Etoile (UCSF, Department of Cell and Tissue Biology)
April 30 • 4:00 pm • Ansari Business 106

My group's goal is to understand how molecules, cells, circuits and the physiology of an intact organism work together to produce learned and inherited behaviors. We combine the powerful genetics and accessible cell biology of the nematode C. elegans with its robust behaviors to approach this question. I will discuss our findings that, within the sensory neuron, small endogenous RNAs (endo-siRNAs) provide some of the plasticity of the olfactory response. The biogenesis of these small RNAs is as mysterious as their regulation by experience, and I will describe our attempts to understand each process. Within the circuit, I will touch upon how we are examining synaptic remodeling in development and in the adult animal as it adapts to novel stimuli and metabolic stress. The optical transparency of C. elegans provides a unique window into the real-time dynamics of circuits. To take advantage of this, we are developing visual reporters for simultaneous imaging of several aspects of neuronal physiology: calcium transients, pH fluctuations, cGMP and cAMP fluxes and chromatin dynamics within the entire nervous system of the living, behaving animal. I will also present some of our recent findings that may link experience to inherited behaviors.


Nancy Xu

New Tools for Real-time Imaging of Single Live Cells 
Nancy Xu (Old Dominion University, Chemistry and Biochemistry)
April 30  •  1:00 pm  •  Davidson Math and Science 105

Current technologies cannot detect, image and study multiple types of molecules in single live cells in real time with sufficient spatial and temporal resolution over an extended period of time. To better understand cellular function in real time, we have developed several new ultrasensitive nanobiotechnologies, including far-field photostable-optical-nanoscopy (PHOTON), photostable single-molecule-nanoparticle-optical-biosensors (SMNOBS) and single-nanoparticle spectroscopy, for mapping dynamic cascades of membrane transport and signal transduction pathways of single live cells in real time at single-molecule and nanometer resolution. We have demonstrated that these powerful new tools can be used to quantitatively image single molecules and study their functions in single live cells with superior temporal and spatial resolution, and to address a wide range of crucial biochemical and biomedical questions. The research results and experimental designs will be discussed in this seminar.


Libby Huber

Auditory perception and cortical plasticity after long-term blindness
Libby Huber (University of Washington, Vision and Cognition Group)
March 24 • 1:00 pm • Reynolds School of Journalism 101

Early onset blindness is associated with enhanced auditory abilities, as well as plasticity within auditory and occipital cortex. In particular, pitch discrimination is found to be superior among early-blind individuals, although the neural basis of this enhancement is unclear. In this talk, I will present recent work suggesting that blindness results in an increased representation of behaviorally relevant acoustic frequencies within both auditory and occipital cortex. Moreover, we find that individual differences in pitch discrimination performance can be predicted from the cortical data. The functional significance of group and individual level differences in frequency representation will be discussed, along with the relative importance of auditory and occipital cortical responses for acoustic frequency discrimination after long-term blindness.


Talia Retter

At face value: An introduction to fast periodic visual stimulation
Talia Retter (Catholic University of Louvain, Belgium, Psychological Sciences Research Institute)
March 12 • 1:00 pm • Reynolds School of Journalism 101

Fast periodic visual stimulation (FPVS) is a technique in which the presentation of stimuli at a constant rate elicits a neural response at that frequency, typically recorded with electroencephalography (EEG). A Fourier Transform is applied to the EEG data to objectively characterize this response at a pre-determined frequency of interest. Although this technique has traditionally been applied to study low-level vision, it has recently been extended to implicitly measure high-level processes in the field of face perception. In the Face Categorization Lab at the University of Louvain, FPVS has been used to study the individualization of facial identities (e.g., Liu-Shuang et al., 2014) and the discrimination of faces from other object categories (e.g., Rossion et al., 2015). During my time in this lab, I have run FPVS experiments examining: 1) category-selective responses to natural face and non-face images; 2) the spatio-temporal dynamics of face-selective responses; and 3) adaptation to a specific facial identity. The results of these studies will be discussed both in light of their implications for our understanding of face perception and, more generally, as examples of the richness of this methodology for understanding high-level vision in humans.
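The core FPVS analysis, Fourier-transforming the EEG and reading out the amplitude at the known stimulation frequency, can be sketched as below. The 6 Hz stimulation rate, sampling rate, and recording length here are illustrative assumptions with synthetic data, not parameters from the studies cited above:

```python
import numpy as np

def amplitude_at_frequency(signal, sfreq, target_hz):
    """Return the single-sided amplitude spectrum of a signal and the
    amplitude at a pre-determined frequency of interest."""
    n = len(signal)
    amps = np.abs(np.fft.rfft(signal)) / n * 2  # single-sided amplitudes
    freqs = np.fft.rfftfreq(n, d=1.0 / sfreq)
    idx = np.argmin(np.abs(freqs - target_hz))  # bin nearest the target
    return freqs, amps, amps[idx]

# synthetic "EEG": broadband noise plus a periodic response at 6 Hz,
# as if stimuli appeared six times per second
rng = np.random.default_rng(1)
sfreq, seconds, base_rate = 512, 20, 6.0
t = np.arange(sfreq * seconds) / sfreq
signal = 0.5 * np.sin(2 * np.pi * base_rate * t) + rng.normal(0.0, 1.0, t.size)

freqs, amps, response = amplitude_at_frequency(signal, sfreq, base_rate)
```

Because the stimulation frequency is fixed in advance, the response concentrates in a single, predictable frequency bin, which is what makes the measure objective: the 6 Hz bin here recovers an amplitude near the embedded 0.5, while neighboring bins contain only noise.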


Lara Krisst

Introspections about Visual Sensory Memory During the Classic Sperling Iconic Memory Task
Lara Krisst (San Francisco State University, Mind, Brain, & Behavior)
March 12 • 10:00 am • Reynolds School of Journalism 101

Visual sensory memory (or ‘iconic memory') is a fleeting form of memory that has been investigated with the classic Sperling (1960) iconic memory task. Sperling demonstrated that ‘more is seen than can be remembered'; that is, more information is available to observers than they can normally report. Sperling established the distinction between ‘whole report' (response to a stimulus set of 12 letters) and ‘partial report' (what subjects report when cued to a single row of letters in the set). In the whole-report condition, participants were able to report only three to five of the 12 letters presented; however, participants' high accuracy across partial-report trials revealed that, on a given trial, information about the complete stimulus set is held momentarily in a sensory store. This finding demonstrates that subjects perceive more than they are able to report. In a new variant of the paradigm, we investigated participants' trial-by-trial introspections about what they are, and are not, conscious of regarding these fleeting memories. Consistent with Sperling's findings, the data suggest that participants believe they could report, identify, or remember only a subset of items (~4 items). Further investigation with this paradigm, including examination of the neural correlates of the introspective process, may shed light on the neural correlates of visual consciousness.


Martha Merrow

The times of their lives: Developmental and circadian timing in C. elegans
Martha Merrow (Ludwig Maximilian University of Munich, Institute of Medical Psychology)
March 10 •  4:00 pm • Davidson Math and Science 102

Living organisms have developed a multitude of biological time-keeping mechanisms, from developmental to circadian (daily) clocks. Martha Merrow has been at the forefront of understanding the basic properties and molecular mechanisms of how the circadian clock synchronizes with environmental cues, from worms to yeast to fungi to humans. In addition to circadian clocks, she has been studying developmental clocks in worms and recently developed a new method to measure the timing of larval development, which could be used to measure sleep-like properties in worms. She started working on biological clocks as a Post-Doctoral Fellow at Dartmouth Medical School, and is currently a Full Professor and Teaching Chair in the Institute of Medical Psychology at the Ludwig-Maximilians-Universität in Munich, Germany. Beyond her teaching and research, Martha also works on developing scientific networks for chronobiologists and for women in science.


John Serences

Attentional gain versus efficient selection: Evidence from human electroencephalography  
John Serences (UC San Diego, Psychology)
March 5 • 4:00 pm • Ansari Business 106

Selective attention has been postulated to speed perceptual decision-making via one of three mechanisms: enhancing early sensory responses, reducing sensory noise, and improving the efficiency with which sensory information is read-out by sensorimotor and decision mechanisms (efficient selection). Here we use a combination of visual psychophysics and electroencephalography (EEG) to test these competing accounts. We show that focused attention primarily enhances the response gain of early and late stimulus-evoked potentials that peak in the contralateral posterior-occipital and central posterior electrodes, respectively. In contrast with previous reports that used fMRI, a simple model demonstrates that response enhancement alone can sufficiently account for attention-induced changes in behavior even in the absence of efficient selection. These results suggest that spatial attention facilitates perceptual decision-making primarily by increasing the response gain of stimulus-evoked responses.


April Schweinhart

Changing what you see changes how you see: Analyzing the plasticity of broadband orientation perception
April Schweinhart (University of Louisville, Psychological and Brain Sciences)
February 25 • 11:00 am • Reynolds School of Journalism 101

Schweinhart's work using augmented reality shows that changing the way certain features are presented in an observer's environment triggers predictable changes in subsequent perception. Traditionally, vision science examined the perception of stimulus features in isolation. More recently, researchers have begun to investigate the perception of such features in context. Consider, for example, the perception of oriented structure: incoming visual signals are processed by neurons tuned in both size and orientation at the earliest cortical levels of the visual hierarchy. Interestingly, the distribution of orientations in the environment is anti-correlated with human visual perception. Though this correspondence between typical natural scene content and visual processing is compelling, until recently the relationships between visual encoding and natural scene regularities were necessarily limited to being static and correlational. This work takes into account the recent experience of the observer to determine the plasticity of perceptual biases related to environmental regularities.


Fall 2014

Optical deconstruction of fully-assembled biological systems
Karl Deisseroth (Stanford, Bioengineering)
October 23 • 7:00 pm • Davidson Math and Science 110

The journal Nature dubbed Karl Deisseroth "Method Man" for two groundbreaking techniques developed in his lab, Optogenetics and CLARITY. Both are game changers in the neuroscience world, revolutionizing the way scientists can study the brain. Optogenetics gives scientists the ability to turn neural activity on and off with light-driven switches. CLARITY turns a brain into a clear, Jell-O-like structure with all neurons intact, giving scientists an unprecedented view of the brain's molecules and cells. These tools allow neuroscientists to address fundamental questions about dynamic changes in brain structure and function. Deisseroth's group applies these strategies to better understand the biological basis for neurological and psychiatric diseases, and how the brain responds to learning, injury, and seizures.
Deisseroth serves on President Obama's BRAIN Initiative advisory committee. He is a member of the National Academy of Sciences and the recipient of dozens of prestigious national and international science awards.


Cuttlefish Camouflage 
Charlie Chubb (UC Irvine, Cognitive Sciences)
August 29 • 3:00 pm • Ansari Business 106

Cephalopods (squid, octopus and cuttlefish) have exceptional, neurophysiologically controlled skin that can rapidly change color, enabling them to achieve dynamic crypsis in a wide range of habitats. Chubb shows the range of camouflage patterns that cuttlefish (Sepia officinalis) produce and discusses some of the remarkably subtle strategies these patterns use to elude detection. The animals' patterning responses are controlled by the visual input they receive and are sensitive to the visual granularity of the stimulus substrate relative to their own body size.
A deep mystery remains unresolved: cuttlefish have skin that enables them to produce four dimensions of chromatic variation which they use to achieve masterful matches to the colors of substrates in their natural environment. However, cuttlefish have only a single retinal photopigment; in other words, they are colorblind.


Spring 2014

Shinobu Kitayama

Cultural Neuroscience: Current Evidence and Future Prospect  
Shinobu Kitayama (University of Michigan, Psychology) 
April 18 • 3:00 pm • Mathewson-IGT Knowledge Center 124 (Wells Fargo Auditorium)

Cultural neuroscience is an emerging field that examines the interdependencies among culture, mind, and the brain. By investigating brain plasticity in varying social and ecological contexts, it seeks to overcome the nature-nurture dichotomy. In the present talk, after a brief overview of the field, I will illustrate its potential by reviewing evidence for cultural variations in brain mechanisms underlying cognition (i.e., holistic attention), emotion (i.e., emotion regulation), and motivation (i.e., self-serving bias). Directions for future research will be discussed.


Mel Goodale and Brian Bushway

Human echolocation: How the blind use tongue-clicks to navigate the world
Mel Goodale (University of Western Ontario, the Brain and Mind Institute) and Brian Bushway (World Access for the Blind)
March 13 • 7:00 pm • Davidson Math and Science 110

"I can hear a building over there." Everybody has heard about echolocation in bats and dolphins. These creatures emit bursts of sound and listen to the echoes that bounce back to detect objects in their environment. What is less well known is that people can echolocate, too. In fact, there are blind people who have learned to make clicks with their mouth and tongue, and to use the returning echoes from those clicks to sense their surroundings. Some of these people are so adept at echolocation that they can use this skill to go mountain biking, play basketball, or navigate through unfamiliar buildings. In this talk, we will learn about several of these echolocators, some of whom train other blind people to use this amazing skill. Testing in our laboratory has revealed that, by listening to the echoes, blind echolocation experts can sense remarkably small differences in the location of potential obstacles. They can also perceive the size and shape of objects, and even the material properties of those objects, just by listening to the reflected echoes from mouth clicks. It is clear that echolocation enables blind people to do things that are otherwise thought to be impossible without vision, providing them with a high degree of independence in their daily lives. Using neuroimaging (functional magnetic resonance imaging, or fMRI), we have also shown that the echoes activate brain regions in the blind echolocators that would normally support vision in the sighted brain. In contrast, the brain areas that process auditory information are not particularly interested in these faint echoes. This work is shedding new light on just how plastic the human brain really is.

About Melvyn Goodale: Melvyn Goodale, one of the world's leading visual neuroscientists, is best known for his research on the human brain as it performs different kinds of visual tasks. Goodale has led much neuroimaging and psychophysical research that has had an enormous influence in the life sciences and medicine. His "two-visual-systems proposal" is now part of almost every textbook in vision, cognitive neuroscience, and psychology. He is a member of the Royal Society, joining the likes of Sir Isaac Newton, Charles Darwin, Albert Einstein and Stephen Hawking.

About Brian Bushway: Brian is the program manager for World Access for the Blind, a non-profit organization which teaches mobility and sensory awareness orientation. He acts as a mobility coach for the blind and a teacher of sighted mobility instructors on the use of echolocation. He designs and implements perception development plans for each client. When not teaching, Brian offers technical and emotional advice to families. He lost his sight at 14.



Peter Tse

Chunking of visual features in space and time: Behavioral and neuronal mechanisms 
Peter Tse (Dartmouth, Psychological and Brain Sciences)
March 10 • 4:00 pm • Davidson Math and Science 104

We can learn arbitrary feature conjunctions when the to-be-combined features are present at the same time (Wang et al., 1994). This learning is underpinned by increased activity in visual cortex (Frank et al., 2013). I will discuss data that suggest that this kind of feature-conjunction perceptual learning requires attention, is not strongly retinotopic, and can even link features that do not appear at the same time.


Ioulia Kovelman

Building a Vision: Shared Multimodal Pediatric fNIRS Brain Imaging Facility at the University of Michigan 
Ioulia Kovelman (University of Michigan, Psychology)
February 18 • 4:00 pm • Mathewson-IGT Knowledge Center 124 (Wells Fargo Auditorium)

Kovelman's research interests are in language and reading development in monolingual and bilingual infants, children, and adults. Her work spans both typical and atypical language and reading development, using a variety of behavioral and brain imaging methods (fMRI, fNIRS).


David Raizen

Using the worm to catch Z's: somnogen discovery in C. elegans 
David Raizen (University of Pennsylvania, Neurology)
February 7 • 11:00 am • Davidson Math and Science 104 

Quiescent behavioral states are universal in the animal world, the most famous and mysterious of these being sleep. Despite the fact that we spend one third of our lives sleeping, and despite the fact that all animals appear to sleep, the core function of sleep remains a mystery. In addition, the molecular basis underlying sleep/wake regulation is poorly understood. Raizen uses C. elegans as a model system to address these questions. C. elegans offers many experimental advantages, including powerful genetic tools as well as a simple neuroanatomy. Growth of C. elegans from an embryo to an adult is punctuated by four molts, during which the animal secretes a new cuticle and sheds its old one. Prior to each molt the worm enters a quiescent behavioral state called lethargus. Lethargus has several similarities to sleep, including rapid reversibility in response to strong stimulation, increased sensory arousal threshold, and homeostasis, which is manifested by an increased depth of sleep following a period of deprivation. Similarity to sleep at the molecular genetic level is demonstrated by the identification of signaling pathways that regulate C. elegans lethargus in a similar fashion to their regulation of sleep in mammals and arthropods. For example, cAMP signaling promotes wakefulness and epidermal growth factor signaling promotes sleep in C. elegans and other organisms. The Raizen lab has identified new regulators of sleep-like behavior in C. elegans and is currently studying how these regulators function. By studying the purpose and genetic regulation of nematode lethargus, they hope to identify additional novel sleep regulators and to gain insight into why sleep and sleep-like states evolved, a central biological mystery.


Fall 2013

Theodore Huppert

Introduction to Functional Near-Infrared Spectroscopy (fNIRS)
Theodore Huppert (University of Pittsburgh, Radiology)
December 10 • 2:30 pm • Ansari Business 107

In this talk, Dr. Huppert will present the background theory behind fNIRS brain imaging. He will also introduce the basic concepts of data collection, analysis and interpretation of fNIRS.

Illuminating the Mind: Applications and Challenges for fNIRS
December 11 • 2:30 pm • Ansari Business 107

Functional near-infrared spectroscopy (fNIRS) is a non-invasive brain imaging technique that uses light to record changes in cerebral blood flow. This technology has several unique advantages, including low cost, portability, and versatility, which have opened new areas of brain imaging research. In this talk, Dr. Huppert will present an overview of some of the novel applications of fNIRS technology being conducted at the University of Pittsburgh, including brain imaging of balance and mobility disorders, child and infant psychology, and multimodal neuroimaging. He will also discuss some of the unique challenges of using fNIRS in "real-world" brain imaging experiments.


John Rothrock

Understanding Migraine: Genetics, Epigenetics and Receptor Sensitivity
John Rothrock (Renown Health Institute for Neurosciences)
November 5 • 2:30 pm • Center for Molecular Medicine 111

Despite its high prevalence, migraine remains poorly understood by the lay public and health care providers (HCPs) alike. Many migraine patients who seek medical attention are disappointed by the experience, and many HCPs feel at a loss when confronted by treatment-refractory patients. That migraine can be difficult to treat is hardly surprising. This common, easily recognized and clinically stereotyped disorder is polygenetic in origin, and the familiar symptoms of migraine consequently may be generated by a variety of biologic pathways. To complicate matters further, the clinical expression of migraine's genetic predisposition may be influenced by a number of factors, epigenetic and otherwise. Finally, migraine is comorbid with conditions and diseases that may complicate management of the headache disorder; these comorbidities include depression, bipolar disorder, anxiety disorders, sleep disorders and epilepsy. Despite this, a better understanding of migraine's biogenesis has led to the development of new therapies relatively specific to the disorder and unprecedented in their efficacy.


Dragana Rogulja

Cell cycle genes repurposed as sleep factors
Dragana Rogulja (Harvard Medical School, Neurobiology)
October 18 • 11:00 am • Davidson Math and Science 104

A remarkable change occurs in our brains each night, making us lose the essence of who we are for hours at a time: we fall asleep. A process so familiar to us, sleep nevertheless remains among the most mysterious phenomena in biology. The goal of our work is to understand how the brain reversibly switches between waking and sleep states, and why we need to sleep in the first place. To address these questions, Rogulja uses Drosophila melanogaster as a model system, because sleep in the fly is remarkably similar to mammalian sleep. Flies have consolidated periods of activity and sleep; arousal threshold is elevated in sleeping flies; the brain's electrical activity differs between sleeping and awake flies. As in people, both circadian and homeostatic mechanisms provide input into the regulation of fly sleep: flies are normally active during the day and quiescent at night, but if deprived of sleep will show a consequent increase in "rebound" sleep, regardless of the time of day.


Alison Harris

HD-EEG Analysis Workshop
Alison Harris (Claremont McKenna College, Psychology)
October 18 • 10:00 am • Neuroimaging Core, Mack Social Science 412

Event-related brain dynamics of value and decision-making
October 18 • 3:30 pm • Ansari Business 101

From selecting a snack in the supermarket to allocating financial resources, our lives are filled with choices. Emerging research from human neuroimaging suggests that a common neural circuitry underlies such disparate decisions: in particular, the ventromedial prefrontal cortex (vmPFC) has been associated with subjective value across a wide variety of tasks and goods. However, due to the inherent limitations of hemodynamic measures, comparatively little is known about when and how the vmPFC computes value signals across the time course of decision. Harris will discuss research exploiting the high temporal resolution and whole-brain coverage of event-related potentials (ERP) in order to examine the dynamic construction of value signals. Combined with advanced statistical and source reconstruction techniques, this novel approach reveals that neural activity correlated with subjective preference emerges approximately 400 ms after stimulus onset, localized to regions including vmPFC. Reflecting the integration of sensory attribute information, activity in this time window is also modulated by top-down goals (e.g., weight loss) through connections with dorsolateral prefrontal cortex. Together these results highlight the utility of ERP in understanding the cortical dynamics of decision-making, providing a fuller picture of how neural signals of subjective value emerge in the time leading up to choice.

Take the next step...