Past NeuroLecture Speakers

Fall 2018

  1. Modeling Color Appearance
    Timothy Satalich, University of California, Irvine

  2. Color Experience in Observers with Potential Tetrachromat Photopigment Opsin Genotypes
    Kimberly Jameson, University of California, Irvine

  3. Cortical correlates of amblyopia: What information is lost and why?
    Lynne Kiorpes, New York University

  4. What can aftereffects reveal about the functional architecture of human gaze perception?
    Colin Clifford, University of New South Wales

Summer 2018

  1. Harnessing the power of ‘visual' art: Memory-drawing training drives rapid neuroplasticity & enhances cognition
    Lora Likova, Smith-Kettlewell Eye Research Institute

Spring 2018

  1. Adaptive Optics Microstimulation: Investigating the Contribution of Single Cone Photoreceptors to Visual Perception
    Alexandra Boehm, UC Berkeley

  2. Change Detection in Complex Auditory Scenes
    Joel Snyder, University of Nevada, Las Vegas

  3. A Chorus of Clocks: Avian Circadian Organization and the Daily and Seasonal Control of Birdsong
    Vincent Cassone, Texas A&M University

  4. Understanding the Remarkable Accuracy of Colour Perception
    Lorne Whitehead, University of British Columbia

  5. Studies of Sex, Eyes, and Vision: Importance of Estrogen, SWS Cones, and Lite Beer
    Alvin Eisner, Portland State University

  6. Duck! How Your Brain Works Out 3D Motion from 2D (Retinal) Sensory Signals
    Lawrence K. Cormack, University of Texas at Austin

  7. How the Olfactory Experience Sculpts the Olfactory System
    Stephen Santoro, University of Wyoming (Zoology & Physiology)

  8. Computational and Cortical Modeling of Lightness and Color Perception
    Michael E. Rudd, University of Washington (Physiology & Biophysics)

  9. Reasoning with Uncertainty the Bayesian way with examples in Cognitive Modeling in R and Stan
    A. Grant Schissler, University of Nevada, Reno (Mathematics & Statistics)

Fall 2017

  1. Top-down and Bottom-up Modulation of Neural Coding in the Somatosensory Thalamus
    Qi Wang, Columbia University

  2. Adaptation to the Variability of Visual Information
    John Maule, University of Sussex

  3. Spatial Vision at the Scale of the Cone Photoreceptor Mosaic
    David Brainard, University of Pennsylvania

  4. Action Selection According to Ideomotor Theory: Basic Principles and an Application to Multitasking
    Markus Janczyk, Eberhard Karls University of Tübingen

  5. Multidimensional Estimation of Color Matching Functions
    Eric Walowit

  6. Symposium - 9th Annual, Sierra Nevada Chapter of the Society for Neuroscience
    October 26 • 12:00 pm-5:00pm • Pennington Health and Science 102

  7. Clark Elliott

Summer 2017

  1. The Adult Face-Diet Revealed: Impact of Daily Face Exposure on the Perception of Faces
    Ipek Oruc, University of British Columbia

  2. Selectivity, Hyper-selectivity, and a General Model of the Non-linearities of Neurons in the Visual Pathway
    David Field, Cornell University

  3. Development of Neural Mechanisms Underlying Face Recognition Ability
    Vaidehi S. Natu, Stanford University

Spring 2017

  1. Individual Differences in Attention Filters During Perceptual Decision Making
    Ramesh Srinivasan, UC Irvine

  2. The Lawful Imprecision of Human Surface Tilt Estimation in Natural Scenes
    Johannes Burge, University of Pennsylvania

  3. The Neural Circuitry of Skilled Reading
    Jason Yeatman, University of Washington

  4. New Codes Within Genetic Codons: Codon Usage Determines Protein Structure and Gene Expression Levels
    Yi Liu, University of Texas Southwestern Medical Center

  5. Brown is Not Just Dark Yellow
    Steven Buck, University of Washington

  6. Color Algebras
    Jeffrey B. Mulligan, NASA Ames Research Center

  7. Seeing Through Manipulated Optics
    Susana Marcos, Instituto de Optica

  8. The Adaptive Brain: Learning to See in Altered Worlds
    Stephen A. Engel, University of Minnesota

  9. Multivariate Pattern Analysis (MVPA) of Neuroimaging Data
    Sara Fabbri, University of Nevada, Reno

Fall 2016

  1. Localizing the Source of Dual-Task Costs and Between-Task Interference in PRP-like Tasks
    Moritz Durst, University of Tübingen

  2. Mechanisms of Stimulus Discrimination: Temporal Order Effects and the Internal Reference Model
    Ruben Ellinghaus, University of Tübingen

  3. The Time Course of the Effect of Color Terms on Color Processing
    Lewis Forder, University of Wisconsin-Madison

  4. The Interface Theory of Perception
    Don Hoffman, UC Irvine

  5. Small Towns, Visual Ecology, and Face Recognition
    Benjamin J. Balas, North Dakota State University

  6. Seeing Color Constancy in a Contemporary Light
    Anya Hurlbert, Newcastle University

  7.  

    1. Color Processing in Peripheral Vision: Basic and Clinical Implications
      Christopher W. Tyler, Smith-Kettlewell Brain Imaging Center

    2. Novel Insights in the Leonardo/Michelangelo Rivalry
      Christopher W. Tyler, Smith-Kettlewell Brain Imaging Center

Spring 2016

  1. Impact of Distorted Optics on Spatial and Depth Vision - Lessons from Human Disease Models
    Shrikant Bharadwaj, LV Prasad Eye Institute

  2. Scotopic Vision and Aging
    Megan Tillman, UC Davis

  3. Synchronization of Circadian Clocks to Daily Environmental Cycles
    Patrick Emery, University of Massachusetts Medical School

  4. Roles of Cortical Single- and Double-Opponent Cells in Color Vision
    Robert Shapley, New York University

  5. Pulse Trains to Percepts: The Challenge of Creating a Perceptually Intelligible World with Sight Recovery Techniques
    Ione Fine, University of Washington

  6. Color Vision in the Peripheral Retina
    Vicki Volbrecht, Colorado State University

  7. Transcriptional Regulation of Heart Development and Chromatin Structure
    Benoit Bruneau, UCSF

  8. Health Law Implications of Advances in Neuroscience, Including Neuroimaging
    Stacey Tovino, UNLV

  9. Perceptual Resolution of Color with Ambiguous Chromatic Neural Representations
    Steven Shevell, University of Chicago

  10. Understanding Person Recognition: Psychological, Computational, & Neural Perspectives
    Alice O'Toole, University of Texas at Dallas

  11. Color Naming, Color Communication and the Evolution of Basic Color Terms
    Delwin Lindsey, Ohio State University

  12. Critical Immaturities Limiting Infant Visual Sensitivity
    Angela Brown, Ohio State University

Fall 2015

  1. Neural Mechanisms of Distractor Suppression
    Steve Luck, UC Davis

  2. Interplay between posttranslational modifications regulates the animal circadian clock
    Joanna Chiu, UC Davis

  3. Optimizing monovision and multifocal corrections
    Pablo de Gracia, Barrow Neurological Institute

  4. Discovering Sensory Processes Using Individual Differences: A Review and Factor Analytic Manifesto
    David Peterzell, John F. Kennedy University

  5. Mapping, Modeling and Decoding the Human Brain Under Naturalistic Conditions
    Jack Gallant, UC Berkeley

  6. Spatial hearing and the brain: Assembling binaural information to understand auditory space
    G. Christopher Stecker, Vanderbilt University

Spring 2015

  1. Understanding face perception with fast periodic visual stimulation
    Bruno Rossion, Catholic University of Louvain

  2. Brain plasticity underlying sight deprivation and restoration: A complex interplay
    Olivier Collignon, University of Trento

  3. Understanding kids who don’t talk: Using EEG to measure language in minimally verbal children with ASD
    Charlotte DiStefano, UCLA

  4. Endogenous RNAi and behavior in C. elegans
    Noelle L'Etoile, UCSF

  5. New Tools for Real-time Imaging of Single Live Cells
    Nancy Xu, Old Dominion University

  6. Auditory perception and cortical plasticity after long-term blindness
    Libby Huber, University of Washington

  7. At face value: An introduction to fast periodic visual stimulation
    Talia Retter, Catholic University of Louvain

  8. Introspections about Visual Sensory Memory During the Classic Sperling Iconic Memory Task
    Lara Krisst, San Francisco State

  9. The times of their lives: Developmental and circadian timing in C. elegans
    Martha Merrow, Ludwig Maximilians University Munich

  10. Attentional gain versus efficient selection
    John Serences, UC San Diego

  11. Changing what you see changes how you see: Analyzing the plasticity of broadband orientation perception
    April Schweinhart, University of Louisville

Fall 2014

  1. Optical deconstruction of fully-assembled biological systems
    Karl Deisseroth, Stanford

  2. Cuttlefish Camouflage
    Charlie Chubb, UC Irvine

Spring 2014

  1. Cultural Neuroscience: Current Evidence and Future Prospect
    Shinobu Kitayama, University of Michigan

  2. Human echolocation: How the blind use tongue-clicks to navigate the world
    Mel Goodale, The Brain and Mind Institute, and Brian Bushway, World Access for the Blind

  3. Chunking of visual features in space and time: Behavioral and neuronal mechanisms
    Peter Tse, Dartmouth

  4. Building a Vision: Shared Multimodal Pediatric fNIRS Brain Imaging Facility at the University of Michigan
    Ioulia Kovelman, University of Michigan

  5. Using the worm to catch Z's: somnogen discovery in C. elegans
    David Raizen, University of Pennsylvania

Fall 2013


    1. Introduction to Functional Near-Infrared Spectroscopy (fNIRS)

      Theodore Huppert, University of Pittsburgh

    2. Illuminating the Mind: Applications and Challenges for fNIRS
      Theodore Huppert, University of Pittsburgh

  1. Understanding Migraine: Genetics, Epigenetics and Receptor Sensitivity
    John Rothrock, Renown Institute Neurosciences

  2. Cell cycle genes repurposed as sleep factors
    Dragana Rogulja, Harvard Medical School

  3.  

    1. HD-EEG Analysis Workshop
      Alison Harris, Claremont McKenna College

    2. Event-related brain dynamics of value and decision-making
      Alison Harris, Claremont McKenna College


Fall 2018

Timothy Satalich

Modeling Color Appearance
Timothy Satalich, University of California, Irvine (Institute for Mathematical Behavioral Sciences)
Nov 13 • 11:00 am • Mack Social Science, 125

The domain of color appearance models has been dominated by the Munsell Color System for over 100 years. A variety of attempts have been made to supplant it but it has been resilient to these efforts because of its simplicity and its accuracy as a model for how humans perceive the relationships between colors of different Hue, Saturation and Lightness. It has also been one of the most studied and validated models of color appearance as evidenced by one of the largest psychophysical studies ever attempted. The relationship of the Munsell color appearance space to physical measurements of the power/energy distribution of luminous reflectance across the range of normal human color vision is by no means simple. There are many known effects in color vision that should be accounted for if we are to model color appearance from spectrographic reflectance measurements of colors. I will be presenting an in-depth look into the geometries and features of the spaces of reflectance spectra and the Munsell color appearance solid and what set of transformations of the physical space are useful to approximate color appearance space.


Kimberly Jameson

Color Experience in Observers with Potential Tetrachromat Photopigment Opsin Genotypes
Kimberly Jameson, University of California, Irvine (Institute for Mathematical Behavioral Sciences)
Nov 13 • 10:00 am • Mack Social Science, 125

Traditional color vision theory posits that three types of retinal photopigments transduce light into a trivariate neural color code, thereby explaining color-matching behaviors. Molecular genetics results suggest that a percentage of women possess genetic potential for more than three classes of retinal photopigments, which raises the question of whether four-photopigment retinas necessarily yield trichromatic color perception. I will review results and theory underlying the accepted photoreceptor-based model of color perception, as well as the psychological literature showing gender-linked differences in color perception, to explore whether possession of more than three retinal photopigment classes influences color perception relations. I will review genetic analyses that examine important positions in the opsin gene sequence, as well as some research that empirically compares the color perception of individuals possessing more than three retinal photopigment genes with those possessing fewer retinal photopigment genes. Results show some women with four-photopigment genotypes perceive significantly different chromatic appearances compared to both male and female trichromat controls. I discuss a rationale for this finding and discuss implications for theories of color perception and gender differences in color behavior.


Lynne Kiorpes

Cortical correlates of amblyopia: What information is lost and why?
Lynne Kiorpes, New York University (Neural Science & Psychology)
Sep 20 • 2:00 pm • JCSU Theater

Amblyopia is a developmental disorder of vision resulting from abnormal binocular visual experience in childhood. Psychophysical studies in amblyopic humans and macaques show losses in basic spatial vision (acuity and contrast sensitivity), but in addition, there are extensive losses in higher order perceptual abilities. These deficient higher order abilities, such as global form and motion perception and perceptual organization, are not predictable from the loss in acuity. Neurophysiological studies of striate cortex in behaviorally characterized amblyopic macaques show that some aspects of amblyopia are reflected in the properties of single neurons, but that overall neural sensitivity far exceeds behavioral sensitivity. Therefore there is information available in the visual system that is not being used to guide visual performance. What information is lost and what mechanisms contribute to that loss? Our recent studies show that some answers can be found via population analyses at the level of striate and extrastriate cortex, and characterizing neural interactions under dichoptic viewing. Current thinking about the neural correlates of amblyopia will be discussed.


Colin Clifford

What can aftereffects reveal about the functional architecture of human gaze perception?
Colin Clifford, University of New South Wales (Psychology)
Sep 20 • 2:00 pm • JCSU Theater

The direction of another's gaze provides a strong cue to their intentions and future actions. The perception of gaze is a remarkably plastic process: adaptation to a particular direction of gaze over a matter of seconds or minutes can cause marked aftereffects in the perceived direction of others' gaze. Computational modelling of data from behavioural studies of gaze adaptation allows us to make inferences about the functional principles that govern the neural encoding of gaze direction. This in turn provides a foundation for testing computational theories of neuropsychiatric conditions in which gaze processing is compromised, such as autism.

Summer 2018

Lora Likova

Harnessing the power of ‘visual' art: Memory-drawing training drives rapid neuroplasticity & enhances cognition
Lora Likova, Smith-Kettlewell Eye Research Institute
July 27 • 11:30 am • Reynolds School of Journalism, 101

The mechanisms of adult neuroplasticity remain elusive, and can best be studied with an effective training intervention. Drawing, and in particular memory drawing, has the unique advantage of orchestrating a wide range of cognitive functions. Based on a novel conceptual framework, which postulates that space transcends any sensory modality and that drawing can consequently transcend vision and be considered spatial art rather than solely visual art, I have developed a memory-guided drawing intervention, the Cognitive-Kinesthetic (C-K) Drawing Training, to study a broad range of causative mechanisms of brain reorganization in a diverse sample of blind individuals. Only 10 hours of C-K training enabled all blind participants to draw faces and objects free-hand, guided solely by tactile memory. Moreover, pre/post testing on a standardized test battery for spatial cognition in blindness and low vision showed a transfer of the training effect to improvements of core cognitive functions such as working memory, spatial analysis, concept learning and navigation beyond the drawing task per se. In the brain, the enhanced activation of the primary visual cortex (V1) implicated this ‘spatial map' as the neural implementation of the visuo-spatial ‘sketchpad' for working memory (but in a supramodal form, in the absence of visual experience). Comparative pre/post Granger Causality analysis revealed that the training strengthened the top-down activation of V1 by structures of the hippocampal complex, supporting the V1 memory-sketchpad concept, and showing that the C-K training has the power to enhance brain connectivity from high-order memory mechanisms. All but one of the participants were right-handed; in the left-handed blind individual, the C-K training was sufficient to generate a profound switch in the cortical lateralization of motor control, providing new insights into the neuroplasticity of motor control architecture.
These results manifest the power of art training to change both brain and behavior even in the visually impaired, underlining its importance in education, neurorehabilitation and society as a whole.

Spring 2018

Alexandra Boehm

Adaptive Optics Microstimulation: Investigating the Contribution of Single Cone Photoreceptors to Visual Perception
Alexandra Boehm, UC Berkeley (Biomedical Optics, Neuroscience, & Neurobiology)
May 10 • 2:30 pm • Reynolds School of Journalism, 101

Vision begins at the level of the photoreceptors, the light sensitive cells in the retina. In human vision, there are three types of cone photoreceptors: The S, M and L cones, sensitive to short, middle and long wavelengths of the visible spectrum, respectively. How the signals from the three types of cone are combined by the post-receptoral visual system and ultimately give rise to perception has been a question of much interest to vision scientists since Thomas Young and Hermann von Helmholtz first hypothesized the existence of three spectrally distinct cone types in the 1800s. While there has been much progress in the fields of anatomy, physiology and psychophysics, there remains a large gap in the understanding of how the activity of single cells contributes to perception. There are two reasons for this. First, the eye is constantly in motion. Even when a small spot stimulus is viewed under steady fixation it is sampled by hundreds of photoreceptors as the eye motion sweeps the stimulus across the retina. Second, the optical imperfections of the eye blur a stimulus across multiple cones. In the last decade, technological advancements in the areas of in-vivo optical imaging and eye tracking have expanded the repertoire of psychophysical experiments. These technologies have made it possible to simultaneously image, track, and present stabilized stimuli to targeted retinal locations. When used in conjunction with adaptive optics, an optical imaging technique which utilizes a deformable mirror to correct for the optical imperfections of the eye, the spatial resolution of the stimulus approaches the diffraction limit, making it possible to deliver stimuli to individual cone photoreceptors. In this talk I will first discuss the challenges that are unique to these types of studies and the technology that has been developed in the Roorda lab at UC Berkeley to overcome them. Next, I will discuss some of the ongoing studies in the lab in the areas of color and spatial vision.


Joel Snyder

Change Detection in Complex Auditory Scenes
Joel Snyder, University of Nevada, Las Vegas (Psychology)
May 4 • 11:00 am • Reynolds School of Journalism, 101

‘Auditory scene analysis' is the ability to perceptually segregate sounds in complex scenes that have multiple objects producing acoustic energy. Since the 1950s, scientists have made substantial progress in understanding how auditory scene analysis works for relatively simple, un-naturalistic sound patterns. However, studies have focused much less on understanding perception of more natural soundscapes that are relevant to social, workplace, and military situations. This talk will summarize our work on ‘change deafness' (the auditory analogue of ‘change blindness'), which refers to the surprising degree of difficulty that people have noticing when objects are added, removed, or replaced in complex scenes. This work goes beyond past research on auditory scene analysis by elucidating processing mechanisms that are likely active after perceptual segregation has occurred. It also highlights the importance of attention, memory, semantic meaning, and auditory object representations in noticing changes in everyday situations. Finally, it suggests a number of similarities between high-level mechanisms of auditory and visual perception. The work we have performed will likely be useful for guiding the principled design of training procedures for enhancing auditory perception in complex scenes.


Vincent Cassone

A Chorus of Clocks: Avian Circadian Organization and the Daily and Seasonal Control of Birdsong
Vincent Cassone, Texas A&M University (Biology)
April 27 • 12:00 pm • Davidson Math and Science, 102

Bird song is one of the few non-human forms of syntactical and contextual animal communication. A complex of brain structures collectively called the "song control system" regulates bird song. Much research has been conducted showing gonadal and non-gonadal control of bird song and bird song structures. Among non-gonadal control mechanisms, we have found that song control brain structures express melatonin receptors, and that bird song behavior and bird song control structures are regulated on a daily and seasonal basis by pineal melatonin in house sparrows and zebra finches. We are studying the mechanisms by which melatonin and the circadian system synchronize bird song to the time of day and the time of year. Song behavior is regulated by photoperiod in both male and female house sparrows, but only male vocalization is affected by pinealectomy. Long durations of melatonin, indicative of winter, decrease song structure size, amount and complexity of song vocalizations in male birds, while short durations enable increases in song structure size and the amount and complexity of vocalization. It is clear that the avian circadian system contributes to the control of bird song. Conversely, bird song itself may serve as a Zeitgeber for entrainment of circadian rhythms of avian locomotor and song behavior.


Lorne Whitehead

Understanding the Remarkable Accuracy of Colour Perception
Lorne Whitehead, University of British Columbia (Physics and Astronomy)
April 20 • 11:00 am • Reynolds School of Journalism, 101

This presentation is intended primarily for non-experts in color vision and those who teach this subject. Most students are aware that colour vision arises from spectrally selective phototransduction in retinal cells. However, that picture offers little insight into how and why people usefully perceive the colours of surfaces, which was the primary evolutionary driver for color vision. With an emphasis on simple principles of physics and information processing, I will describe a recent collaborative effort that helps to explain the remarkable accuracy of the perception of surface colors. This explanation avoids complex mathematics and quantitatively models plausible physiological processes. I hope that, after this presentation, audience members will be able to describe, in general terms, the complex nature of the colour vision system, and how this complexity yields simple information that usefully characterizes the materials around us.


Alvin Eisner in front of Twin Lakes, Mt. Hood

Studies of Sex, Eyes, and Vision: Importance of Estrogen, SWS Cones, and Lite Beer
Alvin Eisner, Portland State University (Institute on Aging/OHSU-PSU School of Public Health)
April 6 • 3:00 pm • Reynolds School of Journalism, 101

Historically, little attention has been paid to effects of hormonal change on visual function. This absence stems from many factors. Practical difficulties conducting interdisciplinary research may contribute to recalcitrant "basic" vs. "clinical" dichotomies, and tacit assumptions can lead to overgeneralization and consequent under-recognition of meaningful between-person differences. I will present data - from studies employing distinctively different subject populations - collectively showing that changes in estrogenic activity can impact vision mediated via Short-Wavelength-Sensitive (SWS) cones. SWS cones signal to the visual cortex mainly via a restricted set of neural pathways, and it has long been known that certain test/background stimulus conditions allow threshold-level incremental test stimuli to be detected via the differential response of SWS cones.  Thus, it was surprising initially to find that ~1/3 of healthy postmenopausal women report that a short-wavelength test stimulus appears white at threshold rather than bluish and/or reddish, as typically experienced by men of all ages.  Studies conducted subsequently suggested that the white color appearance involves sluggishness of visual response(s). SWS-cone-mediated response evidently can be affected by the selective-estrogen-receptor-modulator (SERM) tamoxifen, by aromatase inhibition (which abolishes estrogen synthesis), and even by phytoestrogen consumption. In addition, light adaptation within SWS cone pathways varies cyclically along with menstrual phase for some young women (with PMS?), and this variation can be altered by oral contraceptive use.  Implications of these and other results will be discussed, e.g., as concerns breast cancer survivorship.


Larry Cormack

Duck! How Your Brain Works Out 3D Motion from 2D (Retinal) Sensory Signals
Lawrence K. Cormack, University of Texas at Austin (Psychology/Neuroscience)
March 30 • 11:00 am • Reynolds School of Journalism, 101

Animals, by definition, move. For animals to interact with one another and to cope with their own animation, they must be able to correctly perceive movement, especially relative movement towards the animal itself (3D motion). For creatures of light and air, such as ourselves, this is accomplished through a binocular visual system. Visual systems, however, employ eyeballs, which have an inherently 2D sensor surface. The brain thus faces the challenge of recovering estimates of motion through a 3D environment given an input that lacks any explicit coding of the 3rd dimension. Over the past decade or so, using a coordinated combination of behavioral experiments, functional magnetic resonance imaging, and cortical electrophysiology, we have learned a great deal about how the brain meets this challenge. The end result compels a surprising and substantial reformulation of the "standard model" of cortical motion processing in primates.


Stephen Santoro

How the Olfactory Experience Sculpts the Olfactory System
Stephen Santoro, University of Wyoming (Zoology & Physiology)
March 14 • 4:00 pm • Davidson Math & Science


Michael Rudd

Computational and Cortical Modeling of Lightness and Color Perception
Michael E. Rudd, University of Washington (Physiology & Biophysics)
February 23 • 11:00 am • Reynolds School of Journalism, 101

I will describe some psychophysical experiments conducted in my lab to study effects of spatial context on lightness and color in simple visual displays. The perceptual results will then be combined with neurophysiological data from other labs to motivate a computational theory of cortical color processing. The theory assumes that spatially directed luminance changes at luminance edges, and within luminance gradients, are computed in cortical areas V1 and V2, and that the relevant neuronal outputs are then spatially integrated at a higher stage of cortical processing (probably in area V4, TEO, or TE) to compute surface color. The model architecture suggests that the neural correlate of surface color percepts arises late in the cortical ventral pathway, near the areas associated with object perception. The neural computations assumed by the model share algorithmic properties with Land's early Retinex color vision model, which was designed to achieve color constancy under the challenge of changes in overall illumination. However, the cortical model is considerably more complex than Retinex in that it incorporates additional properties of visual neural processing, including top-down influences (attention), neural analysis at different spatial scales, and different neural gains of ON- and OFF-cells. To help illustrate the behavior of the model, I will demonstrate how it explains various visual illusions, including classical contrast and assimilation phenomena, brightness and color filling-in, and a new illusion in which surrounding a light or dark patch with a luminance gradient can reverse the perceived contrast polarity of the patch, as results of the misapplication to artificial stimuli of biological computations that evolved to support color constancy under natural viewing conditions.
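The edge-integration idea that the abstract traces back to Land's Retinex can be illustrated with a toy computation. The following is a hypothetical one-dimensional sketch under simplified assumptions (a single path of patches, the highest-luminance patch as anchor), not Rudd's cortical model:

```python
import math

def edge_integrated_lightness(luminances):
    """Return relative log lightness for each patch along a 1-D path."""
    # Log luminance ratio across each edge between neighboring patches
    edge_ratios = [math.log(luminances[i + 1] / luminances[i])
                   for i in range(len(luminances) - 1)]
    # Integrate edges left to right to recover log luminance up to a constant
    integrated = [0.0]
    for r in edge_ratios:
        integrated.append(integrated[-1] + r)
    # Anchor: the highest-luminance patch is assigned lightness 0
    anchor = max(integrated)
    return [v - anchor for v in integrated]

# Doubling the illumination scales all luminances equally, so every edge
# ratio, and hence every recovered lightness, is unchanged: constancy.
scene = [10.0, 40.0, 20.0, 80.0]
assert edge_integrated_lightness(scene) == \
       edge_integrated_lightness([2 * v for v in scene])
```

Because only luminance ratios at edges enter the computation, uniform changes in overall illumination cancel out, which is the sense in which such models achieve color constancy.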


A. Grant Schissler

Reasoning with Uncertainty the Bayesian way with examples in Cognitive Modeling in R and Stan  
A. Grant Schissler, University of Nevada, Reno (Mathematics & Statistics)
February 9 • 11:00 am • Reynolds School of Journalism, 101

Scientific discovery and learning from data are challenging tasks. Understanding uncertainty through statistical modeling is essential in all areas of research. However, widely used statistical constructions, including the infamous P-value, often do more harm than good in the pursuit of knowledge. Indeed, P-values are under attack in both statistical and domain-specific communities. In this talk, I'll discuss some problems with P-value-based decision-theoretic reasoning and present a flexible and philosophically coherent strategy: Bayesian modeling and inference. Recent advances in computation and software provide extremely fast and relatively straightforward implementation of complex Bayesian models. After providing background on Bayesian modeling in general, we'll walk through some examples in cognitive modeling, such as inferring IQ scores using Gaussian processes, hierarchical signal detection, and psychophysical functions. The case studies will be demonstrated in R and Stan, and code will be provided to serve as templates. The session will conclude with ample time for discussion.
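To give a flavor of the Bayesian updating the talk describes (the actual case studies use R and Stan), here is a minimal hypothetical sketch in plain Python: a conjugate Beta-Binomial model for an observer's detection probability, starting from a uniform prior.

```python
def beta_binomial_posterior(k, n, a_prior=1.0, b_prior=1.0):
    """Return (a, b) of the Beta posterior after k hits in n trials.

    With a Beta(a, b) prior on the detection probability and a Binomial
    likelihood, the posterior is Beta(a + k, b + n - k) by conjugacy.
    """
    return a_prior + k, b_prior + (n - k)

def beta_mean(a, b):
    """Posterior mean of a Beta(a, b) distribution."""
    return a / (a + b)

# Observer detects the stimulus on 27 of 40 trials
a, b = beta_binomial_posterior(27, 40)
print(beta_mean(a, b))  # posterior mean 28/42, about 0.667
```

In Stan the same model would be specified declaratively and sampled with MCMC; the conjugate case is shown here only because its posterior can be written down exactly.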


Fall 2017

Qi Wang

Top-down and Bottom-up Modulation of Neural Coding in the Somatosensory Thalamus
Qi Wang, Columbia University (Biomedical Engineering)
December 15 • 11:00 am • Reynolds School of Journalism, 101

The transformation of sensory signals into spatiotemporal patterns of neural activity in the brain is critical in forming our perception of the external world. Physical signals, such as light, sound, and force, are transduced to neural electrical impulses, or spikes, at the periphery, and these spikes are subsequently transmitted to the neocortex through the thalamic stage of the sensory pathways, ultimately forming the cortical representation of the sensory world. The bottom-up (by external stimulus properties) or top-down (by internal brain state) modulation of coding properties of thalamic relay neurons provides a powerful means by which to control and shape information flow to cortex. My talk will focus on two topics. First, I will show that sensory adaptation strongly shapes thalamic synchrony and dictates the window of integration of the recipient cortical targets, and therefore switches the nature of what information about the outside world is being conveyed to cortex. Second, I will discuss how the locus coeruleus-norepinephrine (LC-NE) system modulates thalamic sensory processing. Our data demonstrated that LC activation increased the feature sensitivity, and thus the information transmission, of thalamic relay neurons while decreasing their firing rate. Moreover, this enhanced thalamic sensory processing resulted from modulation of the dynamics of the thalamo-reticulo-thalamic circuit by LC activation. Taken together, an understanding of the top-down and bottom-up modulation of thalamic sensory processing will not only provide insight into neurological disorders involving aberrant thalamic sensory processing, but also enable the development of neural interface technologies for enhancing sensory perception and learning.


John Maule

Adaptation to the Variability of Visual Information
John Maule, University of Sussex (Psychology)
December 5 • 12:00 pm • Schulich Lecture Hall 3

The sensory signals we can detect from the world are highly variable, and the brain must process a large amount of information to encode and represent percepts. One way in which the visual system can reduce its processing load is to use summary statistics - representing the mean of features in a set, rather than individual exemplars. It has previously been found that observers are able to extract the mean hue from a rapidly-presented ensemble of colours (e.g. Maule & Franklin, 2016). This ability has been demonstrated for other stimulus domains, including orientation, size and facial expression. In addition to summary statistics of central tendency, it may also be useful for the visual system to encode information about the variation present in visual features. I will present a series of experiments investigating the encoding of variance for colourful ensembles. Ensemble variance was controlled by varying the difference in hue (in CIELUV colour space) between different elements. Observers viewed pairs of ensembles situated to the left and right of a central fixation point. During the adaptation phase there was a consistent relationship between the amount of variance in each ensemble (e.g., left more variable in hue than right). On test trials observers judged which ensemble appeared more variable. Generally, following exposure to highly variable ensembles on the left, observers perceived a pair of equally variable ensembles as relatively less variable on the left compared to the right of the display. This result is similar to that shown by Norman et al. (2015) for ensembles of orientation, suggesting that representation of variance independent of the central tendency may be a general feature of visual coding. The results imply that perceived variability of a multi-coloured ensemble is subject to adaptation after-effects, and therefore that colour variance is an encoded property of visual sets.
The value of encoding variability may be in tuning the brain to the visual properties of the immediate surroundings, allowing the brain to better predict the content of the environment and represent salient elements.


David Brainard

Spatial Vision at the Scale of the Cone Photoreceptor Mosaic
David Brainard, University of Pennsylvania (Psychology)
November 17 • 12:00 pm • Reynolds School of Journalism 101

The long-term goal of the research here is to understand how the visual system integrates information from individual cones in the photoreceptor mosaic, to produce the high-resolution percept of a colored world that we enjoy. A particular richness of this question derives from the observation that there are three distinct spectral classes of cones in the mosaic, and that cones of these classes are arranged in an interleaved fashion. To understand how signals from individual cones are combined, we have begun to employ adaptive optics retinal imaging together with real time eye tracking to conduct psychophysical experiments. In these experiments, stimuli whose scale approaches that of individual cones are targeted to precisely defined retinal locations, and we measure detection thresholds as the spatial structure of the stimuli is varied. In this talk, I will describe our methods along with initial results that examine spatial summation for human foveal vision, with the smallest stimuli matched in size to the acceptance aperture of a single cone.


Markus Janczyk

Action Selection According to Ideomotor Theory: Basic Principles and an Application to Multitasking
Markus Janczyk, Eberhard Karls University of Tübingen (Psychology)
November 6 • 11:30 am • Reynolds School of Journalism 101

Humans act to achieve certain goals. Ideomotor Theory, advanced in the 19th century by several philosophers, claims that actions can only be selected by mentally anticipating the desired goal states. This idea was rarely investigated in psychology for a long time, but during the last decades several lines of empirical investigation have been pursued, with results in line with Ideomotor Theory.

I will begin this talk with an introduction to Ideomotor Theory and the main evidence from recent studies, followed by a brief introduction to dual-tasking models. I will then bring these fields together and sketch several lines of research investigating the role of goal anticipations for dual-task performance. In sum, (1) the capacity-limited stage of processing - assumed to be the cause of dual-task problems - can be described as goal anticipation, (2) the commensurability of goal states affects the amount of dual-task problems, and (3) monitoring the occurrence of pursued goal states also incurs costs. These results suggest that anticipating goal states is an important contributor to dual-task problems.


Eric Walowit

Multidimensional Estimation of Color Matching Functions
Eric Walowit
November 3 • 1:00 pm • Reynolds School of Journalism 101

For many industrial and academic applications, it is essential to know the color responses of observers to arbitrary scene spectral radiances. The spectral response of an observer can be defined as the detected quantum efficiency resulting from radiation of a given wavelength, over the range of all wavelengths to which the observer is sensitive. These spectral responses are commonly referred to as spectral sensitivities or, more generally, as color matching functions. In the case where the observers are modern digital cameras, an important step in the color image processing pipeline is the transformation from camera response to objective colorimetric or related quantities, often for each individual unit. In the case where the observers are persons, human color matching functions also map scene spectra to colorimetric quantities, though every person has somewhat different color matching functions. One application of widespread interest is soft proofing, where various personnel must ensure colorimetric or appearance matches between images viewed on various wide-gamut displays and prints viewed in controlled light-booths. Since modern displays are often based on narrow-band primaries, variations in individual human color matching functions and display primary spectra can cause significant color matching errors. For many critical color matching applications, there is widespread interest in how best to determine individual color matching functions. However, direct determination of individual color matching functions in these settings is tedious and impractical for many reasons. In this presentation, a method and results are shown that allow accurate estimation of camera spectral sensitivities based on a few simple measurements, and the case is made for extending the method to estimating color matching functions for individual human observers.


Symposium - 9th Annual, Sierra Nevada Chapter of the Society for Neuroscience
October 26 • 12:00 pm - 5:00 pm • Pennington Health and Science 102


Clark Elliott

Clark Elliott, DePaul University (Institute of Applied Artificial Intelligence)
September 22 • 3:00 pm • Joe Crowley Student Union Ballroom A

The brain is primarily a visual-spatial processing device. This has implications for all aspects of human cognition and sensory interpretation. Neurodevelopmental optometry accesses the brain through the retinas, giving us a high-bandwidth mechanism for measuring many of the kinds of cognitive brain function that are at the core of what makes us human, as well as for altering such brain configurations by taking advantage of the brain's plastic nature. In this talk I will present a self-reported case study of a ten-year odyssey with significant brain dysfunction resulting from an mTBI - including "permanent" impairments such as balance difficulties, inability to initiate action, inability to read, to walk, to understand speech, and to sleep normally - but ultimately ending with a truly rare full recovery after brain reconfiguration using neurodevelopmental optometric techniques. In the second part of the talk we will look at the three retinal pathways that neurodevelopmental optometry treats, including center vision, peripheral processing, and a collection of critical non-visual retinal pathways that have been all but ignored until recently, and which are often of great importance in treating brain injuries. We will also discuss a set of clinical qEEG scans showing the changes in brain activity when wearing therapeutic eyeglasses.


Summer 2017

Ipek Oruc

The Adult Face-Diet Revealed: Impact of Daily Face Exposure on the Perception of Faces
Ipek Oruc, University of British Columbia (Ophthalmology & Visual Sciences)
July 21 • 11:00 am • Reynolds School of Journalism 101

Faces are ecologically significant stimuli central to social interaction and communication. Human observers are considered to be experts in face perception due to their remarkable ability to recall great numbers of unique facial identities encountered in a lifetime, their sensitivity to subtle differences that distinguish different identities, and their robustness across significant differences among images of the same identity. A large body of work in the last several decades has investigated limits to this expertise, such as recognition of faces of unfamiliar races ("the other-race effect") and faces viewed in the inverted orientation ("the face-inversion effect"). In this talk, I will describe recent results from our group suggesting that face size, as a proxy for viewing distance, impacts face recognition processing and performance. Furthermore, I will present results from our recent naturalistic observation study that examined adults' daily face exposure, i.e., the adult face-diet. I will compare the adult face-diet to what is known about that of infants and consider these results in light of the effects of size on face recognition. I will speculate about the origins of these size effects and consider contributions from innate and genetic factors, early exposure during sensitive periods of development, and late exposure during adulthood.


David Field

Selectivity, Hyper-selectivity, and a General Model of the Non-linearities of Neurons in the Visual Pathway
David Field, Cornell University (Psychology)
July 18 • 12:00 pm • Reynolds School of Journalism 101

I will discuss some implications of an approach that attempts to describe the various non-linearities of neurons in the visual pathway using a geometric framework. This approach will be used to make a distinction between selectivity and hyper-selectivity. Selectivity will be defined in terms of the optimal stimulus of a neuron, while hyper-selectivity will be defined in terms of the falloff in response as one moves away from the optimal stimulus. With this distinction, I show that it is possible for a neuron to be very narrowly tuned (hyper-selective) to a broadband stimulus, and that hyper-selectivity allows V1 neurons to break the Gabor-Heisenberg localization limit. The general approach will be used to contrast different theories of non-linear processing, including sparse coding, gain control, and linear non-linear (LNL) models. Finally, I will show that the approach provides insights into the non-linearities found with overcomplete sparse codes, and argue that sparse coding provides the most parsimonious account of the common non-linearities found in the early visual system.


Vaidehi Natu

Development of Neural Mechanisms Underlying Face Recognition Ability
Vaidehi S. Natu, Stanford University (Psychology)
July 7 • 11:00 am • Reynolds School of Journalism 101

Human face recognition ability is critical for social interactions and communication, and it improves from childhood to adulthood. Face-selective regions in the ventral stream increase in size and in neural responses to faces, and these developments are related to better face recognition ability. However, it is unknown whether these developments affect perceptual discriminability of faces and whether they are accompanied by anatomical changes. My talk will describe my research addressing these open questions. First, I will describe results of an fMRI-adaptation study, conducted in children (ages 5-12) and adults (ages 22-28), that aimed to determine whether neural sensitivity to faces develops. Our data show that neural sensitivity to face identity in face-selective, but not object-selective, cortex develops with age, and this development is correlated with increased perceptual discriminability for faces. Second, I will present results from a study that investigated neural mechanisms of anatomical development in face-selective regions. While gray matter across ventral temporal cortex thins from age 5 to adulthood, it is unknown whether this thinning is due to pruning or to increased myelination. Using novel quantitative MRI and diffusion MRI techniques, I will present new evidence that tissue growth and myelination in deep cortical layers, not pruning, is associated with cortical thinning. Together these new data elucidate the functional and anatomical development of face-selective regions from childhood to adulthood and provide an important foundation for understanding typical and atypical development.

Spring 2017

Ramesh Srinivasan

Individual Differences in Attention Filters During Perceptual Decision Making
Ramesh Srinivasan, UC Irvine (Cognitive Sciences)
May 26 • 12:00 pm • Reynolds School of Journalism 101

We have carried out a number of studies of attention filters using the steady-state visual evoked potential (SSVEP). The SSVEP is a stimulus-specific response to flicker that allows us to simultaneously monitor the brain's response to multiple stimuli. In our studies we have shown that the SSVEP is sensitive to both spatial and feature attention. We have used the SSVEP to measure enhancement or suppression of the brain's response to a visual stimulus as the individual deploys attentional filters. Using a drift-diffusion model of response times, we show that the ability of individuals to suppress task-irrelevant features increases the drift rate (information accrual), while enhancing task-relevant features reduces the diffusion coefficient (variability in information accrual). We have used this technique to investigate individual differences in the ability of individuals to shape their attention filters while performing perceptual decision making. We find that only a minority of individuals apply the optimal filters for a specific task and can switch filters depending on the task.
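The drift-diffusion account of response times can be illustrated with a minimal random-walk simulation (a generic sketch of the standard model, not the authors' fitted model; parameter values and names are hypothetical). Evidence accumulates at the drift rate, perturbed by noise scaled by the diffusion coefficient, until it hits a decision bound:

```python
import random

def simulate_ddm(drift, diffusion, threshold=1.0, dt=0.001, seed=0):
    """Simulate one diffusion-to-bound trial.

    Evidence x starts at 0 and accumulates in steps of
    drift*dt + Gaussian noise with variance diffusion*dt, until
    |x| reaches the threshold. Returns (upper_bound_hit, rt_seconds)."""
    rng = random.Random(seed)
    x, t = 0.0, 0.0
    step_sd = (diffusion * dt) ** 0.5
    while abs(x) < threshold:
        x += drift * dt + rng.gauss(0.0, step_sd)
        t += dt
    return (x >= threshold), t
```

Averaging over many simulated trials shows the qualitative pattern described above: a higher drift rate (better suppression of task-irrelevant features) yields faster responses, while a higher diffusion coefficient yields more variable accumulation and slower, noisier decisions.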


Johannes Burge

The Lawful Imprecision of Human Surface Tilt Estimation in Natural Scenes
Johannes Burge, University of Pennsylvania (Psychology)
May 5 • 2:00 pm • Reynolds School of Journalism 101

The estimation of local surface orientation (slant and tilt) is fundamental to the recovery of the three-dimensional structure of the environment. Although a great deal of research has been devoted to understanding how visual systems solve this problem, most previous work has focused on performance with artificial stimuli. Here, we study human surface tilt estimation with both natural and artificial stimuli. Natural stimuli are sampled from a stereo-image database of natural scenes with precisely co-registered distance data at each pixel; ground-truth tilt, slant, and distance information is obtained directly from the range data. Artificial planar stimuli, generated in software, are matched to the tilt, slant, distance, and contrast of the natural stimuli. Human observers binocularly viewed natural and artificial surfaces through a small aperture and reported the estimated tilt with a mouse-controlled probe. Human performance in natural scenes is significantly less accurate and more strongly influenced (biased) by the tilt prior than human performance in matched artificial scenes. However, the imprecision of human estimates with natural stimuli is tightly predicted by an ideal observer for the task. The ideal observer reports the Bayes-optimal tilt estimate given three local image cues computed directly from the images. Remarkably, the ideal observer predicts the details of human performance with zero free parameters, including trial-by-trial errors. These similarities suggest that the biased, imprecise patterns of human performance are nevertheless lawful, and that they result from optimal computations on local areas of natural scenes.


Jason Yeatman

The Neural Circuitry of Skilled Reading
Jason Yeatman, University of Washington (Speech & Hearing Sciences)
April 14 • 1:00 pm • Reynolds School of Journalism 101

Reading requires signals to be rapidly communicated between regions of the cortex that are specialized for processing visual, acoustic, and semantic information. An impairment in any one of these systems, or the white matter tracts connecting them, could cause reading difficulties. In this talk I will introduce new approaches to measuring the developing human brain, and describe how my lab is using these methods to understand the neural basis of skilled reading. By combining quantitative MRI measures of white matter tissue structure with functional MRI and computational modeling we are constructing a detailed description of how the structure and function of the brain's reading circuitry leads to the critical behavior that it supports.


Yi Liu

New Codes Within Genetic Codons: Codon Usage Determines Protein Structure and Gene Expression Levels
Yi Liu, University of Texas Southwestern Medical Center (Physiology)
April 6 • 4:00 pm • William Raggio Building 2003

Most amino acids are encoded by two to six synonymous codons. Preferential use of certain synonymous codons, a phenomenon called codon usage bias, was found in all genomes, but its biological functions are not clear. We demonstrate that codon usage bias regulates protein expression and protein function by regulating the speed of translation elongation and co-translational folding. In addition, we uncovered the relationship between codon usage bias and predicted protein structures in fungal and animal systems. Furthermore, we demonstrated that codon usage plays an important role in determining gene expression levels in eukaryotes. Together these results uncover the existence of unexpected codon usage codes within genetic codons for protein folding and gene expression.


Steven Buck

Brown is Not Just Dark Yellow
Steven Buck, University of Washington (Psychology)
March 31 • 10:45 am • Ansari Business 106

A long-wavelength target that looks yellowish when bright will turn brownish when made sufficiently darker than its surroundings, a process termed brown induction. Brown and yellow are dark and bright counterparts that can mix perceptually (e.g., butterscotch) but can each exist in pure form, independent of the other. No other basic hue (red, green, or blue) changes color category like this between bright and dark versions. In this talk, I examine perceptual properties of brown and mechanisms of brown induction, especially in comparison to brightness/darkness induction. Finally, I speculate on why this special status of brown may have evolved.


Color Algebras
Jeffrey B. Mulligan, NASA Ames Research Center
March 10 • 12:00 pm • Reynolds School of Journalism 101

A color algebra refers to a system for computing sums and products of colors, analogous to additive and subtractive color mixtures. We would like it to match the well-defined algebra of spectral functions describing lights and surface reflectances, but an exact correspondence is impossible after the spectra have been projected to a three-dimensional color space, because of metamerism - physically different spectra can produce the same color sensation. Metameric spectra are interchangeable for the purposes of addition, but not multiplication, so any color algebra is necessarily an approximation to physical reality. Nevertheless, because the majority of naturally-occurring spectra are well-behaved (e.g., continuous and slowly-varying), color algebras can be formulated that are largely accurate and agree well with human intuition.

Here we explore the family of algebras that result from associating each color with a member of a three-dimensional manifold of spectra. This association can be used to construct a color product, defined as the color of the wavelength-wise product of the spectra associated with the two input colors. The choice of the spectral manifold determines the behavior of the resulting system, and certain special subspaces allow computational efficiencies. The resulting systems can be used to improve computer graphic rendering techniques, and to model various perceptual phenomena such as color constancy.
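The color-product construction described above can be sketched concretely (a toy Python illustration with assumed Gaussian sensitivity curves, not CIE colour-matching data or the speaker's manifold): multiply the two representing spectra wavelength-wise, then project the result back to three channels.

```python
import math

WAVELENGTHS = list(range(400, 701, 10))  # nm, coarse sampling

def gaussian(peak, width):
    """A toy spectrum/sensitivity: Gaussian bump over WAVELENGTHS."""
    return [math.exp(-((wl - peak) / width) ** 2) for wl in WAVELENGTHS]

# Three assumed sensitivity curves (stand-ins for colorimetric data).
SENS = [gaussian(600, 50), gaussian(550, 50), gaussian(450, 50)]

def to_color(spectrum):
    """Project a spectrum to a three-channel color. Addition of
    spectra maps exactly to addition of colors (projection is linear)."""
    return [sum(s * v for s, v in zip(sens, spectrum)) for sens in SENS]

def color_product(spec_a, spec_b):
    """Color of the wavelength-wise product of two representing
    spectra. Because of metamerism the answer depends on which
    spectra represent each color, so the product is approximate."""
    return to_color([a * b for a, b in zip(spec_a, spec_b)])
```

One sanity check: multiplying a flat (white) illuminant spectrum by a reflectance spectrum leaves the reflectance unchanged, so its color product equals the color of the reflectance itself, matching the subtractive-mixture intuition.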

Susana Marcos

Seeing Through Manipulated Optics
Susana Marcos, Instituto de Optica, Consejo Superior de Investigaciones Científicas
February 23 • 2:30 pm • Jot Travis 100

Adaptive Optics technology, inherited from astronomy, allows correcting the optical imperfections of the eye in real time and, more generally, manipulating the optics and simulating any optical correction. These capabilities allow studying the relations between optical and visual performance, investigating the limits of spatial vision, exploring vision with the eyes of another individual, investigating adaptation to blur, and testing vision in a patient with different corrections (ophthalmic, contact, or intraocular lenses) before they are prescribed or even manufactured. These technologies have started making their way into the clinic. Miniaturization of the technology has allowed a see-through visual simulator of multifocal and monovision corrections that is wearable and suited to help patients experience the world through presbyopic corrections prior to surgery.


Stephen Engel

The Adaptive Brain: Learning to See in Altered Worlds
Stephen A. Engel, University of Minnesota (Psychology)
February 23 • 2:30 pm • Jot Travis 100

Experience with the environment dramatically influences how we act, think, and perceive; understanding the neural plasticity that supports such change is a long-standing goal in cognitive neuroscience. In the visual system, neural function alters dramatically as people adapt to changes in their visual world, such as increases or decreases in brightness or clarity. Most past work on visual adaptation, however, has altered visual input only over the short-term, typically a few minutes. I will present a series of experiments that investigate adaptation over a much longer term, from hours to days to years. We use virtual reality displays to allow subjects to live in, and adapt to, experimentally manipulated visual worlds for long periods of time. We also study the natural adaptation that occurs when people adjust to prescription lenses. Our results suggest that the neural control of adaptation is surprisingly sophisticated, sensitive to the costs and benefits to visual performance, and able to draw upon past experience adapting. These mechanisms may allow vision to perform near optimally in an ever-changing world.


Sara Fabbri

Multivariate Pattern Analysis (MVPA) of Neuroimaging Data
Sara Fabbri, University of Nevada, Reno
February 13 • 10:00 am • Mack Social Science 233

Multivariate Pattern Analysis (MVPA) is one of the most popular techniques used to analyze functional magnetic resonance imaging (fMRI) data. In this seminar, Sara Fabbri (new postdoctoral researcher in the Snow lab) will explain the advantages and disadvantages of using MVPA. She will describe the main steps underlying this analysis approach and provide examples of its use.

Workshop: Using Brain Voyager for fMRI Analysis
Sara Fabbri, University of Nevada, Reno
February 15 • 9:00 am - 12:00 pm • Mack Social Science 300

Dr. Sara Fabbri (postdoctoral researcher in the Snow Lab) will give a hands-on tutorial on how to use BrainVoyager for functional magnetic resonance imaging (fMRI) data analysis. At the end of the workshop, participants will be able to design protocol files, perform anatomical and functional preprocessing steps, co-register anatomical and functional data in 3D space, perform General Linear Model (GLM) analysis, and visualize the results.


Fall 2016

Moritz Durst

Localizing the Source of Dual-Task Costs and Between-Task Interference in PRP-like Tasks
Moritz Durst, University of Tübingen (Psychology)
December 9 • 2:00 pm • Reynolds School of Journalism 101

Dealing with two or more tasks at the same time has become essential in everyday human life. However, a large body of studies conducted over the past decades suggests that dual-tasking significantly reduces performance in all tasks involved. Most of these studies employed the psychological refractory period (PRP) paradigm, where two tasks have to be performed in close succession. The most striking result observed in PRP studies is that response times for the second task increase with increasing temporal overlap with the first task (the PRP effect). One of the most influential models explaining the PRP effect is Pashler's (1994) bottleneck model, which assumes that only one task can be centrally processed at a time, while the other task has to wait to gain access to the bottleneck. However, since an increasing number of studies have observed an influence of task 2 characteristics on task 1 performance (the backward crosstalk effect; BCE) and vice versa (the forward crosstalk effect; FCE), Pashler's (1994) strictly serial bottleneck model is challenged. Data from several PRP-like experiments conducted in our lab suggest that the bottleneck not only causes dual-task costs, but is also the locus of between-task interference. A modified version of the bottleneck model and implications for real-life scenarios are discussed.


Ruben Ellinghaus

Mechanisms of Stimulus Discrimination: Temporal Order Effects and the Internal Reference Model
Ruben Ellinghaus, University of Tübingen (Psychology)
December 9 • 2:00 pm • Reynolds School of Journalism 101

Perceiving differences is a fundamental component of human performance. To investigate the mechanisms underlying this ability, researchers have tried to understand the processes that occur when people compare and discriminate two stimuli, e.g., when a person indicates which is the brighter of two successively presented light patches. Most theories of stimulus discrimination proposed in the literature are based on Thurstone's original difference model, according to which a person's decision in such a scenario is the result of a comparison between the internal representations of the two stimuli. However, these models fail to account for the observation that discrimination performance is usually better when a constant standard stimulus precedes rather than follows a variable comparison stimulus, a result often obtained in duration discrimination experiments. This so-called Type-B order effect can be explained by a psychological model which assumes that participants compare the second stimulus against an internal standard that is dynamically updated from trial to trial. We will present experiments designed to shed light on the question of whether the Type-B order effect is restricted to the domain of duration perception or is rather a general phenomenon across a range of modalities and stimulus attributes.


Lewis Forder

The Time Course of the Effect of Color Terms on Color Processing
Lewis Forder, University of Wisconsin-Madison (Psychology)
December 2 • 11:30 am • Reynolds School of Journalism 101

This talk presents data from a series of studies designed to examine how the way we name colors affects the visual processing of color. The talk will focus on the time course of chromatic processing, reporting whether color terms affect earlier or later stages of chromatic processing. This topic relates to the broader debate about whether language affects the way we perceive the world (i.e., the theory of linguistic relativity). I'll present recent work from three studies that used the event-related potential (ERP) method to obtain high-resolution information about the timing of the neural activity elicited in the brain in response to different colors. I'll also present data from a set of behavioral experiments that used continuous flash suppression to directly affect participants' ability to process and perceive color.


Don Hoffman

The Interface Theory of Perception
Don Hoffman, UC Irvine
August 31 • 1:00 pm • Schulich Lecture Hall 2

If I have a visual experience that I describe as a red tomato a meter away, and if I am sober and otherwise unimpaired, then I am inclined to believe that there is in fact a red tomato a meter away, and that it will continue to exist even if I close my eyes or even if I cease to exist. In short, I'm inclined to believe that my perceptions are, in the normal case, veridical - that they accurately represent some aspects of the objective environment. But is my belief supported by our best science? In particular: Does evolution by natural selection favor veridical perceptions? Many scientists and philosophers of perception claim that it does. But this claim, though it is influential and accords with our intuitions, has not been adequately tested. In this talk I present a new theorem: Veridical perceptions are never more fit than non-veridical perceptions that are simply tuned to the relevant fitness functions. This entails that perception is almost surely not a window on reality; it is more like a windows interface on your laptop. I discuss this interface theory of perception and its implications for one of the most puzzling unsolved problems in science: the relationship between brain activity and conscious experiences.


Summer 2016

Benjamin Balas

Small Towns, Visual Ecology, and Face Recognition
Benjamin J. Balas, North Dakota State University
July 22 • 12:00 pm • Reynolds School of Journalism 101

We've known for a long time that face recognition depends on visual experience: Observers tend to find faces belonging to other-race, other-age, and other-species categories hard to process if they haven't had much exposure to them. Besides the variability across observers in terms of the categories of faces they see, there is also substantial variability in how many faces observers are exposed to overall (especially in places like North Dakota that have many depopulated regions). In this talk, I'll describe recent work from our lab examining the impact of growing up in a small community of faces, and discuss how the sheer number of people you see affects both your memory for faces and the way your brain responds to them. I'll also talk about ongoing work in which we're trying to work out if limited face experience also affects the manner in which you process faces, and future plans to study individual differences in face processing that result from differential experience.


Anya Hurlbert

Seeing Colour Constancy in a Contemporary Light
Anya Hurlbert, Newcastle University (Neuroscience)
July 1 • 11:30 am • Mathewson-IGT Knowledge Center 107


Christopher Tyler 

1) Color Processing in Peripheral Vision: Basic and Clinical Implications
Christopher W. Tyler, Smith-Kettlewell Brain Imaging Center
June 30 • 4:00 pm • Reynolds School of Journalism 101

Even basic properties of peripheral vision are widely misunderstood, such as the relative role of rods and cones in peripheral relative to foveal color and motion processing. The stage will be set by correcting these misconceptions and considering their implications for the clinical treatment of retinal diseases. A primary feature of peripheral processing is its rapidity relative to the fovea, which is incompatible with common conceptions of a rod-dominated periphery. The transduction mechanisms within each photoreceptor type giving rise to the respective photoreceptor dynamics will be analyzed.

The recent discovery of a fifth retinal photopigment, melanopsin, in a subset of primate retinal ganglion cells operating throughout the periphery has generated considerable interest in the light responsivity of these cells. The electroretinogram (ERG) is a powerful technique for the analysis of human retinal function. A new approach will be presented to the analysis of the role of these melanopsin-sensitive retinal ganglion cells in the human ERG and its disruption in head trauma and photophobia.

2) Novel Insights in the Leonardo/Michelangelo Rivalry
Christopher W. Tyler, Smith-Kettlewell Brain Imaging Center
July 1 • 12:00 pm • Mathewson-IGT Knowledge Center 107

The two greatest artistic figures of the Italian Renaissance, Leonardo da Vinci and Michelangelo Buonarroti, are widely considered to have had a bitter lifetime rivalry. In fact, from the earliest work attributed to Michelangelo (now in Texas) to his later career, there are many interlinkages between the styles and subject matter of their works, suggesting that their relationship must have been more of a professional rivalry marked by mutual respect for each other's supreme talent.

An underappreciated aspect of the two artists' work is their representation of the female form, which I will suggest was brought out in the early years of the sixteenth century by the mistresses of Cardinal Giuliano de Medici (brother of the Pope) in Florence, and by the young aristocrat, Vittoria Colonna, in Rome. Covert portraits of these appealing figures provide further linkages between the two artists, and suggest the deeper implications of the Mona Lisa smile.


Spring 2016

Shrikant Bharadwaj

Impact of Distorted Optics on Spatial and Depth Vision - Lessons from Human Disease Models
Shrikant Bharadwaj, LV Prasad Eye Institute (Optometry and Vision Sciences)
May 6 • 12:30 pm • Reynolds School of Journalism 101

The process of "seeing" involves the capture of light photons by the anatomical substrate called the "eye" and the processing of these photons by a black box called the "brain". The output of this black box manifests itself as perception, including form, depth, motion and color, and as motor actions, including eye movements of various sorts. As vision scientists, our goal is to understand the functioning of the black box by systematically studying perception and motor action in response to known experimental manipulations. As clinicians, we use the output of the black box as a measure of how well the individual is seeing and interacting with the world around them. Given that the "eye" forms the substrate for perception by the "brain", it is conceivable that properties of the "eye" - optics, in this case - will have a significant impact on the quality of perception. The clinic presents several interesting scenarios where the optics of the eye are distorted but their impact on perception is not yet systematically understood. In my talk, I will use two examples to illustrate this issue. In both cases, the eye's optics are heavily distorted, and there may also be significant interocular differences in distortion depending on disease presentation. The focus of my presentation will be the impact these distortions have on spatial and depth perception, and how modifying the optics with rigid contact lenses changes perception.


Megan Tillman

Scotopic Vision and Aging
Megan Tillman, UC Davis (Neuroscience)
May 4 • 3:00 pm • Reynolds School of Journalism 101

Poor night vision is a common complaint among the elderly; however, the cause of this impairment is not straightforward. Many factors are thought to contribute to the age-related decline in scotopic sensitivity, including reduced pupil size, increased optical density of the ocular media, rod photoreceptor death, and delayed photopigment kinetics. The goal of the current research was to measure the retinal activity of normal younger and older adults using the electroretinogram (ERG) while controlling for optical changes (i.e., pupil size and ocular media density), so as to determine a neural contribution to the age-related scotopic vision loss. Both full-field and multifocal ERGs were recorded in order to understand the global and topographical changes, respectively, in the rod-mediated retina of older adults.


Patrick Emery

Synchronization of Circadian Clocks to Daily Environmental Cycles
Patrick Emery, University of Massachusetts Medical School (Neurobiology)
April 21 • 4:00 pm • Davidson Math & Science 105

Circadian clocks play the critical role of aligning most bodily functions - from basic metabolism to complex behaviors - with the time of day. These timekeepers need to be synchronized with the environment to be helpful. Therefore, they are able to respond to multiple inputs, such as light and temperature. Interestingly, in Drosophila, these two environmental cues can be detected in a cell-autonomous manner. The seminar will focus on these cell-autonomous photic and thermal sensing mechanisms, and how they converge on a single pacemaker protein, TIMELESS, to jointly entrain circadian clocks.



Robert Shapley

Roles of Cortical Single- and Double-Opponent Cells in Color Vision
Robert Shapley, New York University (Center for Neural Science)
April 15 • 3:00 pm • William J. Raggio Building 2030

Color and form interact in visual perception. We will consider the neural mechanisms in the visual cortex that are the basis for color-form interactions.


Ione Fine

Pulse Trains to Percepts: The Challenge of Creating a Perceptually Intelligible World with Sight Recovery Techniques
Ione Fine, University of Washington (Psychology)
April 8 • 11:30 am • Reynolds School of Journalism 101

An extraordinary variety of sight recovery therapies are either about to begin clinical trials, have begun clinical trials, or are currently being implanted in patients. However, as yet we have little insight into the perceptual experience likely to be produced by these implants. This review focuses on methodologies, such as optogenetics, small molecule photoswitches and electrical prostheses, which use artificial stimulation of the retina to elicit percepts. For each of these technologies, the interplay between the stimulating technology and the underlying neurophysiology is likely to result in distortions of the perceptual experience. Here, we simulate some of these potential distortions and discuss how they might be minimized either through changes in the encoding model or through cortical plasticity.


Vicki Volbrecht

Color Vision in the Peripheral Retina
Vicki Volbrecht, Colorado State University (Psychology)
April 1 • 3:00 pm • William J. Raggio Building 2030

The study of peripheral color vision presents challenges due to the presence of both rods and cones in the peripheral retina. As many studies have shown, color perception in the peripheral retina differs from color perception in the fovea, and it also varies across the peripheral retina with retinal eccentricity and location. These differences are not surprising given the changes in the retinal mosaic with retinal eccentricity and location. Despite these differences, though, when viewing objects in everyday life that cover both the foveal and peripheral retina, we perceive a uniform color percept. Similarly, when an object is viewed binocularly along the horizontal meridian in the peripheral retina, the color percept is uniform even though the object falls on the temporal retina of one eye and the nasal retina of the other eye; viewed monocularly, the color of the object may differ depending on whether it falls on the temporal or nasal retina. Recently, our laboratory has been investigating some of these issues as they relate to peripheral color vision and uniform color percepts across the differing retinal mosaic.


Benoit Bruneau

Transcriptional Regulation of Heart Development and Chromatin Structure
Benoit Bruneau, UCSF (Gladstone Institute of Cardiovascular Disease)
March 10 • 4:00 pm • William J. Raggio Building 3005

Complex networks of transcription factors regulate cardiac cell fate and morphogenesis, and dominant mutations in transcription factor genes lead to most instances of inherited congenital heart defects (CHDs). The mechanisms underlying CHDs that result from these mutations are not known, but regulation of gene expression within a relatively narrow developmental window is clearly essential for normal cardiac morphogenesis. We have detailed the interactions between CHD-associated transcription factors, their interdependence in regulating cardiac gene expression and morphogenesis, and their function in establishing early cardiac lineage boundaries that are disrupted in CHD. We have also delineated an essential role for CTCF in regulating genome-wide three-dimensional chromatin organization.


Stacey Tovino

Health Law Implications of Advances in Neuroscience, Including Neuroimaging
Stacey Tovino, University of Nevada, Las Vegas (William S. Boyd School of Law)
March 4 • 3:00 pm • Mathewson-IGT Knowledge Center 124

Within the overlapping fields of neurolaw and neuroethics, scholars have given significant attention to the implications of advances in neuroscience for issues in criminal law, criminal procedure, constitutional law, law and religion, tort law, evidence law, confidentiality and privacy law, protection of human subjects, and even the regulation of neuroscience-based technologies. Less attention has been paid, however, to the implications of advances in neuroscience for more traditional civil and regulatory health law issues. In this presentation, I will examine the ways in which neuroscience impacts four different areas within civil and regulatory health law, including mental health parity law and mandatory mental health and substance use disorder law, public and private disability benefit law, disability discrimination law, and professional discipline. In some areas, especially mental health parity law and mandatory mental health and substance use disorder benefit law, advances in neuroscience have positively impacted health insurance coverage. In other areas, including disability discrimination law, the impact has not been as significant.


Steven Shevell

Perceptual Resolution of Color with Ambiguous Chromatic Neural Representations
Steven Shevell, University of Chicago (Psychology)
Feb 26 • 11:30 am • Reynolds School of Journalism 101

Our ability to see in the natural world depends on the neural representations of objects. Signals sent from the eye to the brain are the basis for what we see, but these signals must be transformed from the image-based representation of light in the eye to an object-based representation of edges and surfaces. A challenge for understanding this transformation is the ambiguous nature of the image-based representation from the eye. Textbook examples demonstrate this ambiguity using a constant retinal image that causes fluctuation between two different bi-stable percepts (as in the face-or-vase illusion, or a Necker cube that switches between two orientations). Bi-stable colors also can be experienced with ambiguous chromatic neural representations. Recent experiments (1) generate ambiguous chromatic neural representations that result in perceptual bi-stability alternating between two colors, (2) reveal that two or more distinct objects in view, each with its own ambiguous chromatic representation, often have the same color, which shows that grouping is a key aspect of resolving chromatic ambiguity, and (3) show that grouping survives even with unequal temporal properties among the separate ambiguous representations, as predicted by a model of binocularly integrated visual competition.


Alice O'Toole

Understanding Person Recognition: Psychological, Computational, & Neural Perspectives
Alice O'Toole, University of Texas at Dallas (School of Behavioral and Brain Sciences)
Feb 19 • 11:30 am • Reynolds School of Journalism 101

Over the past decade, face recognition algorithms have shown impressive gains in performance, operating under increasingly unconstrained imaging conditions. It is now commonplace to benchmark the performance of face recognition algorithms against humans and to find conditions under which the machines perform more accurately than humans. I will present a synopsis of human-machine comparisons that we have conducted over the past decade, in conjunction with U.S. Government-sponsored competitions for computer-based face recognition systems. From these comparisons, we have learned much about human face recognition, and even more about person recognition. These experiments have led us to examine the neural responses in face- and body-selective cortical areas during person recognition in natural viewing conditions. I will describe the neuroimaging findings and conclude that human expertise for "face recognition" is better understood in the context of the whole person in motion, where the body and gait provide valuable identity information that supplements the face in poor viewing conditions.


Delwin Lindsey

Color Naming, Color Communication and the Evolution of Basic Color Terms
Delwin Lindsey, Ohio State University (Psychology)
Feb 19 • 12:30 pm • Reynolds School of Journalism 101

The study of the language of color is implicitly based on the existence of a shared mental representation of color within a culture. Berlin & Kay (1969) proposed that the great cross-cultural diversity in color naming occurs because different languages are at different stages along a constrained trajectory of color term evolution. However, most pre-industrial societies show striking individual differences in color naming (Lindsey & Brown, 2006, 2009). We argue that within-language diversity is not entirely lexical noise. Rather, it suggests a fundamental mechanism for color lexicon change. First, the diverse color categories (including some that do not conform to classical universal categories) observed within one society are often similar to those seen in people living in distant societies, on different continents, and speaking completely unrelated languages. Second, within-culture consensus is often low, either due to synonymy or to variation in the number and/or structure of color categories. Next, we introduce an information-theoretic analysis based on mutual information, and analyze within-culture communication efficiency across cultures. Color communication in Hadzane, Somali, and English provides insight into the structure of the lexical signals and noise in world languages (Lindsey et al., 2015). These three lines of evidence suggest a new view of color term evolution. We argue that modern lexicons evolved, under the guidance of universal perceptual constraints, from initially sparse (Levinson, 2000), distributed representations that mediate color communication poorly, to more complete representations, with high consensus color naming systems capable of mediating better color communication within the language community.
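The mutual-information measure of color communication described in the abstract can be sketched in a few lines. This is a minimal illustration of the general idea, not the authors' actual analysis: the `mutual_information` helper and the toy (color, term) naming events are hypothetical, but the quantity computed is the standard mutual information, in bits, between the colors presented and the terms speakers produce.

```python
import math
from collections import Counter

def mutual_information(pairs):
    """Mutual information (bits) between presented colors and naming terms,
    estimated from a list of (color, term) naming events."""
    n = len(pairs)
    joint = Counter(pairs)                 # counts of (color, term) pairs
    colors = Counter(c for c, _ in pairs)  # marginal counts of colors
    terms = Counter(t for _, t in pairs)   # marginal counts of terms
    mi = 0.0
    for (c, t), k in joint.items():
        # p(c,t) * log2( p(c,t) / (p(c) p(t)) ), with counts cancelled into n
        mi += (k / n) * math.log2(k * n / (colors[c] * terms[t]))
    return mi

# A lexicon where each color has its own consensus term communicates more
# than one where a single term covers every color (hypothetical data).
consensus = [("red", "pya"), ("blue", "siri")] * 50
shared = [("red", "pya"), ("blue", "pya")] * 50
```

On this toy data the consensus lexicon yields 1 bit of information per naming event, while the shared-term lexicon yields 0 bits, mirroring the abstract's contrast between high- and low-consensus naming systems.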


Angela Brown

Critical Immaturities Limiting Infant Visual Sensitivity
Angela Brown, Ohio State University (Optometry)
Feb 19 • 1:00 pm • Reynolds School of Journalism 101

The vision of the human infant is remarkably immature: visual sensitivity to light is low, contrast sensitivity is poor, visual acuity is poor, color vision is poor, vernier acuity is poor, and stereopsis is probably not possible until the infant is several months old. The visual system of the human infant is known to be biologically immature as well: the photoreceptors, especially the foveal cones, are morphologically immature, and myelination of the ascending visual pathway is not complete at birth. Also, the infant is cognitively immature; for example, the infant's attention span is short. In this talk, I will unite these immaturities into a single picture of the infant visual system: the main critical immaturity that limits infant visual performance on these psychophysical tasks is a large amount of contrast-like noise that is added linearly to the visual signal, after the sites of visual light adaptation, but before the sites of visual contrast adaptation, and likely in the retina or ascending visual pathway.


Fall 2015

Steve Luck

Neural Mechanisms of Distractor Suppression
Steve Luck (UC Davis, Center for Mind & Brain)
December 10 • 12:00 pm • Joe Crowley Student Union Theater

The natural visual input is enormously complex, and mechanisms of attention are used to focus neural processing on a subset of the input at any given time. But how does the brain decide which inputs to process and which to ignore? Some researchers have proposed that bottom-up salience is initially used to control attention, with top-down control emerging gradually over time. Others have proposed that top-down, prefrontal control mechanisms are completely responsible for the guidance of attention, with no role for bottom-up salience. In this talk, I will describe recent electrophysiological and psychophysical studies that support a hybrid theory, in which bottom-up salience signals are present but can be actively suppressed by a specialized neural mechanism before they can capture attention. This same mechanism also appears to be used to terminate the orienting of attention to an object after perceptual processing of that object is complete.


Joanna Chiu

Interplay between posttranslational modifications regulates the animal circadian clock
Joanna Chiu (UC Davis, Department of Entomology and Nematology)
December 10 • 3:00 pm • Davidson Math and Science Center 103

Circadian clocks regulate molecular oscillations that manifest into physiological and behavioral rhythms in all kingdoms of life. A long-term goal of my laboratory is to dissect the molecular network and cellular mechanisms that control the circadian oscillator in animals, and investigate how this molecular oscillator interacts with the environment and cellular metabolism to drive rhythms of physiology and behavior. Given the similarities in design principle of circadian oscillators across kingdoms, the knowledge gained from studies using Drosophila melanogaster as a model will lead to a better universal understanding of circadian oscillator function and properties. In this presentation, I will discuss the contribution of protein posttranslational modifications (PTMs) in regulating circadian rhythm by focusing on analyzing PTMs of key transcription factors such as PERIOD (PER), a key biochemical timer of clockwork. My laboratory has recently optimized the PTM profiling of circadian proteins in vivo using affinity purification and mass spectrometry. This breakthrough allows us to follow the temporal multi-site PTM program of PER and other clock proteins in vivo at physiological conditions throughout the circadian day in a high throughput and quantitative manner, and sets the stage for understanding how the PTM programs of clock proteins, and hence clock function, are modulated by genetic, physiological, and environmental factors.


Pablo de Gracia

Optimizing monovision and multifocal corrections
Pablo de Gracia (Barrow Neurological Institute)
December 4 • 11:30 am • Reynolds School of Journalism 101

In this talk we will explain how multiple-zone multifocal designs can be used to further optimize the optical performance of modified monovision corrections. Combinations of bifocal and trifocal designs lead to higher values of optical quality (5%) and through-focus performance (35%) than designs with spherical aberration. For any given amount of optical disparity that the presbyopic patient feels comfortable with, there is a combination of a monofocal and a bi/trifocal design that offers better optical performance than a design with spherical aberration. Conventional monovision can be improved by using the bifocal and trifocal designs that can be implemented in laser in situ keratomileusis (LASIK) equipment and will soon be available to the practitioner in the form of new multifocal contact and intraocular lenses.


David Peterzell

Discovering Sensory Processes Using Individual Differences: A Review and Factor Analytic Manifesto
David Peterzell (John F. Kennedy University, College of Graduate and Professional Studies - Clinical Psychology)
November 20 • 11:30 am • Reynolds School of Journalism 101

In the last century, many vision scientists have considered individual variability in data to be "error," thus overlooking a trove of systematic variability that reveals sensory, cognitive, neural and genetic processes. This "manifesto" coincides with both long-neglected and recent prescriptions of a covariance-based methodology for vision (Thurstone, 1944; Pickford, 1951; Peterzell, Werner & Kaplan, 1993; Peterzell & Teller, 1996; Kosslyn et al. 2002; Wilmer, 2008; Wilmer et al. 2012; de-Wit & Wagemans, 2015). But the emphasis here is on using small samples to both discover and confirm characteristics of visual processes, and on reanalyzing archival data. This presentation reviews 220 years of sporadic and often neglected research on normal individual variability in vision (including 25+ years of my own research). It reviews how others and I have harvested covariance to a) develop computational models of structures and processes underlying human and animal vision, b) analyze and delineate the developing visual system, c) compare typical and abnormal visual systems, d) relate visual behavior, anatomy, physiology and molecular biology, e) interrelate sensory processes and cognitive performance, and f) develop efficient (non-redundant) tests. Some examples are from my factor-analytic research on spatiotemporal, chromatic, stereoscopic, and attentional processing.
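The covariance logic behind the "manifesto" above can be illustrated with a toy simulation. This is a hypothetical sketch, not the speaker's analysis: the sample size, conditions, and factor loadings below are invented, but it shows the core move of treating individual variability as signal, with two latent factors recovered from the correlation matrix across observers.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: 200 observers x 6 contrast-sensitivity conditions.
# Low-spatial-frequency conditions load on one latent factor and
# high-spatial-frequency conditions on another, plus measurement noise.
n_obs = 200
low_factor = rng.normal(size=(n_obs, 1))
high_factor = rng.normal(size=(n_obs, 1))
loadings_low = np.array([[1.0, 0.9, 0.8, 0.0, 0.0, 0.0]])
loadings_high = np.array([[0.0, 0.0, 0.0, 0.8, 0.9, 1.0]])
data = (low_factor @ loadings_low + high_factor @ loadings_high
        + 0.3 * rng.normal(size=(n_obs, 6)))

# Eigendecomposition of the between-observer correlation matrix: with two
# latent factors, two eigenvalues dominate the spectrum.
corr = np.corrcoef(data, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]
var_explained = eigvals[:2].sum() / eigvals.sum()
```

If the individual differences were mere "error," the eigenvalue spectrum would be flat; here the first two components account for most of the variance, which is the kind of structure the factor-analytic approach harvests.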


Jack Gallant

Mapping, Modeling and Decoding the Human Brain Under Naturalistic Conditions
Jack Gallant (University of California, Berkeley, Helen Wills Neuroscience Institute)
November 13 • 3:00 pm • Jot Travis 100

One important goal of Psychology and Neuroscience is to understand the mental and neural basis of natural behavior. This is a challenging problem because natural behavior is difficult to parameterize and measure. Furthermore, natural behavior often involves many different perceptual, motor and cognitive systems that are distributed broadly across the brain. Over the past 10 years my laboratory has developed a new approach to functional brain mapping that recovers detailed information about the cortical maps mediating natural behavior. We first use functional MRI to measure brain activity while participants perform natural tasks such as watching movies or listening to stories. We then model brain activity using quantitative computational models derived from computational neuroscience or machine learning. Interpretation of the fit models reveals how many different kinds of sensory and cognitive information are represented in systematic maps distributed across the cerebral cortex. Our results show that even simple natural behaviors involve dozens or hundreds of distinct functional gradients and areas; that these are organized similarly in the brains of different individuals; and that top-down mechanisms such as attention can change these maps on a very short time scale. These statistical modeling tools provide powerful new methods for mapping the representation of many different perceptual and cognitive processes across the human brain, and for decoding brain activity.


G. Christopher Stecker

Spatial hearing and the brain: Assembling binaural information to understand auditory space
G. Christopher Stecker (Vanderbilt University School of Medicine, Department of Hearing and Speech Sciences)
September 25 • 11:00 am • Jot Travis 100

Spatial hearing by human listeners requires access to auditory spatial cues, including interaural time differences (ITD) and interaural level differences (ILD), in the sound arriving at the two ears. For real sounds, these cues are distributed across time and frequency, and often distorted in complex ways by echoes and reverberation. Nevertheless, young normal-hearing listeners are remarkably good at localizing sounds and understanding the auditory scene, even in acoustically complex environments. In this talk, we will discuss (1) how listeners weight and combine auditory spatial cues across cue type, time, and frequency; (2) how that ability relates to the consequences of reverberation, hearing loss, and hearing-aid technology on spatial hearing; and (3) what neuroimaging with fMRI can tell us about the neural mechanisms that process auditory spatial cues and represent the auditory scene.


Spring 2015

Bruno Rossion

Understanding face perception with fast periodic visual stimulation
Bruno Rossion (Catholic University of Louvain, Belgium, Psychological Sciences Research Institute)
May 26 • 1:00 pm • Reynolds School of Journalism 101

When the human brain is stimulated at a rapid periodic rate, it synchronizes its activity exactly to this frequency, leading to periodic responses recorded by the electroencephalogram (EEG). In vision, periodic stimulation has been used primarily to investigate low-level processes and attention, and has recently been extended to understand high-level visual processes, in particular face perception (Rossion & Boremanse, 2011). In this presentation, I will summarize a series of studies carried out over the last few years that illustrate the strengths of this approach: the objective (i.e., exactly at the experimentally-defined frequency rate) definition of neural activity related to face perception, the very high signal-to-noise ratio, the independence from explicit behavioral responses, and the identification of perceptual integration markers. Overall, fast periodic visual stimulation is a highly valuable approach for understanding the sensitivity to visual features of complex visual stimuli and their integration, in particular for individual faces, and in populations presenting lower sensitivity of their brain responses and/or requiring rapid and objective assessment without explicit behavioral responses (e.g., infants and children, clinical populations, animals).


Olivier Collignon

Brain plasticity underlying sight deprivation and restoration: A complex interplay
Olivier Collignon (University of Trento, Italy, Center for Mind/Brain Sciences)
May 22 • 11:00 am • Reynolds School of Journalism 101

Neuroimaging studies involving blind individuals have the potential to shed new light on the old ‘nature versus nurture' debate on brain development: while the recruitment of occipital (visual) regions by non-visual inputs in blind individuals highlights the ability of the brain to remodel itself through experience (nurture), the observation of specialized cognitive modules in the reorganized occipital cortex of blind individuals, similar to those observed in the sighted, highlights the intrinsic constraints imposed on such plasticity (nature). In the first part of my talk, I will present novel findings demonstrating how early blindness induces a large-scale imbalance between the sensory systems involved in the processing of auditory motion.

These reorganizations in the occipital cortex of blind individuals raise crucial challenges for sight restoration. Recently, we had the unique opportunity to track the behavioral and neurophysiological changes taking place in the occipital cortex of an early and severely visually impaired patient before, as well as 1.5 and 7 months after, sight restoration. An in-depth study of this exceptional patient highlighted the dynamic nature of the occipital cortex facing visual deprivation and restoration. Finally, I will present data demonstrating that even a short and transient period of visual deprivation (only a few weeks) during the early sensitive period of brain development leads to enduring large-scale crossmodal reorganization of the brain circuitry typically dedicated to vision, even years after visual input is restored.


Charlotte DiStefano

Understanding kids who don’t talk: Using EEG to measure language in minimally verbal children with ASD
Charlotte DiStefano (UCLA, Center for Autism Research and Treatment)
May 8 • 4:00 pm • Mathewson-IGT Knowledge Center 107

Approximately 30% of children with autism spectrum disorder (ASD) remain minimally verbal past early childhood. These children may have no language at all, or may use a small set of words and fixed phrases in limited contexts. Although very impaired in expressive language, minimally verbal children with ASD may present with significant heterogeneity in receptive language and other cognitive skills. Accurately measuring these skills presents a challenge, due to the limitations in how well these children are able to understand and comply with assessment instructions. Recently, there has been increased interest in using passive, or implicit, measures when studying such populations, since they do not require the child to make overt responses or even understand the task. One such measure is electroencephalography (EEG), which records electrical activity within the brain and provides information about processing in real time. EEG recordings can also be used to evaluate event-related potentials (ERPs), which are measurements of the brain's electrical activity in response to a specific stimulus (such as a word or a picture). We can then use this information to understand more about an individual's cognitive development, improving our ability to develop targeted interventions. We have so far collected EEG and ERP measures in minimally verbal children with ASD across a variety of domains, including resting state, visual statistical learning, face processing, word segmentation and lexical processing. These data, along with careful behavioral assessments, have led us to a greater understanding of the heterogeneity within the minimally verbal group, as well as how these children differ from verbal children with ASD and typically developing children.


Noelle L'Etoile

Endogenous RNAi and behavior in C. elegans
Noelle L’Etoile (UCSF, Department of Cell and Tissue Biology)
April 30 • 4:00 pm • Ansari Business 106

My group's goal is to understand how molecules, cells, circuits and the physiology of an intact organism work together to produce learned and inherited behaviors. We combine the powerful genetics and accessible cell biology of the nematode C. elegans with its robust behaviors to approach this question. I will discuss our findings that, within the sensory neuron, small endogenous RNAs (endo-siRNAs) provide some of the plasticity of the olfactory response. The biogenesis of these small RNAs is as mysterious as their regulation by experience, and I will describe our attempts to understand each process. Within the circuit, I will touch upon how we are examining synaptic remodeling in development and in the adult animal as it adapts to novel stimuli and metabolic stress. The optical transparency of C. elegans provides a unique window into the real-time dynamics of circuits. To take advantage of this, we are developing visual reporters for simultaneous imaging of several aspects of neuronal physiology: calcium transients, pH fluctuations, cGMP and cAMP fluxes, and chromatin dynamics within the entire nervous system of the living, behaving animal. I will also present some of our recent findings that may link experience to inherited behaviors.


Nancy Xu

New Tools for Real-time Imaging of Single Live Cells 
Nancy Xu (Old Dominion University, Chemistry and Biochemistry)
April 30 • 1:00 pm • Davidson Math and Science 105

Current technologies cannot detect, image and study multiple types of molecules in single live cells in real time, with sufficient spatial and temporal resolution, over an extended period of time. To better understand cellular function in real time, we have developed several new ultrasensitive nanobiotechnologies, including far-field photostable-optical-nanoscopy (PHOTON), photostable single-molecule-nanoparticle-optical-biosensors (SMNOBS) and single nanoparticle spectroscopy, for mapping dynamic cascades of membrane transport and signaling transduction pathways of single live cells in real time at single-molecule and nanometer resolution. We have demonstrated that these powerful new tools can be used to quantitatively image single molecules, to study their functions in single live cells with superior temporal and spatial resolution, and to address a wide range of crucial biochemical and biomedical questions. The research results and experimental designs will be discussed in this seminar.


Libby Huber

Auditory perception and cortical plasticity after long-term blindness
Libby Huber (University of Washington, Vision and Cognition Group)
March 24 • 1:00 pm • Reynolds School of Journalism 101

Early onset blindness is associated with enhanced auditory abilities, as well as plasticity within auditory and occipital cortex. In particular, pitch discrimination is found to be superior among early-blind individuals, although the neural basis of this enhancement is unclear. In this talk, I will present recent work suggesting that blindness results in an increased representation of behaviorally relevant acoustic frequencies within both auditory and occipital cortex. Moreover, we find that individual differences in pitch discrimination performance can be predicted from the cortical data. The functional significance of group and individual level differences in frequency representation will be discussed, along with the relative importance of auditory and occipital cortical responses for acoustic frequency discrimination after long-term blindness.


Talia Retter

At face value: An introduction to fast periodic visual stimulation
Talia Retter (Catholic University of Louvain, Belgium, Psychological Sciences Research Institute)
March 12 • 1:00 pm • Reynolds School of Journalism 101

Fast periodic visual stimulation (FPVS) is a technique in which the presentation of stimuli at a constant rate elicits a neural response at that frequency, typically recorded with electroencephalography (EEG). A Fourier Transform is applied to the EEG data to objectively characterize this response at a pre-determined frequency of interest. Although this technique has traditionally been applied to study low-level vision, it has recently been developed to implicitly measure high-level processes in the field of face perception. In the Face Categorization Lab at the University of Louvain, FPVS has been used to study individualization of facial identities (e.g., Liu-Shuang et al., 2014) and the discrimination of faces from other object categories (e.g., Rossion et al., 2015). During my time in this lab, I have run FPVS experiments examining: 1) category-selective responses to natural face and non-face images; 2) the spatio-temporal dynamics of face-selective responses; and 3) adaptation to a specific facial identity. The results of these studies will be discussed both in light of their implications for our understanding of face perception and, more generally, as examples of the richness of this methodology for understanding high-level vision in humans.
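The frequency-tagging logic behind FPVS can be sketched with a toy example: a signal containing a response at the stimulation frequency is Fourier-transformed, and the amplitude in the tagged frequency bin is compared against neighboring bins. All numbers below (stimulation rate, sampling rate, noise level) are illustrative placeholders, not parameters from the talk:

```python
import numpy as np

# Illustrative parameters: 6 Hz stimulation, 512 Hz sampling, 10 s of
# synthetic "EEG" (a sine at the tagged frequency plus white noise).
fs, f_stim, dur = 512, 6.0, 10.0
t = np.arange(int(fs * dur)) / fs
rng = np.random.default_rng(0)
eeg = 2.0 * np.sin(2 * np.pi * f_stim * t) + rng.normal(0.0, 1.0, t.size)

# Fourier Transform: amplitude spectrum with a bin resolution of 1/dur Hz.
amps = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

# Signal-to-noise ratio at the tagged frequency: amplitude in the target
# bin divided by the mean amplitude of surrounding bins.
target = int(np.argmin(np.abs(freqs - f_stim)))
neighbours = np.r_[target - 12:target - 2, target + 3:target + 13]
snr = amps[target] / amps[neighbours].mean()
print(f"SNR at {freqs[target]:.1f} Hz: {snr:.1f}")
```

Because the periodic response is confined to a known frequency bin while noise spreads across the spectrum, even a modest response yields a large SNR, which is what makes the measure "objective": the frequency of interest is fixed in advance by the stimulation rate.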


Lara Krisst

Introspections about Visual Sensory Memory During the Classic Sperling Iconic Memory Task
Lara Krisst (San Francisco State University, Mind, Brain, & Behavior)
March 12 • 10:00 am • Reynolds School of Journalism 101

Visual sensory memory (or ‘iconic memory’) is a fleeting form of memory investigated by the classic Sperling (1960) iconic memory task. Sperling demonstrated that ‘more is seen than can be remembered’: more information is available to observers than they can normally report. He established the distinction between ‘whole report’ (response to a stimulus set of 12 letters) and ‘partial report’ (response to a cued row of letters within the set). In the whole-report condition, participants could report only three to five of the 12 letters presented; however, their high accuracy across partial-report trials revealed that, on a given trial, information about the complete stimulus set is held momentarily in a sensory store. This finding demonstrates that subjects perceived more than they were able to report. In a new variant of the paradigm, we investigated participants' trial-by-trial introspections about what they are, and are not, conscious of regarding these fleeting memories. Consistent with Sperling's findings, the data suggest that participants believe they could report, identify, or remember only a subset of items (~4 items). Further investigation with this paradigm, including examination of the neural correlates of the introspective process, may shed light on the neural correlates of visual consciousness.


Martha Merrow

The times of their lives: Developmental and circadian timing in C. elegans
Martha Merrow (Ludwig Maximilian University of Munich, Institute of Medical Psychology)
March 10 •  4:00 pm • Davidson Math and Science 102

Living organisms have developed a multitude of biological time-keeping mechanisms, from developmental to circadian (daily) clocks. Martha Merrow has been at the forefront of understanding the basic properties and molecular aspects of how the circadian clock synchronizes with environmental cues, from worms to yeast to fungi to humans. In addition to circadian clocks, she has been studying developmental clocks in worms and recently developed a new method to measure the timing of larval development, which could be used to measure sleep-like properties in worms. She started working on biological clocks as a Post-Doctoral Fellow at the Dartmouth Medical School, and is currently a Full Professor and Teaching Chair in the Institute of Medical Psychology at the Ludwig-Maximilians-Universität in Munich, Germany. Beyond her teaching and research, Martha also works on developing scientific networks for chronobiologists and for women in science.


John Serences

Attentional gain versus efficient selection: Evidence from human electroencephalography  
John Serences (UC San Diego, Psychology)
March 5 • 4:00 pm • Ansari Business 106

Selective attention has been postulated to speed perceptual decision-making via one of three mechanisms: enhancing early sensory responses, reducing sensory noise, and improving the efficiency with which sensory information is read out by sensorimotor and decision mechanisms (efficient selection). Here we use a combination of visual psychophysics and electroencephalography (EEG) to test these competing accounts. We show that focused attention primarily enhances the response gain of early and late stimulus-evoked potentials that peak in the contralateral posterior-occipital and central posterior electrodes, respectively. In contrast with previous reports that used fMRI, a simple model demonstrates that response enhancement alone is sufficient to account for attention-induced changes in behavior, even in the absence of efficient selection. These results suggest that spatial attention facilitates perceptual decision-making primarily by increasing the response gain of stimulus-evoked responses.
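The response-gain idea can be illustrated with a standard contrast-response (Naka-Rushton) function in which attention multiplicatively scales the evoked response. This is a generic textbook sketch, not the model from the talk, and all parameter values (`r_max`, `c50`, the exponent, the 1.3 gain factor) are arbitrary placeholders:

```python
import numpy as np

def contrast_response(c, r_max=1.0, c50=0.2, n=2.0):
    """Naka-Rushton contrast-response function (generic parameters)."""
    c = np.asarray(c, dtype=float)
    return r_max * c**n / (c**n + c50**n)

contrasts = np.linspace(0.01, 1.0, 100)
unattended = contrast_response(contrasts)

# Response gain: attention multiplies the evoked response by a fixed
# factor, scaling the whole curve upward rather than shifting its
# contrast threshold (which would instead be a change in c50).
attended = contrast_response(contrasts, r_max=1.3)
```

Under this kind of account, the attentional benefit is largest where the response itself is largest, which is the signature a response-gain model predicts in the evoked-potential amplitudes.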


April Schweinhart

Changing what you see changes how you see: Analyzing the plasticity of broadband orientation perception
April Schweinhart (University of Louisville, Psychological and Brain Sciences)
February 25 • 11:00 am • Reynolds School of Journalism 101

Schweinhart's work using augmented reality shows that changing the way certain features are presented in an observer's environment triggers predictable changes in subsequent perception. Traditionally, vision science examined the perception of stimulus features in isolation; more recently, researchers have begun to investigate the perception of such features in context. Consider, for example, the perception of oriented structure: incoming visual signals are processed by neurons tuned in both size and orientation at the earliest cortical levels of the visual hierarchy. Interestingly, the distribution of orientations in the environment is anti-correlated with human visual perception. Though this correspondence between typical natural-scene content and visual processing is compelling, until recently studies of the relationship between visual encoding and natural-scene regularities were necessarily static and correlational. This work takes the observer's recent experience into account to determine the plasticity of perceptual biases related to environmental regularities.


Fall 2014

Karl Deisseroth Portrait

Optical deconstruction of fully-assembled biological systems
Karl Deisseroth (Stanford, Bioengineering)
October 23 • 7:00 pm • Davidson Math and Science 110

The journal Nature dubbed Karl Deisseroth "Method Man" for two groundbreaking techniques developed in his lab, optogenetics and CLARITY. Both are game changers in the neuroscience world, revolutionizing the way scientists can study the brain. Optogenetics gives scientists the ability to turn neural activity on and off with light-driven switches. CLARITY turns a brain into a clear, Jell-O-like structure with all neurons intact, giving scientists an unprecedented view of the brain's molecules and cells. These tools allow neuroscientists to address fundamental questions about dynamic changes in brain structure and function. Deisseroth's group applies these strategies to better understand the biological basis for neurological and psychiatric diseases, and how the brain responds to learning, injury, and seizures.
Deisseroth serves on President Obama's BRAIN Initiative advisory committee. He is a member of the National Academy of Sciences and the recipient of dozens of prestigious national and international science awards.


Charlie Chubb Portrait

Cuttlefish Camouflage 
Charlie Chubb (UC Irvine, Cognitive Sciences)
August 29 • 3:00 pm • Ansari Business 106

Cephalopods (squid, octopus, and cuttlefish) have exceptional, neurophysiologically controlled skin that can rapidly change color, enabling them to achieve dynamic crypsis in a wide range of habitats. Chubb shows the range of camouflage patterns that cuttlefish (Sepia officinalis) produce and discusses some of the remarkably subtle strategies these patterns use to elude detection. The animals' patterning responses are controlled by the visual input they receive and are sensitive to the visual granularity of the stimulus substrate relative to their own body size.
A deep mystery remains unresolved: cuttlefish skin can produce four dimensions of chromatic variation, which the animals use to achieve masterful matches to the colors of substrates in their natural environment. Yet cuttlefish have only a single retinal photopigment; in other words, they are colorblind.


Spring 2014

Shinobu Kitayama

Cultural Neuroscience: Current Evidence and Future Prospect  
Shinobu Kitayama (University of Michigan, Psychology) 
April 18 • 3:00 pm • Mathewson-IGT Knowledge Center 124 (Wells Fargo Auditorium)

Cultural neuroscience is an emerging field that examines the interdependencies among culture, mind, and the brain. By investigating brain plasticity in varying social and ecological contexts, it seeks to overcome the nature-nurture dichotomy. In the present talk, after a brief overview of the field, I will illustrate its potential by reviewing evidence for cultural variations in brain mechanisms underlying cognition (i.e., holistic attention), emotion (i.e., emotion regulation), and motivation (i.e., self-serving bias). Directions for future research will be discussed.


Mel Goodale and Brian Bushway

Human echolocation: How the blind use tongue-clicks to navigate the world
Mel Goodale (University of Western Ontario, the Brain and Mind Institute) and Brian Bushway (World Access for the Blind)
March 13 • 7:00 pm • Davidson Math and Science 110

"I can hear a building over there." Everybody has heard about echolocation in bats and dolphins. These creatures emit bursts of sound and listen to the echoes that bounce back to detect objects in their environment. What is less well known is that people can echolocate, too. In fact, there are blind people who have learned to make clicks with their mouth and tongue, and to use the returning echoes from those clicks to sense their surroundings. Some of these people are so adept at echolocation that they can use this skill to go mountain biking, play basketball, or navigate through unfamiliar buildings. In this talk, we will learn about several of these echolocators, some of whom train other blind people to use this amazing skill. Testing in our laboratory has revealed that, by listening to the echoes, blind echolocation experts can sense remarkably small differences in the location of potential obstacles. They can also perceive the size and shape of objects, and even the material properties of those objects, just by listening to the reflected echoes from mouth clicks. It is clear that echolocation enables blind people to do things that are otherwise thought to be impossible without vision, providing them with a high degree of independence in their daily lives. Using neuroimaging (functional magnetic resonance imaging, or fMRI), we have also shown that the echoes activate brain regions in the blind echolocators that would normally support vision in the sighted brain. In contrast, the brain areas that process auditory information are not particularly interested in these faint echoes. This work is shedding new light on just how plastic the human brain really is.

About Melvyn Goodale: One of the world's leading visual neuroscientists, Melvyn Goodale is best known for his research on how the human brain performs different kinds of visual tasks. Goodale has led extensive neuroimaging and psychophysical research that has had an enormous influence in the life sciences and medicine. His "two-visual-systems proposal" is now part of almost every textbook in vision, cognitive neuroscience, and psychology. He is a member of the Royal Society, joining the likes of Sir Isaac Newton, Charles Darwin, Albert Einstein, and Stephen Hawking.

About Brian Bushway: Brian is the program manager for World Access for the Blind, a non-profit organization which teaches mobility and sensory awareness orientation. He acts as a mobility coach for the blind and a teacher of sighted mobility instructors on the use of echolocation. He designs and implements perception development plans for each client. When not teaching, Brian offers technical and emotional advice to families. He lost his sight at 14.



Peter Tse

Chunking of visual features in space and time: Behavioral and neuronal mechanisms 
Peter Tse (Dartmouth, Psychological and Brain Sciences)
March 10 • 4:00 pm • Davidson Math and Science 104

We can learn arbitrary feature conjunctions when the to-be-combined features are present at the same time (Wang et al., 1994). This learning is underpinned by increased activity in visual cortex (Frank et al., 2013). I will discuss data that suggest that this kind of feature-conjunction perceptual learning requires attention, is not strongly retinotopic, and can even link features that do not appear at the same time.


Ioulia Kovelman

Building a Vision: Shared Multimodal Pediatric fNIRS Brain Imaging Facility at the University of Michigan 
Ioulia Kovelman (University of Michigan, Psychology)
February 18 • 4:00 pm • Mathewson-IGT Knowledge Center 124 (Wells Fargo Auditorium)

Kovelman's research interests are in language and reading development in monolingual and bilingual infants, children, and adults. Her work includes both typical and atypical language and reading development, studied with a variety of behavioral and brain-imaging methods (fMRI, fNIRS).


David Raizen

Using the worm to catch Z's: somnogen discovery in C. elegans 
David Raizen (University of Pennsylvania, Neurology)
February 7 • 11:00 am • Davidson Math and Science 104 

Quiescent behavioral states are universal in the animal world, the most famous and mysterious of these being sleep. Despite the fact that we spend one third of our lives sleeping, and that all animals appear to sleep, the core function of sleep remains a mystery. In addition, the molecular basis of sleep/wake regulation is poorly understood. Raizen uses C. elegans as a model system to address these questions. C. elegans offers many experimental advantages, including powerful genetic tools and a simple neuroanatomy. Growth of C. elegans from embryo to adult is punctuated by four molts, during which the animal secretes a new cuticle and sheds its old one. Prior to each molt, the worm enters a quiescent behavioral state called lethargus. Lethargus has several similarities to sleep, including rapid reversibility in response to strong stimulation, an increased sensory arousal threshold, and homeostasis, manifested as an increased depth of sleep following a period of deprivation. Similarity to sleep at the molecular genetic level is demonstrated by the identification of signaling pathways that regulate C. elegans lethargus in a fashion similar to their regulation of sleep in mammals and arthropods. For example, cAMP signaling promotes wakefulness and epidermal growth factor signaling promotes sleep in C. elegans and other organisms. The Raizen lab has identified new regulators of sleep-like behavior in C. elegans and is currently studying how they act. By studying the purpose and genetic regulation of nematode lethargus, the lab hopes to identify additional novel sleep regulators and to gain insight into why sleep and sleep-like states evolved, a central biological mystery.


Fall 2013

Theodore Huppert

Introduction to Functional Near-Infrared Spectroscopy (fNIRS)
Theodore Huppert (University of Pittsburgh, Radiology)
December 10 • 2:30 pm • Ansari Business 107

In this talk, Dr. Huppert will present the background theory behind fNIRS brain imaging. He will also introduce the basic concepts of fNIRS data collection, analysis, and interpretation.

Illuminating the Mind: Applications and Challenges for fNIRS
December 11 • 2:30 pm • Ansari Business 107

Functional near-infrared spectroscopy (fNIRS) is a non-invasive brain imaging technique that uses light to record changes in cerebral blood flow. This technology has several unique advantages including low cost, portability, and versatility which have opened several new areas of brain imaging research. In this talk, Dr. Huppert will present an overview of some of these novel applications for fNIRS technology that are being conducted at the University of Pittsburgh, including brain imaging of balance and mobility disorders, child and infant psychology, and multimodal neuroimaging. He will also discuss some of the unique challenges of using fNIRS in "real-world" brain imaging experiments.
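The core computation behind fNIRS — converting measured light-attenuation changes at two wavelengths into hemoglobin-concentration changes via the modified Beer-Lambert law — can be sketched as a small linear system. The extinction coefficients, distances, and attenuation values below are illustrative placeholders, not calibrated constants:

```python
import numpy as np

# Modified Beer-Lambert law: dOD(wl) = (eps_HbO(wl)*dHbO + eps_HbR(wl)*dHbR) * d * DPF
# Measuring at two wavelengths gives a 2x2 linear system in the two
# chromophores (oxy- and deoxy-hemoglobin).
# Rows = wavelengths, columns = [eps_HbO, eps_HbR]; values are
# placeholders chosen so HbR dominates at the shorter wavelength and
# HbO at the longer one, as in real extinction spectra.
eps = np.array([[0.06, 0.14],   # ~760 nm
                [0.12, 0.08]])  # ~850 nm
d, dpf = 3.0, 6.0               # source-detector distance (cm), pathlength factor
dod = np.array([0.01, 0.02])    # measured optical-density changes (placeholder)

# Solve for the concentration changes [dHbO, dHbR].
dhbo, dhbr = np.linalg.solve(eps * d * dpf, dod)
print(f"dHbO = {dhbo:.4f}, dHbR = {dhbr:.4f}")
```

A typical functional activation appears as an increase in oxy-hemoglobin with a smaller decrease in deoxy-hemoglobin; the inversion above is the step that turns raw light measurements into those hemodynamic time courses.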


John Rothrock

Understanding Migraine: Genetics, Epigenetics and Receptor Sensitivity
John Rothrock (Renown Health Institute for Neurosciences)
November 5 • 2:30 pm • Center for Molecular Medicine 111

Despite its high prevalence, migraine remains poorly understood by the lay public and health care providers (HCPs) alike. Many migraine patients who seek medical attention are disappointed by the experience, and many HCPs feel at a loss when confronted by treatment-refractory patients. That migraine can be difficult to treat is hardly surprising. This common, easily recognized, and clinically stereotyped disorder is polygenic in origin, and the familiar symptoms of migraine consequently may be generated by a variety of biologic pathways. To complicate matters further, the clinical expression of migraine's genetic predisposition may be influenced by a number of factors, epigenetic and otherwise. Finally, migraine is comorbid with conditions and diseases that may complicate management of the headache disorder; these comorbidities include depression, bipolar disorder, anxiety disorders, sleep disorders, and epilepsy. Despite this, a better understanding of migraine's biogenesis has led to the development of new therapies relatively specific to the disorder and unprecedented in their efficacy.


Dragana Rogulja

Cell cycle genes repurposed as sleep factors
Dragana Rogulja (Harvard Medical School, Neurobiology)
October 18 • 11:00 am • Davidson Math and Science 104

A remarkable change occurs in our brains each night, making us lose the essence of who we are for hours at a time: we fall asleep. A process so familiar to us, sleep nevertheless remains among the most mysterious phenomena in biology. The goal of our work is to understand how the brain reversibly switches between waking and sleep states, and why we need to sleep in the first place. To address these questions, Rogulja uses Drosophila melanogaster as a model system, because sleep in the fly is remarkably similar to mammalian sleep. Flies have consolidated periods of activity and sleep; arousal threshold is elevated in sleeping flies; the brain's electrical activity differs between sleeping and awake flies. As in people, both circadian and homeostatic mechanisms provide input into the regulation of fly sleep: flies are normally active during the day and quiescent at night, but if deprived of sleep will show a consequent increase in "rebound" sleep, regardless of the time of day.


Alison Harris

HD-EEG Analysis Workshop
Alison Harris (Claremont McKenna College, Psychology)
October 18 • 10:00 am • Neuroimaging Core, Mack Social Science 412

Event-related brain dynamics of value and decision-making
October 18 • 3:30 pm • Ansari Business 101

From selecting a snack in the supermarket to allocating financial resources, our lives are filled with choices. Emerging research from human neuroimaging suggests that a common neural circuitry underlies such disparate decisions: in particular, the ventromedial prefrontal cortex (vmPFC) has been associated with subjective value across a wide variety of tasks and goods. However, due to the inherent limitations of hemodynamic measures, comparatively little is known about when and how the vmPFC computes value signals across the time course of a decision. Harris will discuss research exploiting the high temporal resolution and whole-brain coverage of event-related potentials (ERP) to examine the dynamic construction of value signals. Combined with advanced statistical and source-reconstruction techniques, this novel approach reveals that neural activity correlated with subjective preference emerges approximately 400 ms after stimulus onset, localized to regions including vmPFC. Reflecting the integration of sensory attribute information, activity in this time window is also modulated by top-down goals (e.g., weight loss) through connections with dorsolateral prefrontal cortex. Together these results highlight the utility of ERP in understanding the cortical dynamics of decision-making, providing a fuller picture of how neural signals of subjective value emerge in the time leading up to choice.