Fusion of visual and auditory stimuli during saccades: a Bayesian explanation for perisaccadic distortions, J Neurosci, 27 (32), 8525-8532.

Brief stimuli presented near the onset of saccades are grossly mislocalized in space. In this study, we investigated whether the Bayesian hypothesis of optimal sensory fusion could account for the mislocalization. We required subjects to localize visual, auditory, and audiovisual stimuli at the time of saccades (compared with an earlier-presented target). During fixation, vision dominates and spatially “captures” the auditory stimulus (the ventriloquist effect). But for perisaccadic presentations, auditory localization becomes more important, so the mislocalized visual stimulus is seen closer to its veridical position. The precision of the bimodal localization (as measured by localization thresholds or just-noticeable differences) was better than that for either the visual or the acoustic stimulus presented in isolation. Both the perceived position of the bimodal stimuli and the improved precision were well predicted by assuming statistically optimal, Bayesian-like combination of visual and auditory signals. Furthermore, the time course of localization was well predicted by the Bayesian approach. We present a detailed model that simulates the time-course data, assuming that perceived position is given by the sum of retinal position and a sluggish, noisy eye-position signal, obtained by optimally integrating the output of two populations of neural activity: one centered at the current point of gaze, the other centered at the future point of gaze.
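The statistically optimal combination rule invoked here is standard inverse-variance weighting of two Gaussian cues; a minimal sketch (the function name and numeric values below are illustrative, not taken from the paper):

```python
import math

def fuse(mu_v, sigma_v, mu_a, sigma_a):
    """Maximum-likelihood (flat-prior Bayesian) fusion of two Gaussian cues.

    Each cue is weighted in proportion to its reliability (inverse variance);
    the fused variance is always smaller than either single-cue variance,
    matching the improved bimodal precision reported in the abstract.
    """
    w_v = sigma_a**2 / (sigma_v**2 + sigma_a**2)  # weight on the visual cue
    w_a = 1.0 - w_v                               # weight on the auditory cue
    mu = w_v * mu_v + w_a * mu_a
    sigma = math.sqrt((sigma_v**2 * sigma_a**2) / (sigma_v**2 + sigma_a**2))
    return mu, sigma

# During fixation vision is precise (small sigma_v), so the fused percept
# sits near the visual location: the ventriloquist effect.
mu_fix, sigma_fix = fuse(mu_v=0.0, sigma_v=1.0, mu_a=5.0, sigma_a=4.0)

# Perisaccadically, visual precision degrades (large sigma_v), and the
# auditory cue pulls the percept back toward its veridical position.
mu_sac, sigma_sac = fuse(mu_v=0.0, sigma_v=4.0, mu_a=5.0, sigma_a=1.0)
```

With these illustrative numbers, the fixation estimate lies close to the visual position and the perisaccadic estimate close to the auditory one, while in both cases the fused uncertainty is below that of the better single cue.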

Abnormal adaptive face-coding mechanisms in children with autism spectrum disorder, Curr Biol, 17 (17), 1508-1512.

In low-level vision, exquisite sensitivity to variation in luminance is achieved by adaptive mechanisms that adjust neural sensitivity to the prevailing luminance level. In high-level vision, adaptive mechanisms contribute to our remarkable ability to distinguish thousands of similar faces [1]. A clear example of this sort of adaptive coding is the face-identity aftereffect [2, 3, 4, 5], in which adaptation to a particular face biases perception toward the opposite identity. Here we investigated face adaptation in children with autism spectrum disorder (ASD) by asking them to discriminate between two face identities, with and without prior adaptation to opposite-identity faces. The ASD group discriminated the identities with the same precision as did the age- and ability-matched control group, showing that face identification per se was unimpaired. However, children with ASD showed significantly less adaptation than did their typical peers, with the amount of adaptation correlating significantly with current symptomatology, and with the face aftereffects of children with elevated symptoms being only one third those of controls. These results show that although children with ASD can learn a simple discrimination between two identities, adaptive face-coding mechanisms are severely compromised, offering a new explanation for previously reported face-perception difficulties [6, 7, 8] and possibly for some of the core social deficits in ASD [9, 10].

Neural mechanisms for timing visual events are spatially selective in real-world coordinates, Nat Neurosci, 10 (4), 423-425.

It is generally assumed that perceptual events are timed by a centralized supramodal clock. This study challenges this notion in humans by providing clear evidence that visual events of subsecond duration are timed by visual neural mechanisms with spatially circumscribed receptive fields, localized in real-world, rather than retinal, coordinates.

Spatiotopic selectivity of BOLD responses to visual motion in human area MT, Nat Neurosci, 10 (2), 249-255.

Many neurons in the monkey visual extrastriate cortex have receptive fields that are affected by gaze direction. In humans, psychophysical studies suggest that motion signals may be encoded in a spatiotopic fashion. Here we use functional magnetic resonance imaging to study spatial selectivity in the human middle temporal cortex (area MT or V5), an area that is clearly implicated in motion perception. The results show that the response of MT is modulated by gaze direction, generating a spatial selectivity based on screen rather than retinal coordinates. This area could be the neurophysiological substrate of the spatiotopic representation of motion signals.

The contribution of prefrontal cortex to global perception, Exp Brain Res, 181 (3), 427-434.

Recent research suggests a role for top-down modulatory signals in perceptual processing, particularly in the integration of local elementary information to form a global holistic percept. In this study we investigated whether prefrontal cortex may be instrumental in this top-down modulation in humans. We measured detection thresholds for perceiving a circle defined by a closed chain of grating patches in 6 patients with prefrontal lesions, 4 control patients with temporal lesions, and 17 healthy control subjects. Performance of patients with prefrontal lesions was worse than that of patients with temporal lesions and normal controls when the patterns were sparse, requiring integration across relatively extensive regions of space, but similar to that of the control groups for denser patterns. The results clearly implicate the prefrontal cortex in the process of integrating elementary features into a holistic global percept when the elements do not form a “pop-out” display.

The effect of optokinetic nystagmus on the perceived position of briefly flashed targets, Vision Res, 47 (6), 861-868.

Stimuli flashed briefly around the time of an impending saccade are mislocalized in the direction of the saccade and also compressed towards the saccadic target. Similarly, targets flashed during pursuit eye movements are mislocalized in the direction of pursuit. Here, we investigate the effects of optokinetic nystagmus (OKN) on visual localization. Subjects passively viewed a wide-field drifting grating that elicited strong OKN, comprising the characteristic slow-phase tracking movement interspersed with corrective “saccade-like” fast-phase movements. Subjects reported the apparent position of salient bars flashed briefly at various positions on the screen. In general, bars were misperceived in the direction of the slow-phase tracking movement. Bars flashed around the onset of the fast-phase movements were subject to much less mislocalization, pointing to a competing shift in the direction of the fast phase, as occurs with saccades. However, as distinct from saccades, there was no evidence for spatial compression around the time of the corrective fast-phase OKN. The results suggest that OKN causes perceptual mislocalizations similar to those of smooth pursuit and saccades, but there are some differences in the nature of the mislocalizations, pointing to different perceptual mechanisms associated with the different types of eye movements.

The role of perceptual learning on modality-specific visual attentional effects, Vision Res, 47 (1), 60-70.

Morrone et al. [Morrone, M. C., Denti, V., & Spinelli, D. (2002). Color and luminance contrasts attract independent attention. Current Biology, 12, 1134-1137] reported that the detrimental effect on contrast discrimination thresholds of performing a concomitant task is modality specific: performing a secondary luminance task has no effect on colour contrast thresholds, and vice versa. Here we confirm this result with a novel task involving learning of spatial position, and go on to show that it is not specific to the cardinal colour axes: secondary tasks with red-green stimuli impede performance on a blue-yellow task and vice versa. We further show that the attentional effect can be abolished with continued training over 2-4 training days (2-20 training sessions), and that the effect of learning transfers to new target positions. Given this transfer, we discuss the possibility that V4 is a site of plasticity for both stimulus types, and that the separation is due to a luminance-colour separation within this cortical area.