Combining visual and auditory information, Prog Brain Res, (155), 243-258.

Robust perception requires that information from our five different senses be combined at some central level to produce a single unified percept of the world. Recent theory and evidence from many laboratories suggest that the combination does not occur in a rigid, hardwired fashion, but follows flexible, situation-dependent rules that allow information to be combined with maximal efficiency. In this review we discuss recent evidence from our laboratories investigating how information from the auditory and visual modalities is combined. The results support the notion of Bayesian combination. We also examine temporal alignment of auditory and visual signals, and show that perceived simultaneity does not depend solely on neural latencies, but involves active processes that compensate, for example, for the physical delay introduced by the relatively slow speed of sound. Finally, we go on to show that although visual and auditory information is combined to maximize efficiency, attentional resources for the two modalities are largely independent.
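
The "Bayesian combination" referred to here is commonly modelled as maximum-likelihood (inverse-variance-weighted) cue integration. The following is a minimal sketch of that rule, assuming independent Gaussian noise on each modality; the function name and example values are illustrative and not taken from the paper.

```python
import numpy as np

def mle_combination(x_vis, sigma_vis, x_aud, sigma_aud):
    """Inverse-variance-weighted (maximum-likelihood) combination of two cues.

    Assumes independent Gaussian noise on each modality: the combined estimate
    is the reliability-weighted average of the single-cue estimates, and its
    standard deviation is lower than that of either cue alone.
    """
    w_vis = (1 / sigma_vis**2) / (1 / sigma_vis**2 + 1 / sigma_aud**2)
    w_aud = 1.0 - w_vis
    x_hat = w_vis * x_vis + w_aud * x_aud
    sigma_hat = np.sqrt(1 / (1 / sigma_vis**2 + 1 / sigma_aud**2))
    return x_hat, sigma_hat

# Example: a reliable visual location estimate dominates a noisier auditory one
print(mle_combination(x_vis=0.0, sigma_vis=1.0, x_aud=5.0, sigma_aud=3.0))
# -> (0.5, 0.948...): the combined estimate is pulled only slightly toward audition
```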

Contour interactions between pairs of Gabors engaged in binocular rivalry reveal a map of the association field, Vision Res, 8-9 (46), 1473-1487.

A psychophysical study was conducted to investigate contour interactions (the ‘association field’). Two Gabor patches were presented to one eye, with random-dot patches in corresponding locations of the other eye so as to produce binocular rivalry. Perceptual alternations of the two rivalry processes were monitored continuously by observers and the two time series were cross-correlated. The Gabors were oriented collinearly, obliquely, or orthogonally, and spatial separation was varied; a parallel condition was also included. Correlation between the rivalry processes depended strongly on separation and relative orientation: correlation between adjacent collinear Gabors was near-perfect and decreased with spatial separation and as relative orientation departed from collinear. Importantly, variations in cross-correlation did not alter the rivalry processes themselves (average dominance duration, and therefore alternation rate, was constant across conditions). Instead, synchronisation of the rivalry oscillations accounts for the correlation variations: rivalry alternations were highly synchronised when contour interactions were strong and poorly synchronised when contour interactions were weak. The level of synchrony between these two stochastic processes, by depending on separation and relative orientation, effectively reveals a map of the association field. These association fields are not greatly affected by contrast, and can be demonstrated between contours presented to separate hemispheres.
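
The analysis described above rests on cross-correlating two continuously monitored dominance records. Below is a minimal sketch of such a cross-correlation, assuming the records are numpy arrays of ±1 samples; the function name, lag range, and coding scheme are assumptions for illustration, not the study's analysis code.

```python
import numpy as np

def rivalry_cross_correlation(series_a, series_b, max_lag=50):
    """Normalised cross-correlation between two dominance time series.

    series_a, series_b: numpy arrays coding which pattern is dominant at each
    time sample (e.g. +1 when the Gabor is dominant, -1 otherwise).
    Returns the lags (in samples) and the correlation coefficient at each lag.
    """
    a = (series_a - series_a.mean()) / series_a.std()
    b = (series_b - series_b.mean()) / series_b.std()
    n = len(a)
    lags = np.arange(-max_lag, max_lag + 1)
    corr = []
    for k in lags:
        if k >= 0:
            corr.append(np.mean(a[: n - k] * b[k:]))
        else:
            corr.append(np.mean(a[-k:] * b[: n + k]))
    return lags, np.array(corr)

# Example: two noisy, partially synchronised square-wave alternations
rng = np.random.default_rng(1)
t = np.arange(2000)
a = np.sign(np.sin(2 * np.pi * t / 200) + rng.normal(0, 0.5, t.size))
b = np.sign(np.sin(2 * np.pi * (t - 10) / 200) + rng.normal(0, 0.5, t.size))
lags, corr = rivalry_cross_correlation(a, b)
print(lags[np.argmax(corr)], corr.max())  # correlation peaks near a lag of +10 samples
```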

Development of saccadic suppression in children, J Neurophysiol, 3 (96), 1011-1017.

We measured saccadic suppression in adolescent children and young adults using spatially curtailed low spatial frequency stimuli. For both groups, sensitivity for color-modulated stimuli was unchanged during saccades. Sensitivity for luminance-modulated stimuli was greatly reduced during saccades in both groups but far more for adolescents than for young adults. Adults’ suppression was on average a factor of about 3, whereas that for the adolescent group was closer to a factor of 10. The specificity of the suppression to luminance-modulated stimuli excludes generic explanations such as task difficulty and attention. We suggest that the enhanced suppression in adolescents results from the immaturity of the ocular-motor system at that age.

Perception: transient disruptions to neural space-time, Curr Biol, 19 (16), R847-849.

How vision operates efficiently in the face of continuous shifts of gaze remains poorly understood. Recent studies show that saccades cause dramatic, but transient, changes in the spatial and temporal tuning of cells in many visual areas, which may underlie the perceptual compression of space and time, and serve to counteract the effects of the saccades and maintain visual stability.

Perceptual synchrony of audiovisual streams for natural and artificial motion sequences, J Vis, 3 (6), 260-268.

We investigated the conditions necessary for perceptual simultaneity of visual and auditory stimuli under natural conditions: video sequences of conga drumming at various rhythms. Under most conditions, the auditory stream needs to be delayed for sight and sound to be perceived simultaneously. The size of the delay yielding maximum perceived simultaneity varied inversely with drumming tempo, from about 100 ms at 1 Hz to 30 ms at 4 Hz. Random drumming motion produced similar results, with higher random tempos requiring less delay. Video sequences of disk stimuli moving along a motion profile matched to the drummer produced near-identical results. When the disks oscillated at constant speed rather than following “biological” speed variations, the delays necessary for perceptual synchrony were systematically smaller. The results are discussed in terms of real-world constraints for perceptual synchrony and possible neural mechanisms.

Resolution for spatial segregation and spatial localization by motion signals, Vision Res, 6-7 (46), 932-939.

We investigated two types of spatial resolution for perceiving motion-defined contours: grating acuity, the capacity to discriminate alternating stripes of opposed motion from transparent bi-directional motion; and alignment acuity, the capacity to localize the position of motion-defined edges with respect to stationary markers. For both tasks the stimuli were random noise patterns, low-pass filtered in the spatial dimension parallel to the motion. Both grating and alignment resolution varied systematically with spatial frequency cutoff and speed. Best performance for grating resolution was about 10 c/deg (for unfiltered patterns moving at 1-4 deg/s), corresponding to a stripe resolution of about 3′. Grating resolution corresponds well to estimates of the smallest receptive field size of motion units under these conditions, suggesting that opposing signals from units with small receptive fields (probably located in V1) are contrasted efficiently to define edges. Alignment resolution was about 2′ at best, under similar conditions. Whereas alignment judgments based on luminance-defined edges are typically 3-10 times better than resolution, alignment based on motion-defined edges is only 1.1-1.5 times better, suggesting that motion contours are less effectively encoded than luminance contours.

Separate attentional resources for vision and audition, Proc Biol Sci, 1592 (273), 1339-1345.

Current models of attention typically claim that vision and audition are limited by a common attentional resource, implying that visual performance should be adversely affected by a concurrent auditory task and vice versa. Here, we test this implication by measuring auditory (pitch) and visual (contrast) thresholds in conjunction with cross-modal secondary tasks, and find that no such interference occurs. Visual contrast-discrimination thresholds were unaffected by a concurrent chord- or pitch-discrimination task, and pitch-discrimination thresholds were virtually unaffected by a concurrent visual search or contrast-discrimination task. However, if the dual tasks were presented within the same modality, thresholds were raised by a factor of between two (for visual discrimination) and four (for auditory discrimination). These results suggest that, at least for low-level tasks such as discrimination of pitch and contrast, each sensory modality is under separate attentional control rather than being limited by a supramodal attentional resource. This has implications for current theories of attention as well as for the use of multi-sensory media for efficient information transmission.

The effects of opposite-polarity dipoles on the detection of Glass patterns, Vision Res, 6-7 (46), 1139-1144.

Glass patterns (randomly positioned, coherently oriented dipoles) create a strong sensation of oriented spatial structure. On the other hand, coherently oriented dipoles comprising dots of opposite polarity (“anti-Glass” patterns) have no distinct spatial structure and are very hard to distinguish from random noise. Although anti-Glass patterns have no obvious spatial structure themselves, their presence can destroy the structure created by Glass patterns. We measured the strength of this effect for both static and dynamic Glass patterns, and showed that anti-Glass patterns can raise thresholds for Glass patterns by a factor of 2-4, increasing with density. The dependence on density suggests that the interactions occur at a local level. When the Glass and anti-Glass dipoles were confined to alternate strips (in translational and circular Glass patterns), the detrimental effect occurred for stripe widths less than about 1.5 degrees but was negligible for larger stripe widths, reinforcing the suggestion that the interaction occurs over a limited spatial extent. This extent of spatial interaction is much smaller than the extent of spatial summation for these patterns, which is at least 30 degrees under matched experimental conditions. The results suggest two stages of analysis for Glass patterns: an early stage of limited spatial extent where orientation is extracted, and a later stage that sums these orientation signals.
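
For readers unfamiliar with the stimuli, the sketch below generates dot coordinates for a translational Glass pattern and its “anti-Glass” counterpart (dot pairs of opposite contrast polarity). All parameter values are illustrative assumptions; this is not the authors' stimulus-generation code.

```python
import numpy as np

def glass_pattern(n_dipoles=200, separation=0.05, orientation=np.pi / 4,
                  anti=False, rng=None):
    """Generate dot coordinates and polarities for a translational Glass pattern.

    Each dipole is a randomly placed dot pair offset along a common orientation.
    In a standard Glass pattern both dots share the same polarity (contrast sign);
    in an 'anti-Glass' pattern the two dots of each pair have opposite polarity.
    Coordinates lie in the unit square.
    """
    rng = np.random.default_rng() if rng is None else rng
    anchors = rng.uniform(0, 1, (n_dipoles, 2))
    offset = separation * np.array([np.cos(orientation), np.sin(orientation)])
    partners = anchors + offset
    dots = np.vstack([anchors, partners])
    polarity = np.ones(2 * n_dipoles)
    if anti:
        polarity[n_dipoles:] = -1  # second dot of each pair has opposite contrast
    return dots, polarity

dots, pol = glass_pattern(anti=True)  # anti-Glass: paired dots of opposite polarity
```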

Time perception: space-time in the brain, Curr Biol, 5 (16), R171-173.

Visual clutter causes high-magnitude errors, PLoS Biol, 3 (4), e56.

Perceptual decisions are often made in cluttered environments, where a target may be confounded with competing “distractor” stimuli. Although many studies and theoretical treatments have highlighted the effect of distractors on performance, it remains unclear how they affect the quality of perceptual decisions. Here we show that perceptual clutter leads not only to an increase in judgment errors, but also to an increase in perceived signal strength and decision confidence on erroneous trials. Observers reported simultaneously the direction and magnitude of the tilt of a target grating presented either alone, or together with vertical distractor stimuli. When the target was presented in isolation, observers perceived it as only slightly tilted on error trials and had little confidence in their decision. When the target was embedded in distractors, however, they perceived it to be strongly tilted on error trials, and had high confidence in their (erroneous) decisions. The results are well explained by assuming that the observers’ internal representation of stimulus orientation arises from a nonlinear combination of the outputs of independent noise-perturbed front-end detectors. The implication that erroneous perceptual decisions in cluttered environments are made with high confidence has many potential practical consequences, and may be extendable to decision-making in general.
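
The proposed account, a nonlinear combination of the outputs of independent noise-perturbed detectors, can be illustrated with a toy simulation in which the reported tilt is the detector output largest in magnitude (a max rule). The sketch below assumes Gaussian detector noise and arbitrary parameter values; it shows only qualitatively how adding vertical distractors makes errors both more frequent and larger in reported magnitude, and is not the authors' actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_trials(n_trials=100_000, target_tilt=0.5, n_distractors=0, noise_sd=1.0):
    """Toy 'max rule': each detector signals its stimulus tilt plus Gaussian noise,
    and the reported tilt is the detector output largest in magnitude.

    Returns the error rate (wrong tilt direction) and the mean |reported tilt|
    on error trials, a proxy for perceived magnitude/confidence when wrong.
    """
    target = target_tilt + rng.normal(0, noise_sd, n_trials)
    if n_distractors:
        distractors = rng.normal(0, noise_sd, (n_trials, n_distractors))  # vertical distractors
        responses = np.column_stack([target, distractors])
    else:
        responses = target[:, None]
    winner = np.argmax(np.abs(responses), axis=1)
    reported = responses[np.arange(n_trials), winner]
    errors = np.sign(reported) != np.sign(target_tilt)
    return errors.mean(), np.abs(reported[errors]).mean()

for n in (0, 7):
    err_rate, err_size = simulate_trials(n_distractors=n)
    print(f"{n} distractors: error rate {err_rate:.2f}, mean |tilt| on errors {err_size:.2f}")
```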