Shifts in spatial attention affect the perceived duration of events, J Vis, 9 (1), 9, 1-13.

We investigated the relationship between attention and the perceived duration of visual events with a double-task paradigm. The primary task was to discriminate the size change of a 2-degree circle presented 10 degrees left, right, above, or below fixation; the secondary task was to judge the temporal separation (from 133 ms to 633 ms) of two equiluminant horizontal bars (10 deg x 2 deg) briefly flashed 12 degrees above or below fixation. The stimulus onset asynchrony (SOA) between the primary and secondary tasks ranged from -1300 ms to +1000 ms. Temporal intervals in proximity to the onset of the primary-task stimuli were perceived as strongly compressed, by up to 40%. The effect was proportional to the size of the interval, with a maximum at an SOA of 100 ms. Control experiments showed that neither primary-task difficulty, nor the type of primary-task discrimination (form or motion, equiluminant or luminance contrast), nor spatial congruence between the primary and secondary tasks altered the effect. Interestingly, the compression occurred only when the intervals were marked by bars presented in separate spatial locations: when the interval was marked by two bars flashed in the same spatial position, no temporal distortion was found. These data indicate that attention can alter perceived duration when the brain has to compare the passage of time at two different spatial positions, corroborating earlier findings that mechanisms of time perception may monitor separate spatial locations independently, possibly at a high level of analysis.

Temporal mechanisms of multimodal binding, Proc Biol Sci, 276 (1663), 1761-1769.

The simultaneity of signals from different senses, such as vision and audition, is a useful cue for determining whether those signals arose from one environmental source or from more than one. To understand better the sensory mechanisms for assessing simultaneity, we measured the discrimination thresholds for time intervals marked by auditory, visual or auditory-visual stimuli, as a function of the base interval. For all conditions, both unimodal and cross-modal, the thresholds followed a characteristic ‘dipper function’ in which the lowest thresholds occurred when discriminating against a non-zero interval. The base interval yielding the lowest threshold was roughly equal to the threshold for discriminating asynchronous from synchronous presentations. Those lowest thresholds occurred at approximately 5, 15 and 75 ms for auditory, visual and auditory-visual stimuli, respectively. Thus, the mechanisms mediating performance with cross-modal stimuli are considerably slower than the mechanisms mediating performance within a particular sense. We developed a simple model with temporal filters of different time constants and showed that the model produces discrimination functions similar to the ones we observed in humans. Both for processing within a single sense, and for processing across senses, temporal perception is affected by the properties of temporal filters, the outputs of which are used to estimate time offsets, correlations between signals, and more.

Auditory dominance over vision in the perception of interval duration, Exp Brain Res, 198 (1), 49-57.

The “ventriloquist effect” refers to the fact that vision usually dominates hearing in spatial localization, and this has been shown to be consistent with optimal integration of visual and auditory signals (Alais and Burr in Curr Biol 14(3):257-262, 2004). For temporal localization, however, auditory stimuli often “capture” visual stimuli, in what has become known as “temporal ventriloquism”. We examined this quantitatively using a bisection task, confirming that sound does tend to dominate the perceived timing of audio-visual stimuli. The dominance was predicted qualitatively by considering the better temporal localization of audition, but the quantitative fit was less than perfect, with more weight being given to audition than predicted from thresholds. As predicted by optimal cue combination, the temporal localization of audio-visual stimuli was better than for either sense alone.
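The optimal cue-combination rule invoked above (Alais and Burr, Curr Biol, 2004) has a standard closed form: each cue is weighted by its inverse variance, and the fused estimate is more precise than either cue alone. A minimal sketch follows; the numerical values are illustrative only, not thresholds from the study.

```python
# Maximum-likelihood (optimal) cue combination: each cue is weighted
# by its reliability (inverse variance), and the fused estimate has
# lower variance than either unimodal estimate.

def mle_combine(s_a, sigma_a, s_v, sigma_v):
    """Optimally combine an auditory estimate (s_a, sigma_a) with a
    visual estimate (s_v, sigma_v); return the fused estimate and its
    predicted standard deviation."""
    w_a = (1 / sigma_a**2) / (1 / sigma_a**2 + 1 / sigma_v**2)
    w_v = 1.0 - w_a
    s_av = w_a * s_a + w_v * s_v                       # weighted average
    sigma_av = (sigma_a**2 * sigma_v**2 /
                (sigma_a**2 + sigma_v**2)) ** 0.5      # fused uncertainty
    return s_av, sigma_av

# Illustrative (hypothetical) values: audition more precise in time,
# so the fused estimate sits closer to the auditory one.
s_av, sigma_av = mle_combine(s_a=100.0, sigma_a=20.0,
                             s_v=120.0, sigma_v=40.0)
```

With these numbers the auditory weight is 0.8, the fused estimate is 104.0, and the fused standard deviation (about 17.9) is below both unimodal values, which is the "better than either sense alone" prediction the abstract tests.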

Cueing the interpretation of a Necker Cube: a way to inspect fundamental cognitive processes, Cogn Process, 10 (Suppl 1), S95-S99.

The term perceptual bistability refers to all those conditions in which an observer views an ambiguous stimulus that has two or more distinct but equally plausible interpretations. In this work, we investigate perception of the Necker cube, whose bistability consists in the possibility of interpreting the cube's depth in two different ways. We manipulated the cube's ambiguity by darkening one of its faces (the cue) to provide an unambiguous interpretation via the occlusion depth cue. When the position of the cue is stationary, the perceived perspective of the cube is stable and driven by the cue position. However, when we alternated the cue position over time (i.e. changed which cube face was darkened), two different perceptual phenomena occurred: at low alternation frequencies the cube's perspective alternated in step with the position of the cue; at high frequencies, however, the cue no longer biased perception but appeared as a floating feature travelling across the solid, while the cube's overall perspective became bistable again, as in the conventional, bias-free case.

Meaningful auditory information enhances perception of visual biological motion, J Vis, 9 (4), 25, 21-27.

Robust perception requires efficient integration of information from our various senses. Much recent electrophysiology points to neural areas responsive to multisensory stimulation, particularly audiovisual stimulation. However, psychophysical evidence for functional integration of audiovisual motion has been ambiguous. In this study we measured perception of an audiovisual form of biological motion, tap dancing. The results show that the auditory tap information interacts with the visual motion information, but only when the two are in synchrony, demonstrating functional combination of audiovisual information in a natural task. The multimodal advantage exceeded the optimal maximum-likelihood prediction.

Mislocalization of flashed and stationary visual stimuli after adaptation of reactive and scanning saccades, J Neurosci, 29 (35), 11055-11064.

When we look around and register the location of visual objects, our oculomotor system continuously prepares targets for saccadic eye movements. The preparation of saccade targets may be directly involved in the perception of object location because modification of saccade amplitude by saccade adaptation leads to a distortion of the visual localization of briefly flashed spatial probes. Here, we investigated effects of adaptation on the localization of continuously visible objects. We compared adaptation-induced mislocalization of probes that were present for 20 ms during the saccade preparation period and of probes that were present for >1 s before saccade initiation. We studied the mislocalization of these probes for two different saccade types, reactive saccades to a suddenly appearing target and scanning saccades in the self-paced viewing of a stationary scene. Adaptation of reactive saccades induced mislocalization of flashed probes. Adaptation of scanning saccades additionally induced mislocalization of stationary objects. The mislocalization occurred in the absence of visual landmarks and must therefore originate from the change in saccade motor parameters. After adaptation of one type of saccade, the saccade amplitude change and the mislocalization transferred only weakly to the other saccade type. Mislocalization of flashed and stationary probes thus followed the selectivity of saccade adaptation. Since the generation and adaptation of reactive and scanning saccades are known to involve partially different brain mechanisms, our results suggest that visual localization of objects in space is linked to saccade targeting at multiple sites in the brain.

Motion perception in preterm children: role of prematurity and brain damage, Neuroreport, 20 (15), 1339-1343.

We tested 26 school-aged children born preterm at a gestational age below 34 weeks, 13 with and 13 without periventricular brain damage, with four different visual stimuli assessing perception of pure global motion (optic flow), of motion carrying some form information (segregated translational motion), and of form-defined static stimuli. Results were compared with those of a group of age-matched, healthy, term-born controls. Preterm children with brain damage showed significantly lower sensitivities relative to full-term controls in all four tests, whereas those without brain damage were significantly worse than controls only for the pure motion stimuli. Furthermore, when form information was embedded in the stimulus, preterm children with brain lesions scored significantly worse than those without lesions. These results suggest that in preterm children dorsal stream-related functions are impaired irrespective of the presence of brain damage, whereas deficits of the ventral stream are more related to the presence of periventricular brain damage.

Neural correlates of texture and contour integration in children with autism spectrum disorders, Vision Res, 49 (16), 2140-2150.

In this study, we used an electrophysiological paradigm to investigate the neural correlates of the visual integration of local signals across space to generate global percepts in a group of low-functioning children with autism. We analyzed the amplitude of key harmonics of the visual evoked potentials (VEPs) recorded while participants viewed orientation-based texture and contour stimuli forming coherent global patterns, alternating with patterns in which the same number of local elements were randomly oriented so as to lose any globally organized feature. Comparing the results of the clinical sample with those of an age-matched control group, we observed that in the texture conditions the 1st and 3rd harmonics, which carry the signature of global form processing (Norcia, Pei, Bonneh, Hou, Sampath, & Pettet, 2005), were present in the control group, whereas in the experimental group only the 1st harmonic was present. In the contour condition the 1st harmonic was absent in both groups, while the 3rd harmonic was significantly present in the control group but absent in the group with autism. Moreover, the amount of organization required to elicit a significant 1st-harmonic response in the texture condition was higher in the clinical group. The present results bring additional support to the idea that texture and contour processing are supported by independent mechanisms in normal vision. Autistic vision would thus be characterized by a preserved, perhaps weaker, texture mechanism, possibly mediated by feedback interactions between visual areas, and by a dysfunction of the mechanism supporting contour processing, possibly mediated by long-range intra-cortical connections. Within this framework, the residual ability to detect contours shown in psychophysical studies could be due to the contribution of the texture mechanism to contour processing.

Pooling and segmenting motion signals, Vision Res, 49 (10), 1065-1072.

Humans are extremely sensitive to visual motion, largely because local motion signals can be integrated over a large spatial region. On the other hand, summation is often not advantageous, for example when segmenting a moving stimulus against a stationary or oppositely moving background. In this study we show that the spatial extent of motion integration is not compulsory, but is subject to voluntary attentional control. Measurements of motion coherence sensitivity with summation and search paradigms showed that human observers can combine motion signals from cued regions or patches in an optimal manner, even when the regions are quite distinct and remote from each other. Further measurements of contrast sensitivity reinforce previous studies showing that motion integration is preceded by a local analysis akin to contrast thresholding (or intrinsic uncertainty). The results were well modelled by two standard signal-detection-theory models.
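The abstract does not name the two signal-detection-theory models, so the following is only a generic sketch of two standard benchmarks in this literature: linear (ideal) summation, where sensitivity grows as the square root of the number of cued patches, and a max-rule (probability-summation) observer, whose performance declines as more locations must be monitored. All parameter values are illustrative assumptions, not fits from the paper.

```python
import math
import random

def dprime_linear_summation(d_single, n_patches):
    """Ideal (linear) summation of n independent, equally reliable
    motion signals: d' grows as sqrt(n)."""
    return d_single * math.sqrt(n_patches)

def pc_max_rule_2afc(d, n, trials=50_000, seed=0):
    """Monte-Carlo percent correct for a max-rule observer in a 2AFC
    search task: the signal raises one of n monitored locations by d
    (in units of internal noise SD); the observer picks the interval
    whose maximum response is larger."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        signal_iv = max([rng.gauss(d, 1.0)] +
                        [rng.gauss(0.0, 1.0) for _ in range(n - 1)])
        noise_iv = max(rng.gauss(0.0, 1.0) for _ in range(n))
        correct += signal_iv > noise_iv
    return correct / trials
```

The two rules make opposite predictions: linear summation improves with extra cued patches (`dprime_linear_summation(1.0, 4)` gives 2.0), while the max rule pays an uncertainty cost, so percent correct at fixed signal strength falls as `n` grows.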

Search superiority in autism within, but not outside the crowding regime, Vision Res, 49 (16), 2151-2156.

Visual cognition in observers with autism spectrum disorder (ASD) appears to show an imbalance between the complementary functions of integration and segregation. This study uses visual search and crowding paradigms to probe the relative ability of children with autism, compared to typically developing children, to extract individual targets from cluttered backgrounds both within and outside the crowding regime. The data show that standard search follows the same pattern in the ASD and control groups, with a strong set-size effect that is substantially weakened by cueing the target location with a synchronous spatial cue. On the other hand, the crowding effect of eight flankers surrounding a small peripheral target is virtually absent in the clinical sample, indicating a superior ability to segregate cluttered visual items. These data, along with evidence of an impairment of the neural system for binding contours in ASD, bring additional support to the general idea of a shift in the trade-off between integration and segregation toward the latter. More specifically, they show that when discriminability is balanced across conditions, an advantage in odd-man-out tasks is evident in ASD observers only within the crowding regime, where binding mechanisms may be compulsorily triggered in normal observers.