Saccades compress space, time and number, Trends Cogn Sci, 14(12), 528-533.

It has been suggested that space, time and number are represented on a common subjective scale. Saccadic eye movements provide a fascinating test. Saccades compress the perceived magnitude of spatial separations and temporal intervals to approximately half of their true value. The question arises as to whether saccades also compress number. They do, and compression follows a very similar time course for all three attributes: it is maximal at saccadic onset and decreases to veridicality within a window of approximately 50 ms. These results reinforce the suggestion of a common perceptual metric, which is probably mediated by the intraparietal cortex; they further suggest that before each saccade the common metric for all three is reset, possibly to pave the way for a fresh analysis of the post-saccadic situation.

Temporal auditory capture does not affect the time course of saccadic mislocalization of visual stimuli, J Vis, 10(2):7, 1-13.

Irrelevant sounds can “capture” visual stimuli to change their apparent timing, a phenomenon sometimes termed “temporal ventriloquism”. Here we ask whether this auditory capture can alter the time course of spatial mislocalization of visual stimuli during saccades. We first show that during saccades, sounds affect the apparent timing of visual flashes, even more strongly than during fixation. However, this capture does not affect the dynamics of perisaccadic visual distortions. Sounds presented 50 ms before or after a visual bar (which changed the perceived timing of the bars by more than 40 ms) had no measurable effect on the time courses of spatial mislocalization of the bars, in four subjects. Control studies showed that with barely visible, low-contrast stimuli, leading, but not trailing, sounds can have a small effect on mislocalization, most likely attributable to attentional effects rather than auditory capture. These findings support previous studies showing that integration of multisensory information occurs at a relatively late stage of sensory processing, after visual representations have undergone the distortions induced by saccades.

Spatial maps for time and motion, Exp Brain Res, 206(2), 121-128.

In this article, we review recent research studying the mechanisms for transforming coordinate systems to encode space, time and motion. A range of studies using functional imaging and psychophysical techniques reveals mechanisms in the human brain for encoding information in external rather than retinal coordinates. This reinforces the idea of a tight relationship between space and time in the parietal cortex of primates.

Compression of time during smooth pursuit eye movements, Vision Res, 50(24), 2702-2713.

Humans have a clear sense for the passage of time, but while implicit motor timing is quite accurate, explicit timing is prone to distortions, particularly during action (Wenke & Haggard, 2009) and saccadic eye movements (Morrone, Ross, & Burr, 2005). Here, we investigated whether perceived duration is also affected by the execution of smooth pursuit eye movements, showing a compression of apparent duration similar to that observed during saccades. To this end, we presented two brief bars that marked intervals between 100 and 300 ms and asked subjects to judge their duration during fixation and pursuit. During pursuit, perceived duration was compressed relative to fixation by about 32% for bars modulated in luminance contrast and by 14% for bars modulated in chromatic contrast. Interestingly, Weber ratios were similar for fixation and pursuit when expressed as the ratio of JND to perceived duration. This compression was constant for pursuit speeds from 7 to 14 deg/s and did not occur for intervals marked by auditory events. These results argue for a modality-specific component in the processing of temporal information.
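
The Weber-ratio point above can be made concrete with a minimal sketch (illustrative only: the numbers below are invented, not data from the study). If the JND and the perceived duration are compressed by roughly the same factor during pursuit, their ratio stays constant:

```python
# Illustrative sketch only: hypothetical values, not results from the paper.
# Weber ratio here is the JND divided by the *perceived* (not physical) duration.

def weber_ratio(jnd_ms: float, perceived_duration_ms: float) -> float:
    """Weber ratio expressed against perceived duration."""
    return jnd_ms / perceived_duration_ms

# Hypothetical fixation condition: a 200 ms interval perceived veridically,
# discriminated with a 20 ms threshold (JND).
fixation = weber_ratio(jnd_ms=20.0, perceived_duration_ms=200.0)

# Hypothetical pursuit condition: the same interval appears ~30% shorter,
# and the JND shrinks by roughly the same factor.
pursuit = weber_ratio(jnd_ms=14.0, perceived_duration_ms=140.0)

print(fixation, pursuit)  # both 0.10: equal Weber ratios despite compression
```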

Poor haptic orientation discrimination in nonsighted children may reflect disruption of cross-sensory calibration, Curr Biol, 20(3), 223-225.

A long-standing question, going back at least 300 years to Berkeley’s famous essay, is how sensory systems become calibrated with physical reality. We recently showed [1] that children younger than 8-10 years do not integrate visual and haptic information optimally, but that one or the other sense prevails: touch for size and vision for orientation discrimination. The sensory dominance may reflect crossmodal calibration of vision and touch, where the more accurate sense calibrates the other. This hypothesis leads to a clear prediction: that lack of clear vision at an early age should affect calibration of haptic orientation discrimination. We therefore measured size and orientation haptic discrimination thresholds in 17 congenitally visually impaired children (aged 5-19). Haptic orientation thresholds were greatly impaired compared with age-matched controls, whereas haptic size thresholds were at least as good, and often better. One child with a late-acquired visual impairment stood out with excellent orientation discrimination. The results provide strong support for our crossmodal calibration hypothesis.

Subitizing but not estimation of numerosity requires attentional resources, J Vis, 10(6), 20.

The numerosity of small numbers of objects, up to about four, can be rapidly appraised without error, a phenomenon known as subitizing. Larger numbers can either be counted, accurately but slowly, or estimated, rapidly but with errors. There has been some debate as to whether subitizing uses the same mechanisms as, or different mechanisms from, those operating over higher numerical ranges, and whether it requires attentional resources. We measured subjects’ accuracy and precision in making rapid judgments of numerosity for target numbers spanning the subitizing and estimation ranges while manipulating the attentional load, both with a spatial dual task and with the “attentional blink” dual-task paradigm. The results of both attentional manipulations were similar. In the high-load attentional condition, Weber fractions were similar in the subitizing (2-4) and estimation (5-7) ranges (10-15%). In the low-load and single-task conditions, Weber fractions improved substantially in the subitizing range, becoming nearly error-free, while the estimation range was relatively unaffected. The results show that the mechanisms operating over the subitizing and estimation ranges are not identical. We suggest that pre-attentive estimation mechanisms work at all ranges, but that in the subitizing range attentive mechanisms also come into play.
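
One common way to quantify precision in numerosity tasks is the Weber fraction, often estimated as the coefficient of variation of repeated estimates of the same target. The sketch below is a generic illustration under that assumption, with invented responses; it is not the paper's exact analysis:

```python
import numpy as np

def weber_fraction(estimates: np.ndarray) -> float:
    """Coefficient of variation of repeated estimates of one target numerosity."""
    return np.std(estimates, ddof=1) / np.mean(estimates)

# Hypothetical responses to a target of 6 items (estimation range) under
# high versus low attentional load; values are made up for illustration.
high_load = np.array([5, 7, 6, 8, 5, 6, 7, 4])
low_load  = np.array([6, 6, 6, 5, 6, 7, 6, 6])

print(weber_fraction(high_load), weber_fraction(low_load))
```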

Vision senses number directly, J Vis, 10(2):10, 11-18.

We have recently suggested that numerosity is a primary sensory attribute, and shown that it is strongly susceptible to adaptation. Here we use the Method of Single Stimuli to show that observers can extract a running average of the numerosity of a succession of stimuli and use it as a standard of comparison for subsequent stimuli. In separate sessions, observers judged whether the perceived numerosity or density of a particular trial was greater or less than the average of previous stimuli. Thresholds were as precise for this task as for explicit comparisons of test with standard stimuli. Importantly, we found no evidence that numerosity judgments are mediated by density. Under all conditions, judgments of numerosity were as precise as those of density. Thresholds in intermingled conditions, where numerosity varied unpredictably with density, were as precise as the blocked thresholds. Judgments in constant-density conditions were more precise than those in variable-density conditions, and numerosity judgments in constant-numerosity conditions showed no tendency to follow density. We further report the novel finding that perceived numerosity increases with decreasing luminance, whereas texture density does not, further evidence for independent processing of the two attributes. All these measurements suggest that numerosity judgments can be, and are, made independently of judgments of texture density.
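
The logic of the Method of Single Stimuli described above, where each trial is judged against an internal standard built from the running average of preceding stimuli, can be sketched as follows. This is purely illustrative: the stimulus values and the simple decision rule are assumptions, not the study's actual procedure or data.

```python
# Minimal sketch: judge each stimulus against the running average of the
# stimuli seen so far (the first trial has no standard and yields no judgment).

def single_stimulus_judgments(numerosities):
    """Return (trial value, internal standard, 'greater'/'less') per trial."""
    judgments = []
    running_sum, n_seen = 0.0, 0
    for value in numerosities:
        if n_seen > 0:
            standard = running_sum / n_seen  # running average of past stimuli
            judgments.append((value, standard,
                              "greater" if value > standard else "less"))
        running_sum += value
        n_seen += 1
    return judgments

for trial in single_stimulus_judgments([24, 36, 30, 20, 40, 28]):
    print(trial)
```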

Vision: keeping the world still when the eyes move, Curr Biol, 20(10), R442-444.

A long-standing problem for visual science is how the world remains so apparently stable in the face of continual rapid eye movements. New experimental evidence and computational models are helping to solve this mystery.