Touch disambiguates rivalrous perception at early stages of visual analysis, Curr Biol, 20(4), R143-R144.

Binocular rivalry is a powerful tool for studying human consciousness: two equally salient stimuli are imaged on the retinae, but at any given instant only one is consciously perceived, the other suppressed. The suppression takes place early, probably in V1. However, a trace of the suppressed signal has been detected along the dorsal visual pathway (BOLD responses) and demonstrated with psychophysical experiments. The suppressed image of a rotating sphere during rivalry is restored to consciousness when the observer actively controls the rotation, and a similar effect on the suppressed signal has been shown for motion perception and reflexive eye movements (see Supplemental References). Here, we asked whether cross-modal sensory signals could selectively interact with rivalrous visual signals that are analyzed at a very early stage, probably V1. An auditory stimulus, when attended, can influence binocular rivalry, extending dominance times for a congruent visual stimulus. Tactile information can also disambiguate unstable visual motion and can fuse with vision to improve discrimination (e.g. of slant). Our results indicate that a haptic oriented stimulus can disambiguate visual perception during binocular rivalry between gratings of orthogonal orientation, not only by prolonging dominance but also by curtailing suppression of the visual stimulus of matched orientation. The effect is selective for the spatial frequency of the stimuli, suggesting that haptic signals interact with early visual representations to enhance access to conscious perception.

Saccades compress space, time and number, Trends Cogn Sci, 14(12), 528-533.

It has been suggested that space, time and number are represented on a common subjective scale. Saccadic eye movements provide a fascinating test of this idea. Saccades compress the perceived magnitude of spatial separations and temporal intervals to approximately half of their true value. The question arises as to whether saccades also compress number. They do, and the compression follows a very similar time course for all three attributes: it is maximal at saccadic onset and decreases to veridicality within a window of approximately 50 ms. These results reinforce the suggestion of a common perceptual metric, probably mediated by the intraparietal cortex; they further suggest that before each saccade the common metric for all three attributes is reset, possibly to pave the way for a fresh analysis of the post-saccadic situation.

Temporal auditory capture does not affect the time course of saccadic mislocalization of visual stimuli, J Vis, 10(2), 7.1-13.

Irrelevant sounds can “capture” visual stimuli and change their apparent timing, a phenomenon sometimes termed “temporal ventriloquism”. Here we ask whether this auditory capture can alter the time course of spatial mislocalization of visual stimuli during saccades. We first show that during saccades, sounds affect the apparent timing of visual flashes even more strongly than during fixation. However, this capture does not affect the dynamics of perisaccadic visual distortions. Sounds presented 50 ms before or after a visual bar (which changed the perceived timing of the bars by more than 40 ms) had no measurable effect on the time course of spatial mislocalization of the bars, in four subjects. Control studies showed that with barely visible, low-contrast stimuli, leading, but not trailing, sounds can have a small effect on mislocalization, most likely attributable to attentional effects rather than auditory capture. These findings support previous studies showing that multisensory information is integrated at a relatively late stage of sensory processing, after visual representations have undergone the distortions induced by saccades.

Spatial maps for time and motion, Exp Brain Res, 206(2), 121-128.

In this article, we review recent research on the mechanisms for transforming coordinate systems to encode space, time and motion. A range of studies using functional imaging and psychophysical techniques reveals mechanisms in the human brain for encoding information in external rather than retinal coordinates. This reinforces the idea of a tight relationship between space and time in the parietal cortex of primates.

Brain development: critical periods for cross-sensory plasticity, Curr Biol, 20(21), R934-R936.

Recent work has shown that visual deprivation of humans during a critical period leads to the motion area MT+ responding to auditory motion. This cross-sensory plasticity, an important form of brain reorganization, may be mediated by top-down circuits from prefrontal cortex.

Compression of time during smooth pursuit eye movements, Vision Res, 50(24), 2702-2713.

Humans have a clear sense of the passage of time, but while implicit motor timing is quite accurate, explicit timing is prone to distortions, particularly during action (Wenke & Haggard, 2009) and saccadic eye movements (Morrone, Ross, & Burr, 2005). Here, we investigated whether perceived duration is also affected by the execution of smooth pursuit eye movements, and found a compression of apparent duration similar to that observed during saccades. We presented two brief bars marking intervals between 100 and 300 ms and asked subjects to judge their duration during fixation and pursuit. During pursuit, perceived duration was compressed by about 32% for bars modulated in luminance contrast and by 14% for bars modulated in chromatic contrast, compared to fixation. Interestingly, Weber ratios were similar for fixation and pursuit when expressed as the ratio between the JND and the perceived duration. The compression was constant for pursuit speeds from 7 to 14 deg/s and did not occur for intervals marked by auditory events. These results argue for a modality-specific component in the processing of temporal information.
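
To make the Weber-ratio comparison concrete, here is a minimal worked sketch in LaTeX; the 200 ms sample interval is an illustrative assumption (only the 32% compression figure comes from the abstract):

% Weber ratio: discrimination threshold (JND) normalized by the
% *perceived*, not physical, duration.
\[ W = \frac{\Delta T}{T_{\mathrm{perceived}}} \]
% Illustrative case (assumed interval): a 200 ms interval
% compressed by 32% under pursuit is perceived as
\[ T_{\mathrm{perceived}} \approx (1 - 0.32) \times 200\,\mathrm{ms} = 136\,\mathrm{ms} \]
% A constant W across conditions then implies the JND shrinks in
% proportion: W x 136 ms under pursuit versus W x 200 ms under
% fixation.

On this reading, pursuit rescales the subjective timeline rather than simply adding noise to duration judgments, which is why raw JNDs can differ while the normalized Weber ratios do not.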

Vision: keeping the world still when the eyes move, Curr Biol, 20(10), R442-R444.

A long-standing problem for visual science is how the world remains so apparently stable in the face of continual rapid eye movements. New experimental evidence and computational models are helping to solve this mystery.

Visual information gleaned by observing grasping movement in allocentric and egocentric perspectives, Proc Biol Sci, 278(1715), 2142-2149.

One of the major functions of vision is to allow efficient and active interaction with the environment. In this study, we investigate the capacity of human observers to extract visual information from observation of their own actions, and those of others, from different viewpoints. Subjects discriminated the size of objects by observing a point-light movie of a hand reaching for an invisible object. We recorded real reach-and-grasp actions in three-dimensional space towards objects of different shapes and sizes to produce two-dimensional ‘point-light display’ movies, which were used to measure size discrimination for reach-and-grasp motion sequences, release-and-withdraw sequences and still frames, all in egocentric and allocentric perspectives. Visual size discrimination from action was significantly better in the egocentric than in the allocentric view, but only for reach-and-grasp motion sequences: release-and-withdraw sequences and still frames derived no advantage from egocentric viewing. The results suggest that the visual system may have access to an internal model of action that helps calibrate the visual sense of size for an accurate grasp.