Motion psychophysics: 1985-2010, Vision Res.

This review traces progress made in the field of visual motion research from 1985 through to 2010. While it is certainly not exhaustive, it attempts to cover most of the major achievements during that period, and speculate on where the field is heading.

Reward sharpens orientation coding independently of attention, Front Neurosci, 5, 13.

It has long been known that reward improves performance. However, it is unclear whether this is due to high-level modulations in the output modules of the associated neural systems or to low-level mechanisms favoring more “generous” inputs. Some recent studies suggest that primary sensory areas, including V1 and A1, may form part of the circuitry of reward-based modulations, but there are no data indicating whether reward can be dissociated from attention or from cross-trial forms of perceptual learning. Here we address this issue with a psychophysical dual task, used to control attention, while perceptual performance on oriented targets associated with different levels of reward is assessed by measuring both orientation discrimination thresholds and behavioral tuning functions for tilt values near threshold. We found that reward at any rate improved performance; the higher reward rate improved orientation discrimination thresholds by about 50% across conditions and sharpened behavioral tuning functions. The data were unaffected by changes in attentional load and by dissociating the feature of the reward cue from the task-relevant feature. These results suggest that reward may act within the span of a single trial, independently of attention, by modulating the activity of early sensory stages through an improvement of the signal-to-noise ratio of task-relevant channels.

Spatiotopic coding and remapping in humans, Philos Trans R Soc Lond B Biol Sci, 366(1564), 504-515.

How our perceptual experience of the world remains stable and continuous despite frequent rapid eye movements remains a mystery. This review discusses recent progress towards understanding the neural and psychophysical processes that accompany these eye movements. We first report recent evidence from imaging studies in humans showing that many brain regions are tuned in spatiotopic coordinates, but only for items that are actively attended. We then describe a series of experiments measuring the spatial and temporal phenomena that occur around the time of saccades, and discuss how these could be related to visual stability. Finally, we introduce the concept of the spatio-temporal receptive field to describe the local spatiotopicity exhibited by many neurons when the eyes move.

Spatiotopic Coding of BOLD Signal in Human Visual Cortex Depends on Spatial Attention, PLoS One, 6(7), e21661.

The neural substrate of the phenomenological experience of a stable visual world remains obscure. One possible mechanism would be to construct spatiotopic neural maps, in which the response is selective to the position of the stimulus in external space rather than to its retinal eccentricity, but evidence for such maps has been inconsistent. Here we show with fMRI that when human subjects concurrently perform a demanding attentional task on stimuli displayed at the fovea, BOLD responses evoked by moving stimuli irrelevant to the task were mostly tuned in retinotopic coordinates. However, under more unconstrained conditions, where subjects could easily attend to the motion stimuli, BOLD responses were tuned not in retinal but in external coordinates (spatiotopic selectivity) in many visual areas, including MT, MST, LO, and V6, in agreement with our previous fMRI study. These results indicate that spatial attention may play an important role in mediating spatiotopic selectivity.

Vision and audition do not share attentional resources in sustained tasks, Front Psychol, 2, 56.

Our perceptual capacities are limited by attentional resources. One important question is whether these resources are allocated separately to each sense or shared between them. We addressed this issue by asking subjects to perform a double task, either within the same modality or across different modalities (vision and audition). The primary task was multiple object tracking (Pylyshyn and Storm, 1988), in which observers were required to track between 2 and 5 dots for 4 s. Concurrently, they were required to identify either which of three gratings spaced over the interval differed in contrast or, in the auditory version of the same task, which tone differed in frequency from the two reference tones. The results show that while the concurrent visual contrast discrimination reduced tracking ability by about 0.7 d’, the concurrent auditory task had virtually no effect. This confirms previous reports that vision and audition use separate attentional resources, consistent with fMRI findings of attentional effects as early as V1 and A1. The results have clear implications for the effective design of instrumentation and audio-visual communication devices.
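The d’ measure used above is the standard signal-detection index of sensitivity, the difference between the z-transformed hit and false-alarm rates. A minimal sketch of how such a dual-task cost could be computed is given below; the hit and false-alarm rates are hypothetical numbers chosen only to illustrate a drop of roughly 0.7 d’, not data from the study:

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Signal-detection sensitivity: z(hits) - z(false alarms)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# Hypothetical rates illustrating a ~0.7 d' dual-task cost on tracking
single_task = d_prime(0.90, 0.20)  # tracking alone
dual_task = d_prime(0.80, 0.28)    # tracking + visual contrast task
cost = single_task - dual_task     # ~0.7
```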

The role of holistic processing in face perception: evidence from the face inversion effect, Vision Res, 51(11), 1273-1278.

A large body of research supports the hypothesis that the human visual system does not process a face as a collection of separable facial features but as an integrated perceptual whole. One common assumption is that we quickly build holistic representations to extract useful second-order information provided by the variation between the faces of different individuals. An alternative account suggests that holistic processing is a fast, early grouping process that first serves to distinguish faces from other competing objects. From this perspective, holistic processing is a quick initial response to the first-order information present in every face. To test this hypothesis we developed a novel paradigm for measuring the face inversion effect, a standard marker of holistic face processing, that measures the minimum exposure time required to discriminate between two stimuli. These new data demonstrate that holistic processing operates on whole upright faces, regardless of whether subjects are required to extract first- or second-order information. In light of this, we argue that holistic processing is a general mechanism that may occur at an earlier stage of face perception than individual discrimination, supporting the rapid detection of face stimuli in everyday visual scenes.

Visual perception: more than meets the eye, Curr Biol, 21(4), R159-161.

A recent study shows that objects changing in colour, luminance, size or shape appear to stop changing when they move. These and other compelling illusions provide tantalizing clues about the mechanisms and limitations of object analysis.

Perceived duration of visual and tactile stimuli depends on perceived speed, Front Integr Neurosci, 5, 51.

It is known that the perceived duration of visual stimuli is strongly influenced by speed: faster moving stimuli appear to last longer. To test whether this is a general property of sensory systems, we asked participants to reproduce the duration of visual, tactile, and visuo-tactile gratings moving at variable speeds (3.5–15 cm/s) for three different durations (400, 600, and 800 ms). For both modalities, the apparent duration of the stimulus increased strongly with stimulus speed, more so for tactile than for visual stimuli. In addition, visual stimuli were perceived to last approximately 200 ms longer than tactile stimuli. The apparent duration of visuo-tactile stimuli lay between the unimodal estimates, as the Bayesian account predicts, but the bimodal precision of the reproduction did not show the theoretical improvement. A cross-modal speed-matching task revealed that visual stimuli were perceived to move faster than tactile stimuli. To test whether the large difference in the perceived duration of visual and tactile stimuli resulted from the difference in their perceived speed, we repeated the time reproduction task with visual and tactile stimuli matched in apparent speed. This reduced, but did not completely eliminate, the difference in apparent duration. These results show that for both vision and touch, perceived duration depends on speed, pointing to common strategies of time perception.
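The Bayesian account referred to above predicts that the bimodal estimate is a reliability-weighted average of the unimodal ones, with a combined variance smaller than either alone (the theoretical precision improvement that was not observed). A minimal sketch of the standard maximum-likelihood combination rule follows; the duration means and standard deviations are hypothetical values for illustration, not the study's data:

```python
import math

def combine(mu_v, sigma_v, mu_t, sigma_t):
    """Maximum-likelihood (inverse-variance-weighted) cue combination."""
    w_v = sigma_t**2 / (sigma_v**2 + sigma_t**2)  # weight on the visual cue
    mu_vt = w_v * mu_v + (1 - w_v) * mu_t
    # Predicted bimodal standard deviation: below both unimodal sigmas
    sigma_vt = math.sqrt(sigma_v**2 * sigma_t**2 / (sigma_v**2 + sigma_t**2))
    return mu_vt, sigma_vt

# Hypothetical unimodal duration estimates (ms): vision longer than touch
mu_vt, sigma_vt = combine(800.0, 80.0, 600.0, 100.0)
```

The combined mean falls between the two unimodal means, and the predicted sigma is smaller than the more reliable cue's sigma, which is exactly the improvement the reproduction data failed to show.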

Spatiotopic Visual Maps Revealed by Saccadic Adaptation in Humans, Curr Biol, 21(16), 1380-1384.

Saccadic adaptation is a powerful experimental paradigm for probing the mechanisms of eye movement control and spatial vision, in which saccadic amplitudes change in response to false visual feedback. The adaptation occurs primarily in the motor system, but there is also evidence for visual adaptation, depending on the size and permanence of the postsaccadic error. Here we confirm that adaptation has a strong visual component and show that this component is spatially selective in external, not retinal, coordinates. Subjects performed a memory-guided, double-saccade, outward-adaptation task designed to maximize visual adaptation and to dissociate the visual and motor corrections. When the memorized saccadic target was at the same position in external space as that used during adaptation training, saccade targeting was strongly influenced by adaptation (even when not matched in retinal or cranial position); but when it was at the same retinal or cranial position yet a different external position, targeting was unaffected by adaptation, demonstrating unequivocal spatiotopic selectivity. These results point to the existence of a spatiotopic neural representation for eye movement control that adapts in response to saccade error signals.
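The retinal-versus-external dissociation exploited in this design can be stated in one line: an external (spatiotopic) position is the retinal position plus the current gaze position. The schematic sketch below is purely illustrative (the function and coordinate values are assumptions, not the authors' analysis); it shows how two stimuli with identical retinal coordinates map to different external positions once gaze differs:

```python
def spatiotopic(retinal_pos, gaze_pos):
    """External (screen) position = retinal position + gaze position.
    Positions are (x, y) in degrees; purely schematic."""
    return tuple(r + g for r, g in zip(retinal_pos, gaze_pos))

# Same retinal position, two different gaze positions: a retinotopic
# representation treats these as identical, a spatiotopic one does not.
a = spatiotopic((5.0, 0.0), (0.0, 0.0))   # external (5.0, 0.0)
b = spatiotopic((5.0, 0.0), (10.0, 0.0))  # external (15.0, 0.0)
```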

Reduced perceptual sensitivity for biological motion in paraplegia patients, Curr Biol, 21(22), R910-911.

Physiological and psychophysical studies suggest that the perception and execution of movement may be linked. Here we ask whether severe impairment of locomotion affects the capacity to perceive human locomotion. We measured sensitivity for the perception of point-light walkers (animation sequences of human biological motion portrayed by only the joints) in patients with severe spinal injury. These patients showed a large (nearly three-fold) reduction of sensitivity for detecting, and for discriminating the direction of, biological motion compared with healthy controls, and also a smaller (~40%) reduction in sensitivity to simple translational motion. However, there was no statistically significant reduction in contrast sensitivity for discriminating the orientation of static gratings. The results point to an interaction between perceiving and producing motion, implicating shared algorithms and neural mechanisms.