Roberto Arrighi

Assistant professor in the Neuroscience Department of the University of Florence

  • Email: roberto.arrighi (AT) gmail.com
  • Telephone:  +39 050 3153185

Research laboratories

  • CNR Institute of Neuroscience, Pisa
  • Department of Psychology, University of Florence
  • Stella Maris Foundation, Pisa

Current research and interests

  • Multi-sensory perception
  • Biological Motion
  • Clinical Psychology
  • Temporal Perception



Publications

Fornaciai, M., Arrighi, R. & Burr, D. C. (2016). Adaptation-Induced Compression of Event Time Occurs Only for Translational Motion, Scientific Reports, (6), 23341. PDF

Adaptation to fast motion reduces the perceived duration of stimuli displayed at the same location as the adapting stimuli. Here we show that the adaptation-induced compression of time is specific for translational motion. Adaptation to complex motion, either circular or radial, did not affect perceived duration of subsequently viewed stimuli. Adaptation with multiple patches of translating motion caused compression of duration only when the motion of all patches was in the same direction. These results show that adaptation-induced compression of event-time occurs only for uni-directional translational motion, ruling out the possibility that the neural mechanisms of the adaptation occur at early levels of visual processing.

Anobile, G., Arrighi, R., Togoli, I. & Burr, D. C. (2016). A shared numerical representation for action and perception, eLife, (5). PDF

Humans and other species have perceptual mechanisms dedicated to estimating approximate quantity: a sense of number. Here we show a clear interaction between self-produced actions and the perceived numerosity of subsequent visual stimuli. A short period of rapid finger-tapping (without sensory feedback) caused subjects to underestimate the number of visual stimuli presented near the tapping region; and a period of slow tapping caused overestimation. The distortions occurred both for stimuli presented sequentially (series of flashes) and simultaneously (clouds of dots); both for magnitude estimation and forced-choice comparison. The adaptation was spatially selective, primarily in external, real-world coordinates. Our results sit well with studies reporting links between perception and action, showing that vision and action share mechanisms that encode numbers: a generalized number sense, which estimates the number of self-generated as well as external events.


Arrighi, R., Binda, P. & Cicchini, G. M. (2015). Introduction to the Special Issue on Multimodality of Early Sensory Processing: Early Visual Maps Flexibly Encode Multimodal Space, Multisensory Research, 3-4 (28), 249-252. PDF

As living organisms, we have the capability to explore our environments through different senses, each making use of specialized organs and returning unique information. This is relayed to a set of cortical areas, each of which appears to be specialized for processing information from a single sense — hence the definition of ‘unisensory’ areas. Many models assume that primary unisensory cortices passively reproduce information from each sensory organ; these then project to associative areas, which actively combine multisensory signals with each other and with cognitive stances. By the same token, the textbook view holds that sensory cortices undergo plastic changes only within a limited ‘critical period’; their function and architecture should remain stable and unchangeable thereafter. This model has led to many fundamental discoveries on the architecture of the sensory systems (e.g., oriented receptive fields, binocularity, topographic maps, to name just the best known). However, a growing body of evidence calls for a review of this conceptual scheme. Based on single-cell recordings from non-human primates, fMRI in humans, psychophysics, and sensory deprivation studies, early sensory areas are losing their status of fixed readouts of receptor activity; they are turning into functional nodes in a network of brain areas that flexibly adapts to the statistics of the input and the behavioral goals. This special issue in Multisensory Research aims to cover three such lines of evidence, suggesting that (1) flexibility of spatial representations, (2) adult plasticity and (3) multimodality are not properties of associative areas alone, but may depend on the primary visual cortex V1.


Arrighi, R., Togoli, I. & Burr, D. C. (2014). A generalized sense of number, Proc R Soc B. PDF

Much evidence has accumulated to suggest that many animals, including young human infants, possess an abstract sense of approximate quantity, a number sense. Most research has concentrated on apparent numerosity of spatial arrays of dots or other objects, but a truly abstract sense of number should be capable of encoding the numerosity of any set of discrete elements, however displayed and in whatever sensory modality. Here, we use the psychophysical technique of adaptation to study the sense of number for serially presented items. We show that numerosity of both auditory and visual sequences is greatly affected by prior adaptation to slow or rapid sequences of events. The adaptation to visual stimuli was spatially selective (in external, not retinal coordinates), pointing to a sensory rather than cognitive process. However, adaptation generalized across modalities, from auditory to visual and vice versa. Adaptation also generalized across formats: adapting to sequential streams of flashes affected the perceived numerosity of spatial arrays. All these results point to a perceptual system that transcends vision and audition to encode an abstract sense of number in space and in time.


Pooresmaeili, A., Arrighi, R., Biagi, L. & Morrone, M. C. (2013). Blood oxygen level-dependent activation of the primary visual cortex predicts size adaptation illusion, J Neurosci, 40 (33), 15999-16008. PDF

In natural scenes, objects rarely occur in isolation but appear within a spatiotemporal context. Here, we show that the perceived size of a stimulus is significantly affected by the context of the scene: brief previous presentation of larger or smaller adapting stimuli at the same region of space changes the perceived size of a test stimulus, with larger adapting stimuli causing the test to appear smaller than veridical and vice versa. In a human fMRI study, we measured the blood oxygen level-dependent (BOLD) responses of the primary visual cortex (V1) to the contours of large-diameter stimuli and found that activation closely matched the perceptual rather than the retinal stimulus size: the activated area of V1 increased or decreased, depending on the size of the preceding stimulus. A model based on local inhibitory V1 mechanisms simulated the inward or outward shifts of the stimulus contours and hence the perceptual effects. Our findings suggest that area V1 is actively involved in reshaping our perception to match the short-term statistics of the visual scene.


Cicchini, G. M., Arrighi, R., Cecchetti, L., Giusti, M. & Burr, D. C. (2012). Optimal Encoding of Interval Timing in Expert Percussionists, J Neurosci, 3 (32), 1056-1060. PDF

We measured temporal reproduction in human subjects with various levels of musical expertise: expert drummers, string musicians, and non-musicians. While duration reproduction of the non-percussionists showed a characteristic central tendency or regression to the mean, drummers responded veridically. Furthermore, when the stimuli were auditory tones rather than flashes, all subjects responded veridically. The behavior of all three groups in both modalities is well explained by a Bayesian model that seeks to minimize reproduction errors by incorporating a central tendency prior, a probability density function centered at the mean duration of the sample. We measured separately temporal precision thresholds with a bisection task; thresholds were twice as low in drummers as in the other two groups. These estimates of temporal precision, together with an adaptable Bayesian prior, predict well the reproduction results and the central tendency strategy under all conditions and for all subject groups. These results highlight the efficiency and flexibility of sensorimotor mechanisms estimating temporal duration.
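The Bayesian account above can be sketched in a few lines: the reproduced duration is modelled as the posterior mean of a Gaussian sensory likelihood combined with a Gaussian prior centred on the mean sample duration. This is an illustrative sketch with hypothetical parameter values, not the paper's actual model code; the function name and all numbers are assumptions.

```python
# Illustrative Bayesian observer for duration reproduction (hypothetical
# numbers). The estimate is the posterior mean of a Gaussian likelihood
# (sensory measurement, sd sigma_s) combined with a Gaussian prior centred
# on the mean sample duration (sd sigma_p).

def posterior_mean(measured, prior_mean, sigma_s, sigma_p):
    # Inverse-variance weighting: noisier measurements get less weight,
    # producing the "regression to the mean" of the non-percussionists.
    w = sigma_p**2 / (sigma_p**2 + sigma_s**2)
    return w * measured + (1 - w) * prior_mean

# A noisy observer is pulled towards the prior mean (central tendency) ...
noisy = posterior_mean(1.2, prior_mean=0.9, sigma_s=0.20, sigma_p=0.10)
# ... while a precise observer (e.g. a drummer, with temporal thresholds
# twice as low) responds almost veridically.
precise = posterior_mean(1.2, prior_mean=0.9, sigma_s=0.05, sigma_p=0.20)
print(noisy, precise)
```

When the sensory noise is small relative to the prior's spread, the weight on the measurement approaches 1 and reproduction becomes veridical, which is how a single adaptable-prior model can cover drummers and the auditory condition alike.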

Tinelli, F., Cicchini, G. M., Arrighi, R., Tosetti, M., Cioni, G. & Morrone, M. C. (2012). Blindsight in children with congenital and acquired cerebral lesions, Cortex (published online 10 August 2012). PDF

It has been shown that unconscious visual function can survive lesions to optical radiations and/or primary visual cortex (V1), a phenomenon termed “blindsight”. Studies on animal models (cat and monkey) show that the age when the lesion occurs determines the extent of residual visual capacities. Much less is known about the functional and underlying neuronal repercussions of early cortical damage in humans. We measured sensitivity to several visual tasks in four children with congenital unilateral brain lesions that severely affected optic radiations, and in another group of three children with similar lesions, acquired in childhood. In two of the congenital patients, we measured blood oxygenation level dependent (BOLD) activity in response to stimulation of each visual field quadrant. Results show clear evidence of residual unconscious processing of position, orientation and motion of visual stimuli displayed in the scotoma of congenitally lesioned children, but not in the children with acquired lesions. The calcarine cortical BOLD responses were abnormally elicited by stimulation of the ipsilateral visual field and in the scotoma region, demonstrating a profound neuronal reorganization. In conclusion, our data suggest that congenital lesions can trigger massive reorganization of the visual system to alleviate functional effects of early brain insults.


Burr, D. C., Cicchini, G. M., Arrighi, R. & Morrone, M. C. (2011). Spatiotopic selectivity of adaptation-based compression of event duration, J Vis, 2 (11), 21; author reply 21a. PDF

A. Bruno, I. Ayhan, and A. Johnston (2010) have recently challenged our report of spatiotopic selectivity for adaptation of event time (D. Burr, A. Tozzi, & M. C. Morrone, 2007) and also our claim that retinotopic adaptation of event time depends on perceived speed. To help the reader judge this issue, we present here a mass of data accumulated in our laboratories over the last few years, all confirming our original conclusions. We also point out that where Bruno et al. made experimental measurements (rather than relying on theoretical reasoning), they too find clearly significant spatiotopically tuned adaptation-based compression of event time, but of lower magnitude than ours. We speculate on the reasons for the differences in magnitude.

Arrighi, R., Lunardi, R. & Burr, D. (2011). Vision and audition do not share attentional resources in sustained tasks, Front Psychol, (2), 56. PDF

Our perceptual capacities are limited by attentional resources. One important question is whether these resources are allocated separately to each sense or shared between them. We addressed this issue by asking subjects to perform a double task, either in the same modality or in different modalities (vision and audition). The primary task was a multiple object-tracking task (Pylyshyn and Storm, 1988), in which observers were required to track between 2 and 5 dots for 4 s. Concurrently, they were required to identify either which out of three gratings spaced over the interval differed in contrast or, in the auditory version of the same task, which tone differed in frequency relative to the two reference tones. The results show that while the concurrent visual contrast discrimination reduced tracking ability by about 0.7 d', the concurrent auditory task had virtually no effect. This confirms previous reports that vision and audition use separate attentional resources, consistent with fMRI findings of attentional effects as early as V1 and A1. The results have clear implications for effective design of instrumentation and forms of audio-visual communication devices.

Arrighi, R., Cartocci, G. & Burr, D. (2011). Reduced perceptual sensitivity for biological motion in paraplegia patients, Curr Biol, 22 (21), R910-911. PDF

Physiological and psychophysical studies suggest that the perception and execution of movement may be linked. Here we ask whether severe impairment of locomotion could impact on the capacity to perceive human locomotion. We measured sensitivity for the perception of point-light walkers – animation sequences of human biological motion portrayed by only the joints – in patients with severe spinal injury. These patients showed a huge (nearly three-fold) reduction of sensitivity for detecting and for discriminating the direction of biological motion compared with healthy controls, and also a smaller (~40%) reduction in sensitivity to simple translational motion. However, there was no statistically significant reduction in contrast sensitivity for discriminating the orientation of static gratings. The results point to an interaction between perceiving and producing motion, implicating shared algorithms and neural mechanisms.



Giacomelli, G., Volpe, R., Virgili, G., Farini, A., Arrighi, R., Tarli-Barbieri, C., et al. (2010). Contrast reduction and reading: assessment and reliability with the Reading Explorer test, Eur J Ophthalmol, 2 (20), 389-396. PDF

PURPOSE: To investigate the reliability of the Reading Explorer (REX) charts and to assess the impact of text contrast reduction (1.5 cycle/degree) on reading speed in subjects with normal and low vision. METHODS: Standard visual acuity (ETDRS charts), reading speed (MNread charts), and contrast sensitivity (Pelli-Robson charts) measurements were obtained in 3 groups of subjects stratified by visual acuity level in the better eye from 0.0 to 1.0 logMAR, with intermediate cutoffs at 0.3 and 0.6 logMAR. Measurements of reading speed for decreasing levels of text contrast were obtained with the REX charts using a 1.5 cycle/degree text. RESULTS: Since in many patients with lower vision a plateau of maximum reading speed across different levels of text contrast was not found, reliability indexes were computed for average reading speed and reading contrast threshold. In the group with lower visual acuity, 95% limits of agreement were +/-0.134 log word/minute and +/-0.175 log contrast sensitivity, suggesting good reliability. The proportion of subjects with a 20% loss of reading speed from 90% to 45% text contrast was estimated to be 1/3 at 0.6 logMAR visual acuity level and 2/3 at 1.0 logMAR. CONCLUSIONS: The adverse effect of decreased text contrast, which may be found in ordinary reading material, on the reading performance of subjects with advanced and initial low vision is probably underestimated. The REX test proved to be a reliable investigation tool for this phenomenon.


Guzzetta, A., Tinelli, F., Del Viva, M. M., Bancale, A., Arrighi, R., Pascale, R. R., et al. (2009). Motion perception in preterm children: role of prematurity and brain damage, Neuroreport, 15 (20), 1339-1343. PDF

We tested 26 school-aged children born preterm at a gestational age below 34 weeks, 13 with and 13 without periventricular brain damage, with four different visual stimuli assessing perception of pure global motion (optic flow), of motion carrying some form information (segregated translational motion), and of form-defined static stimuli. Results were compared with a group of age-matched healthy term-born controls. Preterm children with brain damage showed significantly lower sensitivities relative to full-term controls in all four tests, whereas those without brain damage were significantly worse than controls only for the pure motion stimuli. Furthermore, when form information was embedded in the stimulus, preterm children with brain lesions scored significantly worse than those without lesions. These results suggest that in preterm children dorsal stream-related functions are impaired irrespective of the presence of brain damage, whereas deficits of the ventral stream are more related to the presence of periventricular brain damage.

Arrighi, R., Arecchi, F. T., Farini, A. & Gheri, C. (2009). Cueing the interpretation of a Necker Cube: a way to inspect fundamental cognitive processes, Cogn Process, (10 Suppl 1), S95-99. PDF

The term perceptual bistability refers to all those conditions in which an observer views an ambiguous stimulus that has two or more distinct but equally plausible interpretations. In this work, we investigate perception of the Necker cube, whose bistability consists of the possibility of interpreting the cube's depth in two different ways. We manipulated the cube's ambiguity by darkening one of its faces (the cue) to impose a clear interpretation via the occlusion depth cue. When the position of the cue was stationary, the perceived perspective of the cube was stable and driven by the cue position. However, when we alternated the cue position over time (i.e., changed which cube face was darkened), two different perceptual phenomena occurred: at low frequencies the cube's perspective alternated in line with the position of the cue, whereas at high frequencies the cue was no longer able to bias perception and appeared instead as a floating feature travelling across the solid, while the cube's overall perspective returned to being bistable as in the conventional, cue-free case.

Arrighi, R., Marini, F. & Burr, D. (2009). Meaningful auditory information enhances perception of visual biological motion, J Vis, 4 (9), 25.1-7. PDF

Robust perception requires efficient integration of information from our various senses. Much recent electrophysiology points to neural areas responsive to multisensory stimulation, particularly audiovisual stimulation. However, psychophysical evidence for functional integration of audiovisual motion has been ambiguous. In this study we measure perception of an audiovisual form of biological motion, tap dancing. The results show that the auditory tap information interacts with the visual motion information, but only when the two are in synchrony, demonstrating a functional combination of audiovisual information in a natural task. The advantage of multimodal combination was greater than that predicted by optimal maximum-likelihood integration.
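The optimal maximum likelihood prediction referred to above is the standard cue-combination rule in which each modality's estimate is weighted by its inverse variance. A minimal sketch with hypothetical numbers (the function name and values are assumptions, not taken from the paper):

```python
# Optimal (maximum-likelihood) audiovisual combination: weight each cue by
# its reliability (inverse variance). All numbers below are hypothetical.

def mle_combine(est_a, var_a, est_v, var_v):
    var_av = (var_a * var_v) / (var_a + var_v)         # predicted bimodal variance
    est_av = var_av * (est_a / var_a + est_v / var_v)  # inverse-variance weighted mean
    return est_av, var_av

est, var = mle_combine(est_a=0.50, var_a=0.04, est_v=0.60, var_v=0.01)
# The predicted bimodal variance is lower than either unimodal variance,
# i.e. combining the two senses should always improve precision.
print(est, var)
```

A bimodal advantage even larger than this prediction, as reported above for synchronous tap sounds, implies a combination mechanism doing better than simple independent-noise averaging.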


Alais, D., Lorenceau, J., Arrighi, R. & Cass, J. (2006). Contour interactions between pairs of Gabors engaged in binocular rivalry reveal a map of the association field, Vision Res, 8-9 (46), 1473-1487. PDF

A psychophysical study was conducted to investigate contour interactions (the 'association field'). Two Gabor patches were presented to one eye, with random-dot patches in corresponding locations of the other eye so as to produce binocular rivalry. Perceptual alternations of the two rivalry processes were monitored continuously by observers and the two time series were cross-correlated. The Gabors were oriented collinearly, obliquely, or orthogonally, and spatial separation was varied. A parallel condition was also included. Correlation between the rivalry processes strongly depended on separation and relative orientation. Correlation between adjacent collinear Gabors was near-perfect, and it decreased with spatial separation and as relative orientation departed from collinear. Importantly, variations in cross-correlation did not alter the rivalry processes (average dominance duration, and therefore alternation rate, was constant across conditions). Instead, synchronisation of rivalry oscillations accounts for the correlation variations: rivalry alternations were highly synchronised when contour interactions were strong and poorly synchronised when contour interactions were weak. The level of synchrony between these two stochastic processes, by virtue of its dependence on separation and relative orientation, effectively reveals a map of the association field. These association fields are not greatly affected by contrast, and can be demonstrated between contours that are presented to separate hemispheres.
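The core of the analysis above is a correlation between two simultaneously recorded rivalry time series. A toy sketch of the idea, assuming dominance records coded as binary samples (the series and the `pearson` helper are hypothetical illustrations, not the study's data or code):

```python
# Toy version of the rivalry cross-correlation analysis: each record codes
# which stimulus was dominant at each sample (1 = Gabor, 0 = noise patch).
# Synchronised alternations yield high correlation; a phase-shifted record
# (weak contour interaction) yields correlation near zero.

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

in_phase = [1, 1, 0, 0, 1, 1, 0, 0]  # e.g. adjacent collinear Gabors
shifted  = [0, 1, 1, 0, 0, 1, 1, 0]  # quarter-cycle phase shift

print(pearson(in_phase, in_phase))  # close to 1: synchronised rivalry
print(pearson(in_phase, shifted))   # → 0.0: same rate, no synchrony
```

Note the point made in the abstract: the shifted record has the same number of alternations (same dominance durations) yet near-zero correlation, so synchrony, not alternation rate, carries the association-field signal.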

Arrighi, R., Alais, D. & Burr, D. (2006). Perceptual synchrony of audiovisual streams for natural and artificial motion sequences, J Vis, 3 (6), 260-268. PDF

We investigated the conditions necessary for perceptual simultaneity of visual and auditory stimuli under natural conditions: video sequences of conga drumming at various rhythms. Under most conditions, the auditory stream needs to be delayed for sight and sound to be perceived simultaneously. The size of delay for maximum perceived simultaneity varied inversely with drumming tempo, from about 100 ms at 1 Hz to 30 ms at 4 Hz. Random drumming motion produced similar results, with higher random tempos requiring less delay. Video sequences of disk stimuli moving along a motion profile matched to the drummer produced near-identical results. When the disks oscillated at constant speed rather than following "biological" speed variations, the delays necessary for perceptual synchrony were systematically less. The results are discussed in terms of real-world constraints for perceptual synchrony and possible neural mechanisms.


Arrighi, R., Alais, D. & Burr, D. (2005). Neural latencies do not explain the auditory and audio-visual flash-lag effect, Vision Res, 23 (45), 2917-2925. PDF

A brief flash presented physically aligned with a moving stimulus is perceived to lag behind it, a well studied phenomenon termed the Flash-Lag Effect (FLE). It has recently been shown that the FLE also occurs in audition, as well as cross-modally between vision and audition. The present study has two goals: to investigate the acoustic and cross-modal FLE using a random motion technique; and to investigate whether neural latencies may account for the FLE in general. The random motion technique revealed a strong cross-modal FLE for visual motion stimuli and auditory probes, but not for the other conditions. Visual and auditory latencies for stimulus appearance and for motion were measured with three techniques: integration, temporal alignment and reaction times. All three techniques showed that a brief static acoustic stimulus is perceived more rapidly than a brief static visual stimulus, while a sound source in motion is perceived more slowly than a comparable visual stimulus. While the results of these three techniques agreed closely with each other, they were exactly opposite to what would be required to account for the FLE by neural latencies. We conclude that neural latencies do not, in general, explain the flash-lag effect. Rather, our data suggest that neural integration times are more important.

Arrighi, R., Alais, D. & Burr, D. (2005). Perceived timing of first- and second-order changes in vision and hearing, Exp Brain Res, 3-4 (166), 445-454. PDF

Simultaneous changes in visual stimulus attributes (such as motion or color) are often perceived to occur at different times, a fact usually attributed to differences in neural processing times of those attributes. However, other studies suggest that perceptual misalignments are due not to stimulus attributes, but to the type of change, first- or second-order. To test whether this idea generalizes across modalities, we studied perceptual synchrony of acoustic and of audiovisual cross-modal stimuli, which varied in a first- or second-order fashion. First-order changes were abrupt changes in tone intensity or frequency (auditory), or spatial position (visual), while second-order changes were an inversion of the direction of change, such as a turning point when a rising tone starts falling or a translating visual blob reverses direction. For both pure acoustic and cross-modal stimuli, first-order changes were systematically perceived before second-order changes. However, when both changes were first-order, or both were second-order, little or no difference in perceptual delay was found between them, regardless of attribute or modality. This shows that the type of attribute change, as well as latency differences, is a strong determinant of subjective temporal alignments. We also performed an analysis of reaction times (RTs) to the first- and second-order attribute changes used in these temporal alignment experiments. RT differences between these stimuli did not correspond with our temporal alignment data, suggesting that subjective alignments cannot be accounted for by a simple latency-based explanation.
