
David Burr


Professor of Physiological Psychology, University of Florence


  • Email: Dave (AT) in.cnr.it
  • Telephone: +39 050 3153175
  • Cell: +39 348 3972 198

Research laboratories

  • CNR Institute of Neuroscience, Pisa
  • Department of Psychology, University of Florence
  • Stella Maris Foundation, Pisa, Italy

Current research and interests

  • Motion perception
  • Numerosity perception
  • Visual stability
  • Spatiotopicity
  • Perception of time
  • Multi-sensory perception
  • Autism




2017

Mikellidou, K., Turi, M. & Burr, D. C. (2017). Spatiotopic coding during dynamic head tilt, J Neurophysiol, 117(2), 808-817. PDF

Humans maintain a stable representation of the visual world effortlessly, despite constant movements of the eyes, head, and body, across multiple planes. Whereas visual stability in the face of saccadic eye movements has been intensely researched, fewer studies have investigated retinal image transformations induced by head movements, especially in the frontal plane. Unlike head rotations in the horizontal and sagittal planes, tilting the head in the frontal plane is only partially counteracted by torsional eye movements and consequently induces a distortion of the retinal image to which we seem to be completely oblivious. One possible mechanism aiding perceptual stability is an active reconstruction of a spatiotopic map of the visual world, anchored in allocentric coordinates. To explore this possibility, we measured the positional motion aftereffect (PMAE; the apparent change in position after adaptation to motion) with head tilts of approximately 42 degrees between adaptation and test (to dissociate retinal from allocentric coordinates). The aftereffect was shown to have both a retinotopic and spatiotopic component. When tested with unpatterned Gaussian blobs rather than sinusoidal grating stimuli, the retinotopic component was greatly reduced, whereas the spatiotopic component remained. The results suggest that perceptual stability may be maintained at least partially through mechanisms involving spatiotopic coding.

Karaminis, T., Lunghi, C., Neil, L., Burr, D. & Pellicano, E. (2017). Binocular rivalry in children on the autism spectrum, Autism Res, PDF

When different images are presented to the eyes, the brain is faced with ambiguity, causing perceptual bistability: visual perception continuously alternates between the monocular images, a phenomenon called binocular rivalry. Many models of rivalry suggest that its temporal dynamics depend on mutual inhibition among neurons representing competing images. These models predict that rivalry should be different in autism, which has been proposed to present an atypical ratio of excitation and inhibition [the E/I imbalance hypothesis; Rubenstein & Merzenich, 2003]. In line with this prediction, some recent studies have provided evidence for atypical binocular rivalry dynamics in autistic adults. In this study, we examined if these findings generalize to autistic children. We developed a child-friendly binocular rivalry paradigm, which included two types of stimuli, low- and high-complexity, and compared rivalry dynamics in groups of autistic and age- and intellectual ability-matched typical children. Unexpectedly, the two groups of children presented the same number of perceptual transitions and the same mean phase durations (times perceiving one of the two stimuli). Yet autistic children reported mixed percepts for a shorter proportion of time (a difference which was in the opposite direction to previous adult studies), while elevated autistic symptomatology was associated with shorter mixed perception periods. Rivalry in the two groups was affected similarly by stimulus type, and consistent with previous findings. Our results suggest that rivalry dynamics are differentially affected in adults and developing autistic children and could be accounted for by hierarchical models of binocular rivalry, including both inhibition and top-down influences.

Karaminis, T., Neil, L., Manning, C., Turi, M., Fiorentini, C., Burr, D., et al. (2017). Ensemble perception of emotions in autistic and typical children and adolescents, Developmental Cognitive Neuroscience, (24), 51-62. PDF

Ensemble perception, the ability to assess automatically the summary of large amounts of information presented in visual scenes, is available early in typical development. This ability might be compromised in autistic children, who are thought to present limitations in maintaining summary statistics representations for the recent history of sensory input. Here we examined ensemble perception of facial emotional expressions in 35 autistic children, 30 age- and ability-matched typical children and 25 typical adults. Participants received three tasks: a) an ‘ensemble’ emotion discrimination task; b) a baseline (single-face) emotion discrimination task; and c) a facial expression identification task. Children performed worse than adults on all three tasks. Unexpectedly, autistic and typical children were, on average, indistinguishable in their precision and accuracy on all three tasks. Computational modelling suggested that, on average, autistic and typical children used ensemble-encoding strategies to a similar extent; but ensemble perception was related to non-verbal reasoning abilities in autistic but not in typical children. Eye-movement data also showed no group differences in the way children attended to the stimuli. Our combined findings suggest that the abilities of autistic and typical children for ensemble perception of emotions are comparable on average.

Carter, O., Bennett, D., Nash, T., Arnold, S., Brown, L., Cai, R. Y., et al. (2017). Sensory integration deficits support a dimensional view of psychosis and are not limited to schizophrenia, Transl Psychiatry, 7(5), e1118. PDF

Visual dysfunction is commonplace in schizophrenia and occurs alongside cognitive, psychotic and affective symptoms of the disorder. Psychophysical evidence suggests that this dysfunction results from impairments in the integration of low-level neural signals into complex cortical representations, which may also be associated with symptom formation. Despite the symptoms of schizophrenia occurring in a range of disorders, the integration deficit has not been tested in broader patient populations. Moreover, it remains unclear whether such deficits generalize across other sensory modalities. The present study assessed patients with a range of psychotic and nonpsychotic disorders and healthy controls on visual contrast detection, visual motion integration, auditory tone detection and auditory tone integration. The sample comprised a total of 249 participants (schizophrenia spectrum disorder n=98; bipolar affective disorder n=35; major depression n=31; other psychiatric conditions n=31; and healthy controls n=54), of whom 178 completed one or more visual tasks and 71 completed auditory tasks. Compared with healthy controls and nonpsychotic patients, psychotic patients trans-diagnostically were impaired on both visual and auditory integration, but unimpaired in simple visual or auditory detection. Impairment in visual motion integration was correlated with the severity of positive symptoms, and could not be accounted for by a reduction in processing speed, inattention or medication effects. Our results demonstrate that impaired sensory integration is not specific to schizophrenia, as has previously been assumed. Instead, sensory deficits are closely related to the presence of positive symptoms independent of diagnosis. The finding that equivalent integrative sensory processing is impaired in audition is consistent with hypotheses that propose a generalized deficit of neural integration in psychotic disorders.

Croydon, A., Karaminis, T., Neil, L., Burr, D. & Pellicano, E. (2017). The light-from-above prior is intact in autistic children, J Exp Child Psychol, (161), 113-125. PDF

Sensory information is inherently ambiguous. The brain disambiguates this information by anticipating or predicting the sensory environment based on prior knowledge. Pellicano and Burr (2012) proposed that this process may be atypical in autism and that internal assumptions, or "priors," may be underweighted or less used than in typical individuals. A robust internal assumption used by adults is the "light-from-above" prior, a bias to interpret ambiguous shading patterns as if formed by a light source located above (and slightly to the left) of the scene. We investigated whether autistic children (n=18) use this prior to the same degree as typical children of similar age and intellectual ability (n=18). Children were asked to judge the shape (concave or convex) of a shaded hexagon stimulus presented in 24 rotations. We estimated the relation between the proportion of convex judgments and stimulus orientation for each child and calculated the light source location most consistent with those judgments. Children behaved similarly to adults in this task, preferring to assume that the light source was from above left, when other interpretations were compatible with the shading evidence. Autistic and typical children used prior assumptions to the same extent to make sense of shading patterns. Future research should examine whether this prior is as adaptable (i.e., modifiable with training) in autistic children as it is in typical adults.

Gori, M., Chilosi, A., Forli, F. & Burr, D. (2017). Audio-visual temporal perception in children with restored hearing, Neuropsychologia, (99), 350-359. PDF

It is not clear how audio-visual temporal perception develops in children with restored hearing. In this study we measured temporal discrimination thresholds with an audio-visual temporal bisection task in 9 deaf children with restored audition, and 22 typically hearing children. In typically hearing children, audition was more precise than vision, with no gain in multisensory conditions (as previously reported in Gori et al. (2012b)). However, deaf children with restored audition showed similar auditory and visual thresholds, and some evidence of multisensory gain in audio-visual temporal conditions. Interestingly, we found a strong correlation between auditory weighting of multisensory signals and quality of language: patients who gave more weight to audition had better language skills. Similarly, auditory thresholds for the temporal bisection task were also a good predictor of language skills. This result supports the idea that temporal auditory processing is associated with language development.

Kagan, I. & Burr, D. C. (2017). Active Vision: Dynamic Reformatting of Visual Information by the Saccade-Drift Cycle, Curr Biol, 27(9), R341-R344. PDF

Visual processing depends on rapid parsing of global features followed by analysis of fine detail. A new study suggests that this transformation is enabled by a cycle of saccades and fixational drifts, which reformat visual input to match the spatiotemporal sensitivity of fast and slow neuronal pathways.

2016

Shi, Z. & Burr, D. (2016). Predictive coding of multisensory timing, Current Opinion in Behavioral Sciences, (8), 200-206. PDF

The sense of time is foundational for perception and action, yet it frequently departs significantly from physical time. In this paper we review recent progress on temporal contextual effects, multisensory temporal integration, temporal recalibration, and related computational models. We suggest that subjective time arises from minimizing prediction errors and adaptive recalibration, which can be unified in the framework of predictive coding, a framework rooted in Helmholtz's ‘perception as inference’.

Turi, M., Karaminis, T., Pellicano, E. & Burr, D. (2016). No rapid audiovisual recalibration in adults on the autism spectrum, Scientific Reports, (6), 21756. PDF

Autism spectrum disorders (ASD) are characterized by difficulties in social cognition, but are also associated with atypicalities in sensory and perceptual processing. Several groups have reported that autistic individuals show reduced integration of socially relevant audiovisual signals, which may contribute to the higher-order social and cognitive difficulties observed in autism. Here we use a newly devised technique to study instantaneous adaptation to audiovisual asynchrony in autism. Autistic and typical participants were presented with sequences of brief visual and auditory stimuli, varying in asynchrony over a wide range, from 512 ms auditory-lead to 512 ms auditory-lag, and judged whether they seemed to be synchronous. Typical adults showed strong adaptation effects, with trials preceded by an auditory-lead needing more auditory-lead to seem simultaneous, and vice versa. However, autistic observers showed little or no adaptation, although their simultaneity curves were as narrow as the typical adults. This result supports recent Bayesian models that predict reduced adaptation effects in autism. As rapid audiovisual recalibration may be fundamental for the optimisation of speech comprehension, recalibration problems could render language processing more difficult in autistic individuals, hindering social communication.
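The rapid recalibration measured here can be caricatured as a leaky trial-by-trial update of the point of subjective simultaneity (PSS). The sketch below is purely illustrative, not the study's model: the update rule and learning rates are assumptions, chosen only to show how a smaller update rate produces the flat, non-adapting behaviour reported for the autistic group.

```python
import random

def simulate_pss(asynchronies, learning_rate):
    """Toy trial-by-trial recalibration: after each trial the point of
    subjective simultaneity (PSS) drifts a fraction of the way toward the
    asynchrony just presented. Inputs in ms; returns the PSS before each trial."""
    pss = 0.0
    trace = []
    for soa in asynchronies:
        trace.append(pss)
        pss += learning_rate * (soa - pss)  # shift toward the last asynchrony
    return trace

random.seed(1)
trials = [random.choice([-512, -256, 0, 256, 512]) for _ in range(500)]
typical = simulate_pss(trials, learning_rate=0.3)   # strong rapid recalibration
reduced = simulate_pss(trials, learning_rate=0.05)  # little or no recalibration
```

With the larger learning rate the PSS tracks the previous asynchrony closely, so trials preceded by an auditory-lead need more auditory-lead to seem simultaneous; with the smaller rate the PSS stays near zero, mimicking the absent adaptation effect.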

Fornaciai, M., Arrighi, R. & Burr, D. C. (2016). Adaptation-Induced Compression of Event Time Occurs Only for Translational Motion, Scientific Reports, (6), 23341. PDF

Adaptation to fast motion reduces the perceived duration of stimuli displayed at the same location as the adapting stimuli. Here we show that the adaptation-induced compression of time is specific for translational motion. Adaptation to complex motion, either circular or radial, did not affect perceived duration of subsequently viewed stimuli. Adaptation with multiple patches of translating motion caused compression of duration only when the motion of all patches was in the same direction. These results show that adaptation-induced compression of event-time occurs only for uni-directional translational motion, ruling out the possibility that the neural mechanisms of the adaptation occur at early levels of visual processing.

Fornaciai, M., Cicchini, G. M. & Burr, D. C. (2016). Adaptation to number operates on perceived rather than physical numerosity, Cognition, (151), 63-67. PDF

Humans share with many animals a number sense, the ability to estimate rapidly the approximate number of items in a scene. Recent work has shown that like many other perceptual attributes, numerosity is susceptible to adaptation. It is not clear, however, whether adaptation works directly on mechanisms selective to numerosity, or via related mechanisms, such as those tuned to texture density. To disentangle this issue we measured adaptation of numerosity of 10 pairs of connected dots, as connecting dots makes them appear to be less numerous than unconnected dots. Adaptation to a 20-dot pattern (same number of dots as the test) caused robust reduction in apparent numerosity of the connected-dot pattern, but not of the unconnected dot-pattern. This suggests that adaptation to numerosity, at least for relatively sparse dot-patterns, occurs at neural levels encoding perceived numerosity, rather than at lower levels responding to the number of elements in the scene.

Zimmermann, E., Morrone, M. C. & Burr, D. (2016). Adaptation to size affects saccades with long but not short latencies, J Vis, 16(7), 2. PDF

Maintained exposure to a specific stimulus property, such as size, color, or motion, induces perceptual adaptation aftereffects, usually in the opposite direction to that of the adaptor. Here we studied how adaptation to size affects perceived position and visually guided action (saccadic eye movements) to that position. Subjects saccaded to the border of a diamond-shaped object after adaptation to a smaller diamond shape. For saccades in the normal latency range, amplitudes decreased, consistent with saccading to a larger object. Short-latency saccades, however, tended to be affected less by the adaptation, suggesting that they were only partly triggered by a signal representing the illusory target position. We also tested size perception after adaptation, followed by a mask stimulus at the probe location after various delays. Similar size adaptation magnitudes were found for all probe-mask delays. In agreement with earlier studies, these results suggest that the duration of the saccade latency period determines the reference frame that codes the probe location.

Vercillo, T., Burr, D. & Gori, M. (2016). Early visual deprivation severely compromises the auditory sense of space in congenitally blind children, Dev Psychol, 52(6), 847-853. PDF

A recent study has shown that congenitally blind adults, who have never had visual experience, are impaired on an auditory spatial bisection task (Gori, Sandini, Martinoli, & Burr, 2014). In this study we investigated how thresholds for auditory spatial bisection and auditory discrimination develop with age in sighted and congenitally blind children (9 to 14 years old). Children performed 2 spatial tasks (minimum audible angle and space bisection) and 1 temporal task (temporal bisection). There was no impairment in the temporal task for blind children but, like adults, they showed severely compromised thresholds for spatial bisection. Interestingly, the blind children also showed lower precision in judging minimum audible angle. These results confirm the adult study and go on to suggest that even simpler auditory spatial tasks are compromised in children, and that this capacity recovers over time.

Karaminis, T., Cicchini, G. M., Neil, L., Cappagli, G., Aagten-Murphy, D., Burr, D., et al. (2016). Central tendency effects in time interval reproduction in autism, Sci Rep, (6), 28570. PDF

Central tendency, the tendency of judgements of quantities (lengths, durations etc.) to gravitate towards their mean, is one of the most robust perceptual effects. A Bayesian account has recently suggested that central tendency reflects the integration of noisy sensory estimates with prior knowledge representations of a mean stimulus, serving to improve performance. The process is flexible, so prior knowledge is weighted more heavily when sensory estimates are imprecise, requiring more integration to reduce noise. In this study we measure central tendency in autism to evaluate a recent theoretical hypothesis suggesting that autistic perception relies less on prior knowledge representations than typical perception. If true, autistic children should show less central tendency than theoretically predicted from their temporal resolution. We tested autistic and age- and ability-matched typical children in two child-friendly tasks: (1) a time interval reproduction task, measuring central tendency in the temporal domain; and (2) a time discrimination task, assessing temporal resolution. Central tendency reduced with age in typical development, while temporal resolution improved. Autistic children performed far worse in temporal discrimination than the matched controls. Computational simulations suggested that central tendency was much less in autistic children than predicted by theoretical modelling, given their poor temporal resolution.
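The Bayesian account described here amounts to a precision-weighted average of the noisy sensory estimate and a prior centred on the mean stimulus. A minimal sketch, with illustrative parameter values (assumptions for demonstration, not the study's fitted numbers):

```python
def reproduced_interval(stimulus_ms, prior_mean_ms, sensory_sd, prior_sd):
    """Bayesian least-squares estimate: a precision-weighted average of the
    sensory measurement and the prior mean. Noisier sensory input (larger
    sensory_sd) gives the prior more weight, i.e. stronger central tendency."""
    w_prior = (1 / prior_sd ** 2) / (1 / prior_sd ** 2 + 1 / sensory_sd ** 2)
    return w_prior * prior_mean_ms + (1 - w_prior) * stimulus_ms

# Illustrative numbers: a 1200 ms interval, with a prior mean of 900 ms,
# regresses further toward the mean when sensory noise is high.
precise = reproduced_interval(1200, 900, sensory_sd=50, prior_sd=200)
imprecise = reproduced_interval(1200, 900, sensory_sd=200, prior_sd=200)
```

On this account, poor temporal resolution (large sensory_sd) should produce strong central tendency; the paper's finding is that autistic children show less central tendency than this relationship predicts.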

Anobile, G., Castaldi, E., Turi, M., Tinelli, F. & Burr, D. C. (2016). Numerosity but not texture-density discrimination correlates with math ability in children, Dev Psychol, 52(8), 1206-1216. PDF

Considerable recent work suggests that mathematical abilities in children correlate with the ability to estimate numerosity. Does math correlate only with numerosity estimation, or also with other similar tasks? We measured discrimination thresholds of school-age (6- to 12.5-years-old) children in 3 tasks: numerosity of patterns of relatively sparse, segregatable items (24 dots); numerosity of very dense textured patterns (250 dots); and discrimination of direction of motion. Thresholds in all tasks improved with age, but at different rates, implying the action of different mechanisms: In particular, in young children, thresholds were lower for sparse than textured patterns (the opposite of adults), suggesting earlier maturation of numerosity mechanisms. Importantly, numerosity thresholds for sparse stimuli correlated strongly with math skills, even after controlling for the influence of age, gender and nonverbal IQ. However, neither motion-direction discrimination nor numerosity discrimination of texture patterns showed a significant correlation with math abilities. These results provide further evidence that numerosity and texture-density are perceived by independent neural mechanisms, which develop at different rates; and importantly, only numerosity mechanisms are related to math. As developmental dyscalculia is characterized by a profound deficit in discriminating numerosity, it is fundamental to understand the mechanism behind the discrimination.

Cicchini, G. M., Anobile, G. & Burr, D. C. (2016). Spontaneous perception of numerosity in humans, Nat Commun, (7), 12536. PDF

Humans, including infants, and many other species have a capacity for rapid, nonverbal estimation of numerosity. However, the mechanisms for number perception are still not clear; some maintain that the system calculates numerosity via density estimates, similar to those involved in texture, while others maintain that more direct, dedicated mechanisms are involved. Here we show that provided that items are not packed too densely, human subjects are far more sensitive to numerosity than to either density or area. In a two-dimensional space spanning density, area and numerosity, subjects spontaneously react with far greater sensitivity to changes in numerosity, than either area or density. Even in tasks where they were explicitly instructed to make density or area judgments, they responded spontaneously to number. We conclude that humans extract number information, directly and spontaneously, via dedicated mechanisms.

Anobile, G., Arrighi, R., Togoli, I. & Burr, D. C. (2016). A shared numerical representation for action and perception, Elife, (5). PDF

Humans and other species have perceptual mechanisms dedicated to estimating approximate quantity: a sense of number. Here we show a clear interaction between self-produced actions and the perceived numerosity of subsequent visual stimuli. A short period of rapid finger-tapping (without sensory feedback) caused subjects to underestimate the number of visual stimuli presented near the tapping region; and a period of slow tapping caused overestimation. The distortions occurred both for stimuli presented sequentially (series of flashes) and simultaneously (clouds of dots); both for magnitude estimation and forced-choice comparison. The adaptation was spatially selective, primarily in external, real-world coordinates. Our results sit well with studies reporting links between perception and action, showing that vision and action share mechanisms that encode numbers: a generalized number sense, which estimates the number of self-generated as well as external events.

Aagten-Murphy, D. & Burr, D. (2016). Adaptation to numerosity requires only brief exposures, and is determined by number of events, not exposure duration, J Vis, 16(10), 22. PDF

Exposure to a patch of dots produces a repulsive shift in the perceived numerosity of subsequently viewed dot patches. Although a remarkably strong effect, in which the perceived numerosity can be shifted by up to 50% of the actual numerosity, very little is known about the temporal dynamics. Here we demonstrate a novel adaptation paradigm that allows numerosity adaptation to be rapidly induced at several distinct locations simultaneously. We show that not only is this adaptation to numerosity spatially specific, with different locations of the visual field able to be adapted to high, low, or neutral stimuli, but it can occur with only very brief periods of adaptation. Further investigation revealed that the adaptation effect was primarily driven by the number of unique adapting events that had occurred and not by either the duration of each event or the total duration of exposure to adapting stimuli. This event-based numerosity adaptation appears to fit well with statistical models of adaptation in which the dynamic adjustment of perceptual experiences, based on both the previous experience of the stimuli and the current percept, acts to optimize the limited working range of perception. These results implicate a highly plastic mechanism for numerosity perception, which is dependent on the number of discrete adaptation events, and also demonstrate a quick and efficient paradigm suitable for examining the temporal properties of adaptation.

Castaldi, E., Aagten-Murphy, D., Tosetti, M., Burr, D. & Morrone, M. C. (2016). Effects of adaptation on numerosity decoding in the human brain, Neuroimage, (143), 364-377. PDF

Psychophysical studies have shown that numerosity is a sensory attribute susceptible to adaptation. Neuroimaging studies have reported that, at least for relatively low numbers, numerosity can be accurately discriminated in the intra-parietal sulcus. Here we developed a novel rapid adaptation paradigm where adapting and test stimuli are separated by pauses sufficient to dissociate their BOLD activity. We used multivariate pattern recognition to classify brain activity evoked by non-symbolic numbers over a wide range (20-80), both before and after psychophysical adaptation to the highest numerosity. Adaptation caused underestimation of all lower numerosities, and decreased slightly the average BOLD responses in V1 and IPS. Using a support vector machine, we showed that the BOLD response of IPS, but not of V1, classified numerosity well, both when tested before and after adaptation. However, there was no transfer from training pre-adaptation responses to testing post-adaptation, and vice versa, indicating that adaptation changes the neuronal representation of the numerosity. Interestingly, decoding was more accurate after adaptation, and the amount of improvement correlated with the amount of perceptual underestimation of numerosity across subjects. These results suggest that numerosity adaptation acts directly on IPS, rather than indirectly via analysis of other low-level stimulus parameters, and that adaptation improves the capacity to discriminate numerosity.
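The train-before/test-after logic can be illustrated with a toy decoder. The study used support vector machines on real BOLD patterns; the stand-in below uses a nearest-centroid classifier on synthetic "voxel" patterns (all of it assumed for illustration), and mimics "adaptation changes the representation" crudely by remapping the patterns across conditions, chosen only so that a decoder trained pre-adaptation fails to transfer.

```python
import random

rng = random.Random(0)
VOXELS = 30
NUMS = (20, 40, 80)

# Synthetic voxel "tuning" per numerosity -- an assumption for illustration,
# not real BOLD data.
PRE = {n: [random.Random(n * 7 + i).uniform(-1, 1) for i in range(VOXELS)]
       for n in NUMS}
# Crude stand-in for an adaptation-induced change in representation:
# post-adaptation patterns no longer line up with the pre-adaptation ones.
REMAP = {20: 40, 40: 80, 80: 20}
POST = {n: PRE[REMAP[n]] for n in NUMS}

def trial(templates, n, noise=0.3):
    """One noisy multi-voxel response for numerosity n."""
    return [v + rng.gauss(0, noise) for v in templates[n]]

def centroids(templates, n_train=20):
    """Mean training pattern per numerosity (nearest-centroid stand-in
    for the study's SVM)."""
    return {n: [sum(col) / n_train
                for col in zip(*(trial(templates, n) for _ in range(n_train)))]
            for n in NUMS}

def decode(cents, pattern):
    return min(NUMS, key=lambda n: sum((a - b) ** 2
                                       for a, b in zip(cents[n], pattern)))

def accuracy(cents, templates, n_test=50):
    hits = sum(decode(cents, trial(templates, n)) == n
               for n in NUMS for _ in range(n_test))
    return hits / (len(NUMS) * n_test)

within_pre = accuracy(centroids(PRE), PRE)    # train pre, test pre
within_post = accuracy(centroids(POST), POST) # train post, test post
transfer = accuracy(centroids(PRE), POST)     # train pre, test post
```

Decoding within each condition succeeds while the pre-trained decoder fails on post-adaptation patterns, which is the pattern of results the abstract reports.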

Taubert, J., Alais, D. & Burr, D. (2016). Different coding strategies for the perception of stable and changeable facial attributes, Sci Rep, (6). PDF

Perceptual systems face competing requirements: improving signal-to-noise ratios of noisy images, by integration; and maximising sensitivity to change, by differentiation. Both processes occur in human vision, under different circumstances: they have been termed priming, or serial dependencies, leading to positive sequential effects; and adaptation or habituation, which leads to negative sequential effects. We reasoned that for stable attributes, such as the identity and gender of faces, the system should integrate: while for changeable attributes like facial expression, it should also engage contrast mechanisms to maximise sensitivity to change. Subjects viewed a sequence of images varying simultaneously in gender and expression, and scored each as male or female, and happy or sad. We found strong and consistent positive serial dependencies for gender, and negative dependency for expression, showing that both processes can operate at the same time, on the same stimuli, depending on the attribute being judged. The results point to highly sophisticated mechanisms for optimizing use of past information, either by integration or differentiation, depending on the permanence of that attribute.


2015

Karaminis, T., Turi, M., Neil, L., Badcock, N. A., Burr, D. & Pellicano, E. (2015). Atypicalities in perceptual adaptation in autism do not extend to perceptual causality, PLoS One, 10(3), e0120439. PDF

A recent study showed that adaptation to causal events (collisions) in adults caused subsequent events to be less likely perceived as causal. In this study, we examined if a similar negative adaptation effect for perceptual causality occurs in children, both typically developing and with autism. Previous studies have reported diminished adaptation for face identity, facial configuration and gaze direction in children with autism. To test whether diminished adaptive coding extends beyond high-level social stimuli (such as faces) and could be a general property of autistic perception, we developed a child-friendly paradigm for adaptation of perceptual causality. We compared the performance of 22 children with autism with 22 typically developing children, individually matched on age and ability (IQ scores). We found significant and equally robust adaptation aftereffects for perceptual causality in both groups. There were also no differences between the two groups in their attention, as revealed by reaction times and accuracy in a change-detection task. These findings suggest that adaptation to perceptual causality in autism is largely similar to typical development and, further, that diminished adaptive coding might not be a general characteristic of autism at low levels of the perceptual hierarchy, constraining existing theories of adaptation in autism.

Aagten-Murphy, D., Attucci, C., Daniel, N., Klaric, E., Burr, D. & Pellicano, E. (2015). Numerical estimation in children with autism, Autism Res. PDF

Number skills are often reported anecdotally and in the mass media as a relative strength for individuals with autism, yet there are remarkably few research studies addressing this issue. This study, therefore, sought to examine autistic children's number estimation skills and whether variation in these skills can explain at least in part strengths and weaknesses in children's mathematical achievement. Thirty-two cognitively able children with autism (range = 8-13 years) and 32 typical children of similar age and ability were administered a standardized test of mathematical achievement and two estimation tasks, one psychophysical nonsymbolic estimation (numerosity discrimination) task and one symbolic estimation (numberline) task. Children with autism performed worse than typical children on the numerosity task, on the numberline task, which required mapping numerical values onto space, and on the test of mathematical achievement. These findings question the widespread belief that mathematical skills are generally enhanced in autism. For both groups of children, variation in performance on the numberline task was also uniquely related to their academic achievement, over and above variation in intellectual ability; better number-to-space mapping skills went hand-in-hand with better arithmetic skills. Future research should further determine the extent and underlying causes of some autistic children's difficulties with regards to number.

Anobile, G., Turi, M., Cicchini, G. M. & Burr, D. (2015). Mechanisms for perception of numerosity or texture-density are governed by crowding-like effects, Journal of Vision, 15(5), 1-12. PDF

We have recently provided evidence that the perception of number and texture density is mediated by two independent mechanisms: numerosity mechanisms at relatively low numbers, obeying Weber's law, and texture-density mechanisms at higher numerosities, following a square root law. In this study we investigated whether the switch between the two mechanisms depends on the capacity to segregate individual dots, and therefore follows similar laws to those governing visual crowding. We measured numerosity discrimination for a wide range of numerosities at three eccentricities. We found that the point where the numerosity regime (Weber's law) gave way to the density regime (square root law) depended on eccentricity. In central vision, the regime changed at 2.3 dots/deg², while at 15° eccentricity, it changed at 0.5 dots/deg², three times less dense. As a consequence, thresholds for low numerosities increased with eccentricity, while at higher numerosities thresholds remained constant. We further showed that like crowding, the regime change was independent of dot size, depending on distance between dot centers, not distance between dot edges or ink coverage. Performance was not affected by stimulus contrast or blur, indicating that the transition does not depend on low-level stimulus properties. Our results reinforce the notion that numerosity and texture are mediated by two distinct processes, depending on whether the individual elements are perceptually segregable. Which mechanism is engaged follows laws that determine crowding.
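The two threshold regimes described in this abstract can be sketched numerically. The toy model below (all numerical values are illustrative, not taken from the paper, and the function name is our own) shows discrimination thresholds rising in proportion to numerosity below the transition density and with its square root above it:

```python
import math

def discrimination_threshold(numerosity, area_deg2=50, transition_density=2.3):
    """Hypothetical two-regime threshold model (illustrative values only).

    Below the transition density, thresholds follow Weber's law
    (threshold proportional to numerosity); above it, they follow a
    square-root law (threshold proportional to the square root of
    numerosity), so the Weber fraction steadily decreases.
    """
    weber_fraction = 0.2  # assumed constant Weber fraction in the low regime
    density = numerosity / area_deg2
    if density < transition_density:
        return weber_fraction * numerosity  # Weber (numerosity) regime
    # square-root (texture-density) regime, matched at the transition point
    n_transition = transition_density * area_deg2
    return weber_fraction * n_transition * math.sqrt(numerosity / n_transition)
```

With these assumed values, the Weber fraction (threshold divided by numerosity) is constant at 0.2 up to the transition and then falls as 1/√N, the signature the paper uses to separate the two mechanisms.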

Tinelli, F., Anobile, G., Gori, M., Aagten-Murphy, D., Bartoli, M., Burr, D. C., et al. (2015). Time, number and attention in very low birth weight children, Neuropsychologia. PDF

Premature birth has been associated with damage in many regions of the cerebral cortex, although there is a particularly strong susceptibility for damage within the parieto-occipital lobes (Volpe, 2009). As these areas have been shown to be critical for both visual attention and magnitude perception (time, space, and number), it is important to investigate the impact of prematurity on both the magnitude and attentional systems, particularly for children without overt white matter injuries, where the lack of obvious injury may cause their difficulties to remain unnoticed. In this study, we investigated the ability to judge time intervals (visual, audio and audio-visual temporal bisection), discriminate between numerical quantities (numerosity comparison), map numbers onto space (numberline task) and to maintain visuo-spatial attention (multiple-object-tracking) in school-age preterm children (N = 29). The results show that various parietal functions may be more or less robust to prematurity-related difficulties, with strong impairments found on time estimation and attentional tasks, while numerical discrimination and mapping tasks remained relatively unimpaired. Thus while our study generally supports the hypothesis of a dorsal stream vulnerability in children born preterm relative to other cortical locations, it further suggests that particular cognitive processes, as highlighted by performance on different tasks, are far more susceptible than others.

Turi, M., Burr, D. C., Igliozzi, R., Aagten-Murphy, D., Muratori, F. & Pellicano, E. (2015). Children with autism spectrum disorder show reduced adaptation to number, Proceedings of the National Academy of Sciences, 112(25), 7868-7872. PDF

Autism is known to be associated with major perceptual atypicalities. We have recently proposed a general model to account for these atypicalities in Bayesian terms, suggesting that autistic individuals underuse predictive information or priors. We tested this idea by measuring adaptation to numerosity stimuli in children diagnosed with autism spectrum disorder (ASD). After exposure to large numbers of items, stimuli with fewer items appear to be less numerous (and vice versa). We found that children with ASD adapted much less to numerosity than typically developing children, although their precision for numerosity discrimination was similar to that of the typical group. This result reinforces recent findings showing reduced adaptation to facial identity in ASD and goes on to show that reduced adaptation is not unique to faces (social stimuli with special significance in autism), but occurs more generally, for both parietal and temporal functions, probably reflecting inefficiencies in the adaptive interpretation of sensory signals. These results provide strong support for the Bayesian theories of autism.

Greco, V., Frijia, F., Mikellidou, K., Montanaro, D., Farini, A., D'Uva, M., et al. (2015). A low-cost and versatile system for projecting wide-field visual stimuli within fMRI scanners, Behav Res Methods. PDF

We have constructed and tested a custom-made magnetic-imaging-compatible visual projection system designed to project on a very wide visual field (~80 degrees). A standard projector was modified with a coupling lens, projecting images into the termination of an image fiber. The other termination of the fiber was placed in the 3-T scanner room with a projection lens, which projected the images relayed by the fiber onto a screen over the head coil, viewed by a participant wearing magnifying goggles. To validate the system, wide-field stimuli were presented in order to identify retinotopic visual areas. The results showed that this low-cost and versatile optical system may be a valuable tool to map visual areas in the brain that process peripheral receptive fields.

Mikellidou, K., Cicchini, G. M., Thompson, P. G. & Burr, D. C. (2015). The oblique effect is both allocentric and egocentric, Journal of Vision, 8 (15), 24. PDF

Despite continuous movements of the head, humans maintain a stable representation of the visual world, which seems to remain always upright. The mechanisms behind this stability are largely unknown. To gain some insight on how head tilt affects visual perception, we investigate whether a well-known orientation-dependent visual phenomenon, the oblique effect—superior performance for stimuli at cardinal orientations (0° and 90°) compared with oblique orientations (45°)—is anchored in egocentric or allocentric coordinates. To this aim, we measured orientation discrimination thresholds at various orientations for different head positions both in body upright and in supine positions. We report that, in the body upright position, the oblique effect remains anchored in allocentric coordinates irrespective of head position. When lying supine, gravitational effects in the plane orthogonal to gravity are discounted. Under these conditions, the oblique effect was less marked than when upright, and anchored in egocentric coordinates. The results are well explained by a simple “compulsory fusion” model in which the head-based and the gravity-based signals are combined with different weightings (30% and 70%, respectively), even when this leads to reduced sensitivity in orientation discrimination.

Anobile, G., Cicchini, G. M. & Burr, D. C. (2015). Number as a primary perceptual attribute: a review, Perception, 1-27. DOI: 10.1177/0301006615602599. PDF

Although humans are the only species to possess language-driven abstract mathematical capacities, we share with many other animals a nonverbal capacity for estimating quantities or numerosity. For some time, researchers have clearly differentiated between small numbers of items—less than about four—referred to as the subitizing range, and larger numbers, where counting or estimation is required. In this review, we examine more recent evidence suggesting a further division, between sets of items greater than the subitizing range, but sparse enough to be individuated as single items; and densely packed stimuli, where they crowd each other into what is better considered as a texture. These two different regimes are psychophysically discriminable in that they follow distinct psychophysical laws and show different dependencies on eccentricity and on luminance levels. But provided the elements are not too crowded (less than about two items per square degree in central vision, less in the periphery), there is little evidence that estimation of numerosity depends on mechanisms responsive to texture. The distinction is important, as the ability to discriminate numerosity, but not texture, correlates with formal maths skills.

Zimmermann, E., Morrone, M. C. & Burr, D. (2015). Visual mislocalization during saccade sequences, Exp Brain Res, 2 (233), 577-585. PDF

Visual objects briefly presented around the time of saccadic eye movements are perceived compressed towards the saccade target. Here, we investigated perisaccadic mislocalization with a double-step saccade paradigm, measuring localization of small probe dots briefly flashed at various times around the sequence of the two saccades. At onset of the first saccade, probe dots were mislocalized towards the first and, to a lesser extent, also towards the second saccade target. However, there was very little mislocalization at the onset of the second saccade. When we increased the presentation duration of the saccade targets prior to onset of the saccade sequence, perisaccadic mislocalization did occur at the onset of the second saccade.

2014 (back to top)

Cicchini, G. M., Anobile, G. & Burr, D. C. (2014). Compressive mapping of number to space reflects dynamic encoding mechanisms, not static logarithmic transform, Proc Natl Acad Sci U S A, 21 (111), 7867-7872. PDF

The mapping of number onto space is fundamental to measurement and mathematics. However, the mapping of young children, unschooled adults, and adults under attentional load shows strong compressive nonlinearities, thought to reflect intrinsic logarithmic encoding mechanisms, which are later "linearized" by education. Here we advance and test an alternative explanation: that the nonlinearity results from adaptive mechanisms incorporating the statistics of recent stimuli. This theory predicts that the response to the current trial should depend on the magnitude of the previous trial, whereas a static logarithmic nonlinearity predicts trialwise independence. We found a strong and highly significant relationship between numberline mapping of the current trial and the magnitude of the previous trial, in both adults and school children, with the current response influenced by up to 15% of the previous trial value. The dependency is sufficient to account for the shape of the numberline, without requiring logarithmic transform. We show that this dynamic strategy results in a reduction of reproduction error, and hence improvement in accuracy.
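The trialwise dependency described above amounts to the current response being pulled towards the magnitude of the preceding trial. A minimal sketch (the function name is ours; the default weight uses the ~15% upper bound quoted in the abstract purely for illustration):

```python
def predicted_numberline_response(current, previous, weight=0.15):
    """Response to the current magnitude, regressed towards the
    magnitude of the previous trial by a fixed weight.

    A static logarithmic transform would instead predict that
    `previous` has no influence at all (trialwise independence).
    """
    return (1 - weight) * current + weight * previous
```

For example, a current magnitude of 20 preceded by a trial of 40 would be mapped as if it were about 23, whereas a repeated magnitude is reproduced unchanged; accumulated over trials with small magnitudes drawn more often, this pull produces a compressive, logarithmic-looking numberline without any log transform.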

Anobile, G., Cicchini, G. M. & Burr, D. C. (2014). Separate mechanisms for perception of numerosity and density, Psychol Sci, 1 (25), 265-270. PDF

Despite the existence of much evidence for a number sense in humans, several researchers have questioned whether number is sensed directly or derived indirectly from texture density. Here, we provide clear evidence that numerosity and density judgments are subserved by distinct mechanisms with different psychophysical characteristics. We measured sensitivity for numerosity discrimination over a wide range of numerosities: For low densities (less than 0.25 dots/deg²), thresholds increased directly with numerosity, following Weber's law; for higher densities, thresholds increased with the square root of texture density, a steady decrease in the Weber fraction. The existence of two different psychophysical systems is inconsistent with a model in which number is derived indirectly from noisy estimates of density and area; rather, it points to the existence of separate mechanisms for estimating density and number. These results provide strong confirmation for the existence of neural mechanisms that sense number directly, rather than indirectly from texture density.

Sciutti, A., Burr, D., Saracco, A., Sandini, G. & Gori, M. (2014). Development of context-dependency in human space perception, Exp Brain Res. PDF

Perception is a complex process, where prior knowledge exerts a fundamental influence over what we see. The use of priors is at the basis of the well-known phenomenon of central tendency: Judgments of almost all quantities (such as length, duration, and number) tend to gravitate toward their mean magnitude. Although such context-dependency is universal in adult perceptual judgments, how it develops with age remains unknown. We asked children from 7 to 14 years of age and adults to reproduce lengths of stimuli drawn from different distributions and evaluated whether judgments were influenced by stimulus context. All participants reproduced the presented length differently depending on the context: The same stimulus was reproduced as shorter, when on average stimuli were short, and as longer, when on average stimuli were long. Interestingly, the relative importance given to the current sensory signal and to priors was almost constant during childhood. This strategy, which in adults is optimal in Bayesian terms, is apparently successful in holding the sensory noise at bay even during development. Hence, the influence of previous knowledge on perception is present already in young children, suggesting that context-dependency is established early in the developing brain.

Zimmermann, E., Morrone, M. C. & Burr, D. C. (2014). Buildup of spatial information over time and across eye-movements, Behavioural Brain Research. PDF

To interact rapidly and effectively with our environment, our brain needs access to a neural representation of the spatial layout of the external world. However, the construction of such a map poses major challenges, as the images on our retinae depend on where the eyes are looking, and shift each time we move our eyes, head and body to explore the world. Research from many laboratories including our own suggests that the visual system does compute spatial maps that are anchored to real-world coordinates. However, the construction of these maps takes time (up to 500 ms) and also attentional resources. We discuss research investigating how retinotopic reference frames are transformed into spatiotopic reference frames, and how this transformation takes time to complete. These results have implications for theories about visual space coordinates and particularly for the current debate about the existence of spatiotopic representations.

Gori, M., Sandini, G., Martinoli, C. & Burr, D. C. (2014). Impairment of auditory spatial localization in congenitally blind human subjects, Brain, Pt 1 (137), 288-293. PDF

Several studies have demonstrated enhanced auditory processing in the blind, suggesting that they compensate for their visual impairment in part with greater sensitivity of the other senses. However, several physiological studies show that early visual deprivation can impact negatively on auditory spatial localization. Here we report for the first time severely impaired auditory localization in the congenitally blind: thresholds for spatially bisecting three consecutive, spatially-distributed sound sources were seriously compromised, on average 4.2-fold typical thresholds, with half of the subjects performing at random. In agreement with previous studies, these subjects showed no deficits on simpler auditory spatial tasks or with auditory temporal bisection, suggesting that the encoding of Euclidean auditory relationships is specifically compromised in the congenitally blind. This points to the importance of visual experience in the construction and calibration of auditory spatial maps, with implications for rehabilitation strategies for the congenitally blind.

Aagten-Murphy, D., Cappagli, G. & Burr, D. (2014). Musical training generalises across modalities and reveals efficient and adaptive mechanisms for reproducing temporal intervals, Acta Psychol (Amst), (147), 25-33. PDF

Expert musicians are able to time their actions accurately and consistently during a musical performance. We investigated how musical expertise influences the ability to reproduce auditory intervals and how this generalises across different techniques and sensory modalities. We first compared various reproduction strategies and interval lengths, to examine the effects in general and to optimise experimental conditions for testing the effect of music, and found that the effects were robust and consistent across different paradigms. Focussing on a 'ready-set-go' paradigm, subjects reproduced time intervals drawn from distributions varying in total length (176, 352 or 704 ms) or in the number of discrete intervals within the total length (3, 5, 11 or 21 discrete intervals). Overall, Musicians performed more veridically than Non-Musicians, and all subjects reproduced auditory-defined intervals more accurately than visually-defined intervals. However, Non-Musicians, particularly with visual stimuli, consistently exhibited a substantial and systematic regression towards the mean interval. When subjects judged intervals from distributions of longer total length they tended to regress more towards the mean, while the ability to discriminate between discrete intervals within the distribution had little influence on subject error. These results are consistent with a Bayesian model that minimizes reproduction errors by incorporating a central tendency prior weighted by the subject's own temporal precision relative to the current distribution of intervals. Finally, a strong correlation was observed between duration of formal musical training and total reproduction errors in both modalities (accounting for 30% of the variance). Taken together these results demonstrate that formal musical training improves temporal reproduction, and that this improvement transfers from audition to vision. They further demonstrate the flexibility of sensorimotor mechanisms in adapting to different task conditions to minimise temporal estimation errors.
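For Gaussian noise, the Bayesian central-tendency account invoked above reduces to a reliability-weighted average of the noisy sensory estimate and the mean of the current interval distribution. A minimal sketch, with our own function name and illustrative noise values:

```python
def reproduced_interval(observed_ms, prior_mean_ms, sigma_sensory, sigma_prior):
    """Reliability-weighted combination of a noisy sensory estimate
    with a prior centred on the mean of recent intervals.

    The noisier the sensory estimate relative to the prior, the
    larger the prior weight, and the stronger the regression towards
    the mean interval (central tendency).
    """
    w_prior = sigma_sensory**2 / (sigma_sensory**2 + sigma_prior**2)
    return w_prior * prior_mean_ms + (1 - w_prior) * observed_ms
```

A precise observer (small `sigma_sensory`, as for a trained musician) reproduces an interval nearly veridically, while a noisy observer is pulled substantially towards the mean, matching the pattern reported for Musicians versus Non-Musicians.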

Zimmermann, E., Morrone, M. C. & Burr, D. C. (2014). The visual component to saccadic compression, J Vis, 12 (14). PDF

Visual objects presented around the time of saccadic eye movements are strongly mislocalized towards the saccadic target, a phenomenon known as "saccadic compression." Here we show that perisaccadic compression is modulated by the presence of a visual saccadic target. When subjects saccaded to the center of the screen with no visible target, perisaccadic localization was more veridical than when tested with a target. Presenting a saccadic target sometime before saccade initiation was sufficient to induce mislocalization. When we systematically varied the onset of the saccade target, we found that it had to be presented around 100 ms before saccade execution to cause strong mislocalization: saccadic targets presented after this time caused progressively less mislocalization. When subjects made a saccade to screen center with a reference object placed at various positions, mislocalization was focused towards the position of the reference object. The results suggest that saccadic compression is a signature of a mechanism attempting to match objects seen before the saccade with those seen after.

Arrighi, R., Togoli, I. & Burr, D. C. (2014). A generalized sense of number, Proc R Soc B. PDF

Much evidence has accumulated to suggest that many animals, including young human infants, possess an abstract sense of approximate quantity, a number sense. Most research has concentrated on apparent numerosity of spatial arrays of dots or other objects, but a truly abstract sense of number should be capable of encoding the numerosity of any set of discrete elements, however displayed and in whatever sensory modality. Here, we use the psychophysical technique of adaptation to study the sense of number for serially presented items. We show that numerosity of both auditory and visual sequences is greatly affected by prior adaptation to slow or rapid sequences of events. The adaptation to visual stimuli was spatially selective (in external, not retinal coordinates), pointing to a sensory rather than cognitive process. However, adaptation generalized across modalities, from auditory to visual and vice versa. Adaptation also generalized across formats: adapting to sequential streams of flashes affected the perceived numerosity of spatial arrays. All these results point to a perceptual system that transcends vision and audition to encode an abstract sense of number in space and in time.

Burr, D. & Cicchini, G. M. (2014). Vision: efficient adaptive coding, Curr Biol, 22 (24), R1096-1098. PDF

Recent studies show that perception is driven not only by the stimuli currently impinging on our senses, but also by the immediate past history. The influence of recent perceptual history on the present reflects the action of efficient mechanisms that exploit temporal redundancies in natural scenes.

Vercillo, T., Burr, D., Sandini, G. & Gori, M. (2014). Children do not recalibrate motor-sensory temporal order after exposure to delayed sensory feedback, Dev Sci. PDF

Prolonged adaptation to delayed sensory feedback to a simple motor act (such as pressing a key) causes recalibration of sensory-motor synchronization, so instantaneous feedback appears to precede the motor act that caused it (Stetson, Cui, Montague & Eagleman, 2006). We investigated whether similar recalibration occurs in school-age children. Although plasticity may be expected to be even greater in children than in adults, we found no evidence of recalibration in children aged 8-11 years. Subjects adapted to delayed feedback for 100 trials, intermittently pressing a key that caused a tone to sound after a 200 ms delay. During the test phase, subjects responded to a visual cue by pressing a key, which triggered a tone to be played at variable intervals before or after the keypress. Subjects judged whether the tone preceded or followed the keypress, yielding psychometric functions estimating the delay when they perceived the tone to be synchronous with the action. The psychometric functions also gave an estimate of the precision of the temporal order judgment. In agreement with previous studies, adaptation caused a shift in perceived synchrony in adults, so the keypress appeared to trail behind the auditory feedback, implying sensory-motor recalibration. However, school children of 8 to 11 years showed no measurable adaptation of perceived simultaneity, even after adaptation with 500 ms lags. Importantly, precision in the simultaneity task also improved with age, and this developmental trend correlated strongly with the magnitude of recalibration. This suggests that lack of recalibration of sensory-motor simultaneity after adaptation in school-age children is related to their poor precision in temporal order judgments. To test this idea we measured recalibration in adult subjects with auditory noise added to the stimuli (which hampered temporal precision). Under these conditions, recalibration was greatly reduced, with the magnitude of recalibration strongly correlating with temporal precision.

2013 (back to top)

Cicchini, G. M., Binda, P., Burr, D. C. & Morrone, M. C. (2013). Transient spatiotopic integration across saccadic eye movements mediates visual stability, J Neurophysiol, 4 (109), 1117-1125. PDF

Eye movements pose major problems to the visual system, because each new saccade changes the mapping of external objects on the retina. It is known that stimuli briefly presented around the time of saccades are systematically mislocalized, whereas continuously visible objects are perceived as spatially stable even when they undergo large transsaccadic displacements. In this study we investigated the relationship between these two phenomena and measured how human subjects perceive the position of pairs of bars briefly displayed around the time of large horizontal saccades. We show that they interact strongly, with the perisaccadic bar being drawn toward the other, dramatically altering the pattern of perisaccadic mislocalization. The interaction field extends over a wide range (200 ms and 20 degrees ) and is oriented along the retinotopic trajectory of the saccade-induced motion, suggesting a mechanism that integrates pre- and postsaccadic stimuli at different retinal locations but similar external positions. We show how transient changes in spatial integration mechanisms, which are consistent with the present psychophysical results and with the properties of "remapping cells" reported in the literature, can create transient craniotopy by merging the distinct retinal images of the pre- and postsaccadic fixations to signal a single stable object.

Zimmermann, E., Morrone, M. C., Fink, G. R. & Burr, D. (2013). Spatiotopic neural representations develop slowly across saccades, Curr Biol, 5 (23), R193-194. PDF

One of the long-standing unsolved mysteries of visual neuroscience is how the world remains apparently stable in the face of continuous movements of eyes, head and body. Many factors seem to contribute to this stability, including rapid updating mechanisms that temporarily remap the visual input to compensate for the impending saccade [1]. However, there is also a growing body of evidence pointing to more long-lasting spatiotopic neural representations, which remain solid in external rather than retinal coordinates [2-6]. In this study, we show that these spatiotopic representations take hundreds of milliseconds to build up robustly.

Burr, D. Motion Perception: Human Psychophysics. In J. S. Werner & L. M. Chalupa (Eds.), The New Visual Neuroscience. MIT Press. PDF

Turi, M. & Burr, D. (2013). The "motion silencing" illusion results from global motion and crowding, J Vis, 5 (13). PDF

Suchow and Alvarez (2011) recently devised a striking illusion, where objects changing in color, luminance, size, or shape appear to stop changing when they move. They refer to the illusion as "motion silencing of awareness to visual change." Here we present evidence that the illusion results from two perceptual processes: global motion and crowding. We adapted Suchow and Alvarez's stimulus to three concentric rings of dots, a central ring of "target dots" flanked on either side by similarly moving flanker dots. Subjects had to identify in which of two presentations the target dots were continuously changing (sinusoidally) in size, as distinct from the other interval in which size was constant. The results show: (a) Motion silencing depends on target speed, with a threshold around 0.2 rotations per second (corresponding to about 10 degrees/s linear motion). (b) Silencing depends on both target-flanker spacing and eccentricity, with critical spacing about half eccentricity, consistent with Bouma's law. (c) The critical spacing was independent of stimulus size, again consistent with Bouma's law. (d) Critical spacing depended strongly on contrast polarity. All results imply that the "motion silencing" illusion may result from crowding.

Lunghi, C., Burr, D. C. & Morrone, M. C. (2013). Long-term effects of monocular deprivation revealed with binocular rivalry gratings modulated in luminance and in color, J Vis, 6 (13). PDF

During development, within a specific temporal window called the critical period, the mammalian visual cortex is highly plastic and literally shaped by visual experience; to what extent this extraordinary plasticity is retained in the adult brain is still a debated issue. We tested the residual plastic potential of the adult visual cortex for both achromatic and chromatic vision by measuring binocular rivalry in adult humans following 150 minutes of monocular patching. Paradoxically, monocular deprivation resulted in lengthening of the mean phase duration of both luminance-modulated and equiluminant stimuli for the deprived eye and complementary shortening of nondeprived phase durations, suggesting an initial homeostatic compensation for the lack of information following monocular deprivation. When equiluminant gratings were tested, the effect was measurable for at least 180 minutes after reexposure to binocular vision, compared with 90 minutes for achromatic gratings. Our results suggest that chromatic vision shows a high degree of plasticity, retaining the effect for a duration (180 minutes) longer than that of the deprivation period (150 minutes) and twice as long as that found with achromatic gratings. The results are in line with evidence showing a higher vulnerability of the P pathway to the effects of visual deprivation during development and a slower development of chromatic vision in humans.

Orchard-Mills, E., Leung, J., Burr, D., Morrone, M. C., Wufong, E., Carlile, S., et al. (2013). A mechanism for detecting coincidence of auditory and visual spatial signals, Multisens Res, 4 (26), 333-345. PDF

Information about the world is captured by our separate senses, and must be integrated to yield a unified representation. This raises the issue of which signals should be integrated and which should remain separate, as inappropriate integration will lead to misrepresentation and distortions. One strong cue suggesting that separate signals arise from a single source is coincidence, in space and in time. We measured increment thresholds for discriminating spatial intervals defined by pairs of simultaneously presented targets, one flash and one auditory sound, for various separations. We report a 'dipper function', in which thresholds follow a 'U-shaped' curve, with thresholds initially decreasing with spatial interval, and then increasing for larger separations. The presence of a dip in the audiovisual increment-discrimination function is evidence that the auditory and visual signals both input to a common mechanism encoding spatial separation, and a simple filter model with a sigmoidal transduction function simulated the results well. The function of an audiovisual spatial filter may be to detect coincidence, a fundamental cue guiding whether to integrate or segregate.

Burr, D., Rocca, E. D. & Morrone, M. C. (2013). Contextual effects in interval-duration judgements in vision, audition and touch, Exp Brain Res. PDF

We examined the effect of temporal context on discrimination of intervals marked by auditory, visual and tactile stimuli. Subjects were asked to compare the duration of the interval immediately preceded by an irrelevant "distractor" stimulus with an interval with no distractor. For short interval durations, the presence of the distractor affected greatly the apparent duration of the test stimulus: short distractors caused the test interval to appear shorter and vice versa. For very short reference durations (≤ 100 ms), the contextual effects were large, changing perceived duration by up to a factor of two. The effect of distractors reduced steadily for longer reference durations, to zero effect for durations greater than 500 ms. We found similar results for intervals defined by visual flashes, auditory tones and brief finger vibrations, all falling to zero effect at 500 ms. Under appropriate conditions, there were strong cross-modal interactions, particularly from audition to vision. We also measured the Weber fractions for duration discrimination and showed that under the conditions of this experiment, Weber fractions decreased steadily with duration, following a square-root law, similarly for all three modalities. The magnitude of the effect of the distractors on apparent duration correlated well with Weber fraction, showing that when duration discrimination was relatively more precise, the context dependency was less. The results were well fit by a simple Bayesian model combining noisy estimates of duration with the action of a resonance-like mechanism that tended to regularize the sound sequence intervals.

Burr, D., Della Rocca, E. & Morrone, M. C. (2013). Erratum to: Contextual effects in interval-duration judgements in vision, audition and touch, Exp Brain Res. PDF

Anobile, G., Stievano, P. & Burr, D. C. (2013). Visual sustained attention and numerosity sensitivity correlate with math achievement in children, J Exp Child Psychol, 2 (116), 380-391. PDF

In this study, we investigated in school-age children the relationship among mathematical performance, the perception of numerosity (discrimination and mapping to number line), and sustained visual attention. The results (on 68 children between 8 and 11 years of age) show that attention and numerosity perception predict math scores but not reading performance. Even after controlling for several variables, including age, gender, nonverbal IQ, and reading accuracy, attention remained correlated with math skills and numerosity discrimination. These findings support previous reports showing the interrelationship between visual attention and both numerosity perception and math performance. They also suggest that attentional deficits may be implicated in disturbances such as developmental dyscalculia.

Poletti, M., Burr, D. C. & Rucci, M. (2013). Optimal Multimodal Integration in Spatial Localization, J Neurosci, 33(35), 14259-14268. PDF

Saccadic eye movements facilitate rapid and efficient exploration of visual scenes, but also pose serious challenges to establishing reliable spatial representations. This process presumably depends on extraretinal information about eye position, but it is still unclear whether afferent or efferent signals are implicated and how these signals are combined with the visual input. Using a novel gaze-contingent search paradigm with highly controlled retinal stimulation, we examined the performance of human observers in locating a previously fixated target after a variable number of saccades, a task that generates contrasting predictions for different updating mechanisms. We show that while localization accuracy is unaffected by saccades, localization precision deteriorates nonlinearly, revealing a statistically optimal combination of retinal and extraretinal signals. These results provide direct evidence for optimal multimodal integration in the updating of spatial representations and elucidate the contributions of corollary discharge signals and eye proprioception.

Zimmermann, E., Morrone, M. C. & Burr, D. C. (2013). Spatial position information accumulates steadily over time, J Neurosci, 33(47), 18396-18401. PDF

One of the more enduring mysteries of neuroscience is how the visual system constructs robust maps of the world that remain stable in the face of frequent eye movements. Here we show that encoding the position of objects in external space is a relatively slow process, building up over hundreds of milliseconds. We display targets to which human subjects saccade after a variable preview duration. As they saccade, the target is displaced leftwards or rightwards, and subjects report the displacement direction. When subjects saccade to targets without delay, sensitivity is poor; but if the target is viewed for 300-500 ms before saccading, sensitivity is similar to that during fixation with a strong visual mask to dampen transients. These results suggest that the poor displacement thresholds usually observed in the “saccadic suppression of displacement” paradigm do not reflect the action of special mechanisms conferring saccadic stability, but the fact that the target has had insufficient time to be encoded in memory. Under more natural conditions, trans-saccadic displacement detection is as good as in fixation, when the displacement transients are masked.

Pooresmaeili, A., Cicchini, G. M., Morrone, M. C. & Burr, D. C. (2013). Spatiotemporal filtering and motion illusions, Journal of Vision, 13(10), 21. PDF

Our group has long championed the idea that perceptual processing of information can be anchored in a dynamic coordinate system that need not correspond to the instantaneous retinal representation...

2012 (back to top)

Anobile, G., Cicchini, G. M. & Burr, D. C. (2012). Linear mapping of numbers onto space requires attention, Cognition, 3 (122), 454-459. PDF

Mapping of number onto space is fundamental to mathematics and measurement. Previous research suggests that while typical adults with mathematical schooling map numbers veridically onto a linear scale, pre-school children and adults without formal mathematics training, as well as individuals with dyscalculia, show strong compressive, logarithmic-like non-linearities when mapping both symbolic and non-symbolic numbers onto the numberline. Here we show that the use of the linear scale is dependent on attentional resources. We asked typical adults to position clouds of dots on a numberline of various lengths. In agreement with previous research, they did so veridically under normal conditions, but when asked to perform a concurrent attentionally-demanding conjunction task, the mapping followed a compressive, non-linear function. We model the non-linearity both by the commonly assumed logarithmic transform, and also with a Bayesian model of central tendency. These results suggest that veridical representation of numerosity requires attentional mechanisms.
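
The two candidate accounts of the compressive mapping can each be written in one line: a logarithmic transform of numerosity, or a Bayesian central-tendency pull of the noisy estimate toward the mean of the tested range. The comparison below is purely illustrative; the weight and range values are assumptions, not fitted parameters from the paper.

```python
import math

LINE_MAX = 100  # numberline spanning 1..100, as in typical numberline tasks

def log_mapping(n):
    """Compressive, logarithmic-like placement on the numberline."""
    return LINE_MAX * math.log(n) / math.log(LINE_MAX)

def central_tendency_mapping(n, range_mean=50, w=0.6):
    """Noisy estimate of n pulled toward the mean of the tested range;
    w is the weight on the sensory estimate (illustrative value)."""
    return w * n + (1 - w) * range_mean

for n in (5, 20, 80):
    print(n, round(log_mapping(n), 1), round(central_tendency_mapping(n), 1))
```

Both functions compress the upper range: equal numerical steps map onto shrinking distances on the line, which is the signature behaviour observed under attentional load.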

Cicchini, G. M., Arrighi, R., Cecchetti, L., Giusti, M. & Burr, D. C. (2012). Optimal Encoding of Interval Timing in Expert Percussionists, J Neurosci, 3 (32), 1056-1060. PDF

We measured temporal reproduction in human subjects with various levels of musical expertise: expert drummers, string musicians, and non-musicians. While duration reproduction of the non-percussionists showed a characteristic central tendency or regression to the mean, drummers responded veridically. Furthermore, when the stimuli were auditory tones rather than flashes, all subjects responded veridically. The behavior of all three groups in both modalities is well explained by a Bayesian model that seeks to minimize reproduction errors by incorporating a central tendency prior, a probability density function centered at the mean duration of the sample. We measured separately temporal precision thresholds with a bisection task; thresholds were twice as low in drummers as in the other two groups. These estimates of temporal precision, together with an adaptable Bayesian prior, predict well the reproduction results and the central tendency strategy under all conditions and for all subject groups. These results highlight the efficiency and flexibility of sensorimotor mechanisms estimating temporal duration.

Pooresmaeili, A., Cicchini, G. M., Morrone, M. C. & Burr, D. (2012). "Non-retinotopic processing" in Ternus motion displays modeled by spatiotemporal filters, J Vis, 1 (12), PDF

Recently, M. Boi, H. Ogmen, J. Krummenacher, T. U. Otto, & M. H. Herzog (2009) reported a fascinating visual effect, where the direction of apparent motion was disambiguated by cues along the path of apparent motion, the Ternus-Pikler group motion, even though no actual movement occurs in this stimulus. They referred to their study as a "litmus test" to distinguish "non-retinotopic" (motion-based) from "retinotopic" (retina-based) image processing. We adapted the test to one with simple grating stimuli that could be more readily modeled and replicated their psychophysical results quantitatively with this stimulus. We then modeled our experiments in 3D (x, y, t) Fourier space and demonstrated that the observed perceptual effects are readily accounted for by integration of information within a detector that is oriented in space and time, in a similar way to previous explanations of other motion illusions. This demonstration brings the study of Boi et al. into the more general context of perception of moving objects.
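
The key idea — integration by a detector oriented in space and time — can be shown with a toy computation: a grating drifting rightward traces an oriented pattern in the (x, t) plane, and a receptive field sharing that space-time orientation responds far more strongly than its mirror-image (leftward) twin. Everything below (array sizes, frequencies, envelope width) is an illustrative sketch, not the model fitted in the paper.

```python
import numpy as np

# Build an (x, t) representation of a grating drifting rightward.
nx, nt = 64, 64
x = np.linspace(-1, 1, nx)
t = np.linspace(-1, 1, nt)
X, T = np.meshgrid(x, t)

f_spat, f_temp = 4.0, 4.0                                 # cycles per unit
stimulus = np.cos(2 * np.pi * (f_spat * X - f_temp * T))  # drifts rightward

def st_filter(direction):
    """Gabor receptive field oriented in space-time (direction = +1 or -1)."""
    envelope = np.exp(-(X**2 + T**2) / 0.1)
    carrier = np.cos(2 * np.pi * (f_spat * X - direction * f_temp * T))
    return envelope * carrier

# 'Energy' from a single phase only; a full motion-energy model would sum
# the squared outputs of a quadrature pair of filters.
e_right = np.sum(stimulus * st_filter(+1)) ** 2
e_left = np.sum(stimulus * st_filter(-1)) ** 2
print(e_right > e_left)  # True: the direction-matched detector wins
```

In Fourier terms, the rightward stimulus and the rightward filter occupy the same oriented region of (fx, ft) space, which is why the matched response dominates.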

Zimmermann, E., Morrone, M. C. & Burr, D. (2012). Visual motion distorts visual and motor space, J Vis, 2 (12), PDF


Panichi, M., Burr, D., Morrone, M. C. & Baldassi, S. (2012). Spatiotemporal dynamics of perisaccadic remapping in humans revealed by classification images, J Vis, 4 (12), 11. PDF

We actively scan our environment with fast ballistic movements called saccades, which create large and rapid displacements of the image on the retina. At the time of saccades, vision becomes transiently distorted in many ways: Briefly flashed stimuli are displaced in space and in time, and spatial and temporal intervals appear compressed. Here we apply the psychophysical technique of classification images to study the spatiotemporal dynamics of visual mechanisms during saccades. We show that saccades cause gross distortions of the classification images. Before the onset of saccadic eye movements, the positive lobes of the images become enlarged in both space and time and are also shifted in a systematic manner toward the pre-saccadic fixation (in space) and anticipated in time by about 50 ms. The transient reorganization creates a spatiotemporal organization oriented in the direction of saccadic-induced motion at the time of saccades, providing a potential mechanism for integrating stimuli across saccades, facilitating stable and continuous vision in the face of constant eye movements.

Turi, M. & Burr, D. (2012). Spatiotopic perceptual maps in humans: evidence from motion adaptation, Proc Biol Sci, 1740 (279), 3091-3097. PDF

How our perceptual experience of the world remains stable and continuous despite the frequent eye movements that reposition our gaze remains very much a mystery. One possibility is that our brain actively constructs a spatiotopic representation of the world, which is anchored in external - or at least head-centred - coordinates. In this study, we show that the positional motion aftereffect (the change in apparent position after adaptation to motion) is spatially selective in external rather than retinal coordinates, whereas the classic motion aftereffect (the illusion of motion after prolonged inspection of a moving source) is selective in retinotopic coordinates. The results provide clear evidence for a spatiotopic map in humans: one which can be influenced by image motion.

Anobile, G., Turi, M., Cicchini, G. M. & Burr, D. C. (2012). The effects of cross-sensory attentional demand on subitizing and on mapping number onto space, Vision Res, PDF

Various aspects of numerosity judgments, especially subitizing and the mapping of number onto space, depend strongly on attentional resources. Here we use a dual-task paradigm to investigate the effects of cross-sensory attentional demands on visual subitizing and spatial mapping. The results show that subitizing is strongly dependent on attentional resources, far more so than is estimation of higher numerosities. But unlike many other sensory tasks, visual subitizing is equally affected by concurrent attentionally demanding auditory and tactile tasks as it is by visual tasks, suggesting that subitizing may be amodal. Mapping number onto space was also strongly affected by attention, but only when the dual-task was in the visual modality. The non-linearities in numberline mapping under attentional load are well explained by a Bayesian model of central tendency.

Gori, M., Tinelli, F., Sandini, G., Cioni, G. & Burr, D. (2012). Impaired visual size-discrimination in children with movement disorders, Neuropsychologia, 8 (50), 1838-1843. PDF

Multisensory integration of spatial information occurs late in childhood, at around eight years (Gori, Del Viva, Sandini, & Burr, 2008). For younger children, the haptic system dominates size discrimination and vision dominates orientation discrimination: the dominance may reflect sensory calibration, and could have direct consequences for children born with specific sensory disabilities. Here we measure thresholds for visual discrimination of orientation and size in children with movement disorders of the upper limbs. Visual orientation discrimination was very similar to that of age-matched typical children, but visual size discrimination thresholds were far worse, in all eight individuals with early-onset movement disorder. This surprising and counterintuitive result is readily explained by the cross-sensory calibration hypothesis: when the haptic sense is unavailable for manipulation, it cannot be readily used to estimate size, and hence to calibrate the visual experience of size: visual discrimination is subsequently impaired. This complements a previous study showing that non-sighted children have reduced acuity for haptic orientation, but not haptic size, discriminations (Gori, Sandini, Martinoli, & Burr, 2010). Together these studies show that when either vision or haptic manipulation is impaired, the impairment also affects the complementary sensory system that it normally calibrates.


Pellicano, E. & Burr, D. (2012). When the world becomes 'too real': a Bayesian explanation of autistic perception, Trends Cogn Sci, 10 (16), 504-510. PDF

Perceptual experience is influenced both by incoming sensory information and prior knowledge about the world, a concept recently formalised within Bayesian decision theory. We propose that Bayesian models can be applied to autism - a neurodevelopmental condition with atypicalities in sensation and perception - to pinpoint fundamental differences in perceptual mechanisms. We suggest specifically that attenuated Bayesian priors - 'hypo-priors' - may be responsible for the unique perceptual experience of autistic people, leading to a tendency to perceive the world more accurately rather than modulated by prior experience. In this account, we consider how hypo-priors might explain key features of autism - the broad range of sensory and other non-social atypicalities - in addition to the phenomenological differences in autistic perception.

Burr, D. C. & Morrone, M. C. (2012). Constructing stable spatial maps of the world, Perception, 11 (41), 1355-1372. PDF

To interact rapidly and effectively with our environment, our brain needs access to a neural representation—or map—of the spatial layout of the external world. However, the construction of such a map poses major challenges to the visual system, given that the images on our retinae depend on where the eyes are looking, and shift each time we move our eyes, head, and body to explore the world. Much research has been devoted to how the stability is achieved, with the debate often polarized between the utility of spatiotopic maps (that remain solid in external coordinates), as opposed to transiently updated retinotopic maps. Our research suggests that the visual system uses both strategies to maintain stability. fMRI, motion-adaptation, and saccade-adaptation studies demonstrate and characterize spatiotopic neural maps within the dorsal visual stream that remain solid in external rather than retinal coordinates. However, the construction of these maps takes time (up to 500 ms) and attentional resources. To solve the immediate problems created by individual saccades, we postulate the existence of a separate system to bridge each saccade with neural units that are ‘transiently craniotopic’. These units prepare for the effects of saccades with a shift of their receptive fields before the saccade starts, then relaxing back into their standard position during the saccade, compensating for its action. Psychophysical studies investigating localization of stimuli flashed briefly around the time of saccades provide strong support for these neural mechanisms, and show quantitatively how they integrate information across saccades. This transient system cooperates with the spatiotopic mechanism to provide a useful map to guide interactions with our environment: one rapid and transitory, bringing into play the high-resolution visual areas; the other slow, long-lasting, and low-resolution, useful for interacting with the world.


2011 (back to top)

Burr, D. C., Cicchini, G. M., Arrighi, R. & Morrone, M. C. (2011). Spatiotopic selectivity of adaptation-based compression of event duration, J Vis, 2 (11), 21; author reply 21a. PDF

A. Bruno, I. Ayhan, and A. Johnston (2010) have recently challenged our report of spatiotopic selectivity for adaptation of event time (D. Burr, A. Tozzi, & M. C. Morrone, 2007) and also our claim that retinotopic adaptation of event time depends on perceived speed. To help the reader judge this issue, we present here a large body of data accumulated in our laboratories over the last few years, all confirming our original conclusions. We also point out that where Bruno et al. made experimental measurements (rather than relying on theoretical reasoning), they too find clearly significant spatiotopically tuned adaptation-based compression of event time, though of lower magnitude than ours. We speculate on the reasons for the differences in magnitude.

Lunghi, C., Burr, D. C. & Morrone, M. C. (2011). Brief periods of monocular deprivation disrupt ocular balance in human adult visual cortex, Curr Biol, 21(14), R538-539. PDF

Neuroplasticity is a fundamental property of the developing mammalian visual system, with residual potential in adult human cortex [1]. A short period of abnormal visual experience (such as occlusion of one eye) before closure of the critical period has dramatic and permanent neural consequences, reshaping visual cortical organization in favour of the non-deprived eye [2,3]. We used binocular rivalry [4] - a sensitive probe of neural competition - to demonstrate that adult human visual cortex retains a surprisingly high degree of neural plasticity, with important perceptual consequences. We report that 150 minutes of monocular deprivation strongly affects the dynamics of binocular rivalry, unexpectedly causing the deprived eye to prevail in conscious perception twice as much as the non-deprived eye, with significant effects for up to 90 minutes. Apparent contrast of stimuli presented to the deprived eye was also increased, suggesting that the deprivation acts by up-regulation of cortical gain-control mechanisms of the deprived eye. The results suggest that adult visual cortex retains a good deal of plasticity that could be important in reaction to sensory loss.

Burr, D. (2011). Visual perception: more than meets the eye, Curr Biol, 4 (21), R159-161. PDF

A recent study shows that objects changing in colour, luminance, size or shape appear to stop changing when they move. These and other compelling illusions provide tantalizing clues about the mechanisms and limitations of object analysis.

Burr, D. & Thompson, P. (2011). Motion psychophysics: 1985-2010, Vision Res, PDF

This review traces progress made in the field of visual motion research from 1985 through to 2010. While it is certainly not exhaustive, it attempts to cover most of the major achievements during that period, and speculate on where the field is heading.

Burr, D. C. & Morrone, M. C. (2011). Spatiotopic coding and remapping in humans, Philos Trans R Soc Lond B Biol Sci, 1564 (366), 504-515. PDF

How our perceptual experience of the world remains stable and continuous in the face of continuous rapid eye movements still remains a mystery. This review discusses some recent progress towards understanding the neural and psychophysical processes that accompany these eye movements. We first report recent evidence from imaging studies in humans showing that many brain regions are tuned in spatiotopic coordinates, but only for items that are actively attended. We then describe a series of experiments measuring the spatial and temporal phenomena that occur around the time of saccades, and discuss how these could be related to visual stability. Finally, we introduce the concept of the spatio-temporal receptive field to describe the local spatiotopicity exhibited by many neurons when the eyes move.

Crespi, S., Biagi, L., d'Avossa, G., Burr, D. C., Tosetti, M. & Morrone, M. C. (2011). Spatiotopic Coding of BOLD Signal in Human Visual Cortex Depends on Spatial Attention, PLoS One, 7 (6), e21661. PDF

The neural substrate of the phenomenological experience of a stable visual world remains obscure. One possible mechanism would be to construct spatiotopic neural maps where the response is selective to the position of the stimulus in external space, rather than to retinal eccentricities, but evidence for these maps has been inconsistent. Here we show, with fMRI, that when human subjects perform concomitantly a demanding attentive task on stimuli displayed at the fovea, BOLD responses evoked by moving stimuli irrelevant to the task were mostly tuned in retinotopic coordinates. However, under more unconstrained conditions, where subjects could attend easily to the motion stimuli, BOLD responses were tuned not in retinal but in external coordinates (spatiotopic selectivity) in many visual areas, including MT, MST, LO and V6, agreeing with our previous fMRI study. These results indicate that spatial attention may play an important role in mediating spatiotopic selectivity.

Gori, M., Mazzilli, G., Sandini, G. & Burr, D. (2011). Cross-Sensory Facilitation Reveals Neural Interactions between Visual and Tactile Motion in Humans, Front Psychol, (2), 55. PDF

Many recent studies show that the human brain integrates information across the different senses and that stimuli of one sensory modality can enhance the perception of other modalities. Here we study the processes that mediate cross-modal facilitation and summation between visual and tactile motion. We find that while summation produced a generic, non-specific improvement of thresholds, probably reflecting higher-order interaction of decision signals, facilitation reveals a strong, direction-specific interaction, which we believe reflects sensory interactions. We measured visual and tactile velocity discrimination thresholds over a wide range of base velocities and conditions. Thresholds for both visual and tactile stimuli showed the characteristic "dipper function," with the minimum thresholds occurring at a given "pedestal speed." When visual and tactile coherent stimuli were combined (summation condition) the thresholds for these multisensory stimuli also showed a "dipper function," with the minimum thresholds occurring in a similar range to that for unisensory signals. However, the improvement of multisensory thresholds was weak and not directionally specific, well predicted by the maximum-likelihood estimation model (agreeing with previous research). A different technique (facilitation) did, however, reveal direction-specific enhancement. Adding a non-informative "pedestal" motion stimulus in one sensory modality (vision or touch) selectively lowered thresholds in the other, by the same amount as pedestals in the same modality. Facilitation did not occur for neutral stimuli like sounds (that would also have reduced temporal uncertainty), nor for motion in the opposite direction, even in blocked trials where the subjects knew that the motion was in the opposite direction, showing that the facilitation was not under subject control. Cross-sensory facilitation is strong evidence for functionally relevant cross-sensory integration at early levels of sensory processing.
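
The maximum-likelihood prediction mentioned for the summation condition is simply inverse-variance weighting of two independent cues, which caps the possible bimodal improvement at a factor of √2 when the unimodal thresholds are equal. A minimal sketch (threshold values illustrative, not data from the paper):

```python
def mle_threshold(sigma_v, sigma_t):
    """Bimodal threshold predicted by maximum-likelihood (inverse-variance
    weighted) combination of two independent unimodal cues."""
    return (sigma_v**2 * sigma_t**2 / (sigma_v**2 + sigma_t**2)) ** 0.5

print(mle_threshold(1.0, 1.0))  # equal cues: 1/sqrt(2) ~ 0.707, maximal gain
print(mle_threshold(1.0, 3.0))  # unequal cues: dominated by the reliable cue
```

Because the predicted gain is modest and direction-blind, the strong, direction-specific facilitation reported here points to an interaction beyond this decision-level combination rule.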

Binda, P., Morrone, M. C., Ross, J. & Burr, D. C. (2011). Underestimation of perceived number at the time of saccades, Vision Res, 1 (51), 34-42. PDF

Saccadic eye movements produce transient distortions in both space and time. Mounting evidence suggests that space and time perception are linked, and associated with the perception of another important perceptual attribute, numerosity. Here we investigate the effect of saccades on the perceived numerosity of briefly presented arrays of visual elements. We report a systematic underestimation of numerosity for stimuli flashed just before or during saccades, of about 35% of the reference numerosity. The bias is observed only for relatively large arrays of visual elements, in line with the notion that a distinct perceptual mechanism is involved with enumeration of small numerosities in the 'subitizing' range. This study provides further evidence for the notion that space, time and number share common neural representations, all affected by saccades.

Arrighi, R., Lunardi, R. & Burr, D. (2011). Vision and audition do not share attentional resources in sustained tasks, Front Psychol, (2), 56. PDF

Our perceptual capacities are limited by attentional resources. One important question is whether these resources are allocated separately to each sense or shared between them. We addressed this issue by asking subjects to perform a double task, either in the same modality or in different modalities (vision and audition). The primary task was a multiple object-tracking task (Pylyshyn and Storm, 1988), in which observers were required to track between 2 and 5 dots for 4 s. Concurrently, they were required to identify either which out of three gratings spaced over the interval differed in contrast or, in the auditory version of the same task, which tone differed in frequency relative to the two reference tones. The results show that while the concurrent visual contrast discrimination reduced tracking ability by about 0.7 d', the concurrent auditory task had virtually no effect. This confirms previous reports that vision and audition use separate attentional resources, consistent with fMRI findings of attentional effects as early as V1 and A1. The results have clear implications for effective design of instrumentation and forms of audio-visual communication devices.

Burr, D. C., Anobile, G. & Turi, M. (2011). Adaptation Affects Both High and Low (Subitized) Numbers Under Conditions of High Attentional Load, Seeing and Perceiving, (24), 141-150. PDF

It has recently been reported that, like most sensory systems, numerosity is subject to adaptation. However, the effect seemed to be limited to numerosity estimation outside the subitizing range. In this study we show that low numbers, clearly in the subitizing range, are adaptable under conditions of high attentional load. These results support the idea that numerosity is detected by a perceptual mechanism that operates over the entire range of numbers, supplemented by an attention-based system for small numbers (subitizing).

Tomassini, A., Gori, M., Burr, D., Sandini, G. & Morrone, M. C. (2011). Perceived duration of visual and tactile stimuli depends on perceived speed, Front Integr Neurosci, (5), 51. PDF

It is known that the perceived duration of visual stimuli is strongly influenced by speed: faster moving stimuli appear to last longer. To test whether this is a general property of sensory systems we asked participants to reproduce the duration of visual and tactile gratings, and visuo-tactile gratings moving at a variable speed (3.5–15 cm/s) for three different durations (400, 600, and 800 ms). For both modalities, the apparent duration of the stimulus increased strongly with stimulus speed, more so for tactile than for visual stimuli. In addition, visual stimuli were perceived to last approximately 200 ms longer than tactile stimuli. The apparent duration of visuo-tactile stimuli lay between the unimodal estimates, as the Bayesian account predicts, but the bimodal precision of the reproduction did not show the theoretical improvement. A cross-modal speed-matching task revealed that visual stimuli were perceived to move faster than tactile stimuli. To test whether the large difference in the perceived duration of visual and tactile stimuli resulted from the difference in their perceived speed, we repeated the time reproduction task with visual and tactile stimuli matched in apparent speed. This reduced, but did not completely eliminate the difference in apparent duration. These results show that for both vision and touch, perceived duration depends on speed, pointing to common strategies of time perception.

Zimmermann, E., Burr, D. C. & Morrone, M. C. (2011). Spatiotopic Visual Maps Revealed by Saccadic Adaptation in Humans, Curr Biol, 21(16), 1380-1384. PDF

Saccadic adaptation is a powerful experimental paradigm to probe the mechanisms of eye movement control and spatial vision, in which saccadic amplitudes change in response to false visual feedback. The adaptation occurs primarily in the motor system, but there is also evidence for visual adaptation, depending on the size and the permanence of the postsaccadic error. Here we confirm that adaptation has a strong visual component and show that the visual component of the adaptation is spatially selective in external, not retinal coordinates. Subjects performed a memory-guided, double-saccade, outward-adaptation task designed to maximize visual adaptation and to dissociate the visual and motor corrections. When the memorized saccadic target was in the same position (in external space) as that used in the adaptation training, saccade targeting was strongly influenced by adaptation (even if not matched in retinal or cranial position), but when in the same retinal or cranial but different external spatial position, targeting was unaffected by adaptation, demonstrating unequivocal spatiotopic selectivity. These results point to the existence of a spatiotopic neural representation for eye movement control that adapts in response to saccade error signals.

Arrighi, R., Cartocci, G. & Burr, D. (2011). Reduced perceptual sensitivity for biological motion in paraplegia patients, Curr Biol, 22 (21), R910-911. PDF

Physiological and psychophysical studies suggest that the perception and execution of movement may be linked. Here we ask whether severe impairment of locomotion could impact on the capacity to perceive human locomotion. We measured sensitivity for the perception of point-light walkers – animation sequences of human biological motion portrayed by only the joints – in patients with severe spinal injury. These patients showed a huge (nearly three-fold) reduction of sensitivity for detecting and for discriminating the direction of biological motion compared with healthy controls, and also a smaller (~40%) reduction in sensitivity to simple translational motion. However, there was no statistically significant reduction in contrast sensitivity for discriminating the orientation of static gratings. The results point to an interaction between perceiving and producing motion, implicating shared algorithms and neural mechanisms.


2010 (back to top)

Morrone, M. C., Cicchini, G. M. & Burr, D. C. (2010). Spatial maps for time and motion, Exp Brain Res, 2 (206), 121-128. PDF

In this article, we review recent research studying the mechanisms for transforming coordinate systems to encode space, time and motion. A range of studies using functional imaging and psychophysical techniques reveals mechanisms in the human brain for encoding information in external rather than retinal coordinates. This reinforces the idea of a tight relationship between space and time in the parietal cortex of primates.

Burr, D. C., Ross, J., Binda, P. & Morrone, M. C. (2010). Saccades compress space, time and number, Trends Cogn Sci, 12 (14), 528-533. PDF

It has been suggested that space, time and number are represented on a common subjective scale. Saccadic eye movements provide a fascinating test. Saccades compress the perceived magnitude of spatial separations and temporal intervals to approximately half of their true value. The question arises as to whether saccades also compress number. They do, and compression follows a very similar time course for all three attributes: it is maximal at saccadic onset and decreases to veridicality within a window of approximately 50 ms. These results reinforce the suggestion of a common perceptual metric, which is probably mediated by the intraparietal cortex; they further suggest that before each saccade the common metric for all three is reset, possibly to pave the way for a fresh analysis of the post-saccadic situation.

Burr, D. C., Turi, M. & Anobile, G. (2010). Subitizing but not estimation of numerosity requires attentional resources, J Vis, 6 (10), 20. PDF

The numerosity of small numbers of objects, up to about four, can be rapidly appraised without error, a phenomenon known as subitizing. Larger numbers can either be counted, accurately but slowly, or estimated, rapidly but with errors. There has been some debate as to whether subitizing uses the same or different mechanisms than those of higher numerical ranges and whether it requires attentional resources. We measure subjects' accuracy and precision in making rapid judgments of numerosity for target numbers spanning the subitizing and estimation ranges while manipulating the attentional load, both with a spatial dual task and the "attentional blink" dual-task paradigm. The results of both attentional manipulations were similar. In the high-load attentional condition, Weber fractions were similar in the subitizing (2-4) and estimation (5-7) ranges (10-15%). In the low-load and single-task condition, Weber fractions substantially improved in the subitizing range, becoming nearly error-free, while the estimation range was relatively unaffected. The results show that the mechanisms operating over the subitizing and estimation ranges are not identical. We suggest that pre-attentive estimation mechanisms work at all ranges, but in the subitizing range, attentive mechanisms also come into play.
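The Weber fraction used in this study (response variability divided by mean response) can be illustrated with a minimal sketch; the trial data below are invented for illustration, not taken from the paper:

```python
import statistics

def weber_fraction(estimates):
    """Weber fraction: response variability normalized by mean response."""
    return statistics.stdev(estimates) / statistics.mean(estimates)

# Hypothetical repeated numerosity estimates for a 6-item display
low_load  = [6, 6, 6, 7, 6, 6, 5, 6]   # nearly error-free (subitizing range, full attention)
high_load = [5, 7, 6, 8, 5, 7, 6, 4]   # noisier under attentional load

print(round(weber_fraction(low_load), 3))
print(round(weber_fraction(high_load), 3))
```

On this account, removing the attentional load should shrink the fraction in the subitizing range (2-4 items) while leaving the estimation range largely unchanged.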

Burr, D. C. & Morrone, M. C. (2010). Vision: keeping the world still when the eyes move, Curr Biol, 10 (20), R442-444. PDF

A long-standing problem for visual science is how the world remains so apparently stable in the face of continual rapid eye movements. New experimental evidence and computational models are helping to solve this mystery.

Ross, J. & Burr, D. C. (2010). Vision senses number directly, J Vis, 2 (10), 10 11-18. PDF

We have recently suggested that numerosity is a primary sensory attribute, and shown that it is strongly susceptible to adaptation. Here we use the Method of Single Stimuli to show that observers can extract a running average of numerosity of a succession of stimuli to use as a standard of comparison for subsequent stimuli. On separate sessions observers judged whether the perceived numerosity or density of a particular trial was greater or less than the average of previous stimuli. Thresholds were as precise for this task as for explicit comparisons of test with standard stimuli. Importantly, we found no evidence that numerosity judgments are mediated by density. Under all conditions, judgements of numerosity were as precise as those of density. Thresholds in intermingled conditions, where numerosity varied unpredictably with density, were as precise as the blocked thresholds. Judgments in constant-density conditions were more precise than those in variable-density conditions, and numerosity judgements in conditions of constant-numerosity showed no tendency to follow density. We further report the novel finding that perceived numerosity increases with decreasing luminance, whereas texture density does not, further evidence for independent processing of the two attributes. All these measurements suggest that numerosity judgments can be, and are, made independently of judgments of the density of texture.

Gori, M., Sandini, G., Martinoli, C. & Burr, D. (2010). Poor haptic orientation discrimination in nonsighted children may reflect disruption of cross-sensory calibration, Curr Biol, 3 (20), 223-225. PDF

A long-standing question, going back at least 300 years to Berkeley's famous essay, is how sensory systems become calibrated with physical reality. We recently showed [1] that children younger than 8-10 years do not integrate visual and haptic information optimally, but that one or the other sense prevails: touch for size and vision for orientation discrimination. The sensory dominance may reflect crossmodal calibration of vision and touch, where the more accurate sense calibrates the other. This hypothesis leads to a clear prediction: that lack of clear vision at an early age should affect calibration of haptic orientation discrimination. We therefore measured size and orientation haptic discrimination thresholds in 17 congenitally visually impaired children (aged 5-19). Haptic orientation thresholds were greatly impaired compared with age-matched controls, whereas haptic size thresholds were at least as good, and often better. One child with a late-acquired visual impairment stood out with excellent orientation discrimination. The results provide strong support for our crossmodal calibration hypothesis.

Binda, P., Morrone, M. C. & Burr, D. C. (2010). Temporal auditory capture does not affect the time course of saccadic mislocalization of visual stimuli, J Vis, 2 (10), 7 1-13. PDF

Irrelevant sounds can "capture" visual stimuli to change their apparent timing, a phenomenon sometimes termed "temporal ventriloquism". Here we ask whether this auditory capture can alter the time course of spatial mislocalization of visual stimuli during saccades. We first show that during saccades, sounds affect the apparent timing of visual flashes, even more strongly than during fixation. However, this capture does not affect the dynamics of perisaccadic visual distortions. Sounds presented 50 ms before or after a visual bar (that change perceived timing of the bars by more than 40 ms) had no measurable effect on the time courses of spatial mislocalization of the bars, in four subjects. Control studies showed that with barely visible, low-contrast stimuli, leading, but not trailing, sounds can have a small effect on mislocalization, most likely attributable to attentional effects rather than auditory capture. These findings support previous studies showing that integration of multisensory information occurs at a relatively late stage of sensory processing, after visual representations have undergone the distortions induced by saccades.

2009 (back to top)

Burr, D., Silva, O., Cicchini, G. M., Banks, M. S. & Morrone, M. C. (2009). Temporal mechanisms of multimodal binding, Proc Biol Sci, 1663 (276), 1761-1769. PDF

The simultaneity of signals from different senses-such as vision and audition-is a useful cue for determining whether those signals arose from one environmental source or from more than one. To understand better the sensory mechanisms for assessing simultaneity, we measured the discrimination thresholds for time intervals marked by auditory, visual or auditory-visual stimuli, as a function of the base interval. For all conditions, both unimodal and cross-modal, the thresholds followed a characteristic 'dipper function' in which the lowest thresholds occurred when discriminating against a non-zero interval. The base interval yielding the lowest threshold was roughly equal to the threshold for discriminating asynchronous from synchronous presentations. Those lowest thresholds occurred at approximately 5, 15 and 75 ms for auditory, visual and auditory-visual stimuli, respectively. Thus, the mechanisms mediating performance with cross-modal stimuli are considerably slower than the mechanisms mediating performance within a particular sense. We developed a simple model with temporal filters of different time constants and showed that the model produces discrimination functions similar to the ones we observed in humans. Both for processing within a single sense, and for processing across senses, temporal perception is affected by the properties of temporal filters, the outputs of which are used to estimate time offsets, correlations between signals, and more.
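A back-of-envelope check makes the paper's point concrete. If audio-visual intervals were timed simply by pooling the two independent unimodal noise sources, the bimodal threshold should sit near the quadrature sum of the unimodal thresholds, nowhere near the 75 ms observed (the quadrature-sum baseline is our illustrative assumption, not the paper's temporal-filter model):

```python
import math

# Best (dipper-minimum) thresholds quoted above, in ms
t_auditory, t_visual, t_bimodal_observed = 5.0, 15.0, 75.0

# Independent-noise baseline: unimodal thresholds add in quadrature
t_predicted = math.hypot(t_auditory, t_visual)

print(round(t_predicted, 1))             # ~15.8 ms
print(t_bimodal_observed > 4 * t_predicted)
```

The observed cross-modal threshold is several times larger than this baseline, consistent with the paper's conclusion that cross-modal timing is mediated by considerably slower mechanisms.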

Binda, P., Cicchini, G. M., Burr, D. C. & Morrone, M. C. (2009). Spatiotemporal distortions of visual perception at the time of saccades, J Neurosci, 42 (29), 13147-13157. PDF

Both space and time are grossly distorted during saccades. Here we show that the two distortions are strongly linked, and that both could be a consequence of the transient remapping mechanisms that affect visual neurons perisaccadically. We measured perisaccadic spatial and temporal distortions simultaneously by asking subjects to report both the perceived spatial location of a perisaccadic vertical bar (relative to a remembered ruler), and its perceived timing (relative to two sounds straddling the bar). During fixation and well before or after saccades, bars were localized veridically in space and in time. In different epochs of the perisaccadic interval, temporal perception was subject to different biases. At about the time of the saccadic onset, bars were temporally mislocalized 50-100 ms later than their actual presentation and spatially mislocalized toward the saccadic target. Importantly, the magnitude of the temporal distortions co-varied with the spatial localization bias and the two phenomena had similar dynamics. Within a brief period about 50 ms before saccadic onset, stimuli were perceived with shorter latencies than at other delays relative to saccadic onset, suggesting that the perceived passage of time transiently inverted its direction. Based on this result we could predict the inversion of perceived temporal order for two briefly flashed visual stimuli. We developed a model that simulates the perisaccadic transient change of neuronal receptive fields predicting well the reported temporal distortions. The key aspects of the model are the dynamics of the "remapped" activity and the use of decoder operators that are optimal during fixation, but are not updated perisaccadically.

Burr, D. C., Baldassi, S., Morrone, M. C. & Verghese, P. (2009). Pooling and segmenting motion signals, Vision Res, 10 (49), 1065-1072. PDF

Humans are extremely sensitive to visual motion, largely because local motion signals can be integrated over a large spatial region. On the other hand, summation is often not advantageous, for example when segmenting a moving stimulus against a stationary or oppositely moving background. In this study we show that the spatial extent of motion integration is not compulsory, but is subject to voluntary attentional control. Measurements of motion coherence sensitivity with summation and search paradigms showed that human observers can combine motion signals from cued regions or patches in an optimal manner, even when the regions are quite distinct and remote from each other. Further measurements of contrast sensitivity reinforce previous studies showing that motion integration is preceded by a local analysis akin to contrast thresholding (or intrinsic uncertainty). The results were well modelled by two standard signal-detection-theory models.

Burr, D., Banks, M. S. & Morrone, M. C. (2009). Auditory dominance over vision in the perception of interval duration, Exp Brain Res, 1 (198), 49-57. PDF

The "ventriloquist effect" refers to the fact that vision usually dominates hearing in spatial localization, and this has been shown to be consistent with optimal integration of visual and auditory signals (Alais and Burr in Curr Biol 14(3):257-262, 2004). For temporal localization, however, auditory stimuli often "capture" visual stimuli, in what has become known as "temporal ventriloquism". We examined this quantitatively using a bisection task, confirming that sound does tend to dominate the perceived timing of audio-visual stimuli. The dominance was predicted qualitatively by considering the better temporal localization of audition, but the quantitative fit was less than perfect, with more weight being given to audition than predicted from thresholds. As predicted by optimal cue combination, the temporal localization of audio-visual stimuli was better than for either sense alone.
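The optimal cue-combination prediction invoked here has a standard closed form for two Gaussian cues: each cue is weighted by its reliability (inverse variance), and the combined variance is smaller than either alone. A minimal sketch, with illustrative numbers rather than the paper's data:

```python
def mle_combine(mu_a, sigma_a, mu_v, sigma_v):
    """Maximum-likelihood combination of two Gaussian cues:
    weights proportional to reliability (inverse variance)."""
    r_a, r_v = sigma_a ** -2, sigma_v ** -2
    mu = (r_a * mu_a + r_v * mu_v) / (r_a + r_v)
    sigma = (r_a + r_v) ** -0.5
    return mu, sigma

# Hypothetical bisection example: audition places the event at 0 ms with a
# 20 ms threshold, vision at 40 ms with a 40 ms threshold.
mu, sigma = mle_combine(0.0, 20.0, 40.0, 40.0)
print(round(mu, 1), round(sigma, 1))   # pulled toward audition, and more
                                       # precise than either cue alone
```

The abstract's point is that the measured auditory weights exceeded even this threshold-based prediction, while the bimodal precision did improve as the model predicts.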

Thompson, P. & Burr, D. (2009). Visual aftereffects, Curr Biol, 1 (19), R11-14. PDF

Arrighi, R., Marini, F. & Burr, D. (2009). Meaningful auditory information enhances perception of visual biological motion, J Vis, 4 (9), 25 21-27. PDF

Robust perception requires efficient integration of information from our various senses. Much recent electrophysiology points to neural areas responsive to multisensory stimulation, particularly audiovisual stimulation. However, psychophysical evidence for functional integration of audiovisual motion has been ambiguous. In this study we measure perception of an audiovisual form of biological motion, tap dancing. The results show that the audio tap information interacts with visual motion information, but only when in synchrony, demonstrating a functional combination of audiovisual information in a natural task. The advantage of multimodal combination was greater than predicted by optimal maximum likelihood combination.

2008 (back to top)

Burr, D. & Ross, J. (2008). A visual sense of number, Curr Biol, 6 (18), 425-428. PDF

Evidence exists for a nonverbal capacity for the apprehension of number, in humans [1] (including infants [2, 3]) and in other primates [4-6]. Here, we show that perceived numerosity is susceptible to adaptation, like primary visual properties of a scene, such as color, contrast, size, and speed. Apparent numerosity was decreased by adaptation to large numbers of dots and increased by adaptation to small numbers, the effect depending entirely on the numerosity of the adaptor, not on contrast, size, orientation, or pixel density, and occurring with very low adaptor contrasts. We suggest that the visual system has the capacity to estimate numerosity and that it is an independent primary visual property, not reducible to others like spatial frequency or density of texture [7].

Ross, J. & Burr, D. (2008). The knowing visual self, Trends Cogn Sci, 10 (12), 363-364. PDF

Like all information-processing systems, biological visual systems are limited by internal and external noise; but this noise never actually impinges on our conscious perception. An article recently published in the Journal of Vision suggests that, at least for orientation judgments, the visual system has access to its own noisiness and sets thresholds accordingly. This could well be a general principle in perception, with important and wide-ranging consequences.

Gori, M., Del Viva, M., Sandini, G. & Burr, D. C. (2008). Young children do not integrate visual and haptic form information, Curr Biol, 9 (18), 694-698. PDF

Several studies have shown that adults integrate visual and haptic information (and information from other modalities) in a statistically optimal fashion, weighting each sense according to its reliability [1, 2]. When does this capacity for crossmodal integration develop? Here, we show that prior to 8 years of age, integration of visual and haptic spatial information is far from optimal, with either vision or touch dominating totally, even in conditions in which the dominant sense is far less precise than the other (assessed by discrimination thresholds). For size discrimination, haptic information dominates in determining both perceived size and discrimination thresholds, whereas for orientation discrimination, vision dominates. By 8-10 years, the integration becomes statistically optimal, like adults. We suggest that during development, perceptual systems require constant recalibration, for which cross-sensory comparison is important. Using one sense to calibrate the other precludes useful combination of the two sources.

2007 (back to top)

Tozzi, A., Morrone, M. C. & Burr, D. C. (2007). The effect of optokinetic nystagmus on the perceived position of briefly flashed targets, Vision Res, 6 (47), 861-868. PDF

Stimuli flashed briefly around the time of an impending saccade are mislocalized in the direction of the saccade and also compressed towards the saccadic target. Similarly, targets flashed during pursuit eye movements are mislocalized in the direction of pursuit. Here, we investigate the effects of optokinetic nystagmus (OKN) on visual localization. Subjects passively viewed a wide-field drifting grating that elicited strong OKN, comprising the characteristic slow-phase tracking movement interspersed with corrective "saccade-like" fast-phase movements. Subjects reported the apparent position of salient bars flashed briefly at various positions on the screen. In general, bars were misperceived in the direction of the slow-phase tracking movement. Bars flashed around the onset of the fast-phase movements were subject to much less mislocalization, pointing to a competing shift in the direction of the fast-phase, as occurs with saccades. However, as distinct from saccades, there was no evidence for spatial compression around the time of the corrective fast-phase OKN. The results suggest that OKN causes perceptual mislocalizations similar to those of smooth pursuit and saccades, but there are some differences in the nature of the mislocalizations, pointing to different perceptual mechanisms associated with the different types of eye movements.

Pellicano, E., Jeffery, L., Burr, D. & Rhodes, G. (2007). Abnormal adaptive face-coding mechanisms in children with autism spectrum disorder, Curr Biol, 17 (17), 1508-1512. PDF

In low-level vision, exquisite sensitivity to variation in luminance is achieved by adaptive mechanisms that adjust neural sensitivity to the prevailing luminance level. In high-level vision, adaptive mechanisms contribute to our remarkable ability to distinguish thousands of similar faces [1]. A clear example of this sort of adaptive coding is the face-identity aftereffect [2, 3, 4, 5], in which adaptation to a particular face biases perception toward the opposite identity. Here we investigated face adaptation in children with autism spectrum disorder (ASD) by asking them to discriminate between two face identities, with and without prior adaptation to opposite-identity faces. The ASD group discriminated the identities with the same precision as did the age- and ability-matched control group, showing that face identification per se was unimpaired. However, children with ASD showed significantly less adaptation than did their typical peers, with the amount of adaptation correlating significantly with current symptomatology, and face aftereffects of children with elevated symptoms only one third those of controls. These results show that although children with ASD can learn a simple discrimination between two identities, adaptive face-coding mechanisms are severely compromised, offering a new explanation for previously reported face-perception difficulties [6, 7, 8] and possibly for some of the core social deficits in ASD [9, 10].

Burr, D., Tozzi, A. & Morrone, M. C. (2007). Neural mechanisms for timing visual events are spatially selective in real-world coordinates, Nat Neurosci, 4 (10), 423-425. PDF

It is generally assumed that perceptual events are timed by a centralized supramodal clock. This study challenges this notion in humans by providing clear evidence that visual events of subsecond duration are timed by visual neural mechanisms with spatially circumscribed receptive fields, localized in real-world, rather than retinal, coordinates.

d'Avossa, G., Tosetti, M., Crespi, S., Biagi, L., Burr, D. C. & Morrone, M. C. (2007). Spatiotopic selectivity of BOLD responses to visual motion in human area MT, Nat Neurosci, 2 (10), 249-255. PDF

Many neurons in the monkey visual extrastriate cortex have receptive fields that are affected by gaze direction. In humans, psychophysical studies suggest that motion signals may be encoded in a spatiotopic fashion. Here we use functional magnetic resonance imaging to study spatial selectivity in the human middle temporal cortex (area MT or V5), an area that is clearly implicated in motion perception. The results show that the response of MT is modulated by gaze direction, generating a spatial selectivity based on screen rather than retinal coordinates. This area could be the neurophysiological substrate of the spatiotopic representation of motion signals.

Ciaramelli, E., Leo, F., Del Viva, M. M., Burr, D. C. & Ladavas, E. (2007). The contribution of prefrontal cortex to global perception, Exp Brain Res, 3 (181), 427-434. PDF

Recent research suggests a role of top-down modulatory signals on perceptual processing, particularly for the integration of local elementary information to form a global holistic percept. In this study we investigated whether prefrontal cortex may be instrumental in this top-down modulation in humans. We measured detection thresholds for perceiving a circle defined by a closed chain of grating patches in 6 patients with prefrontal lesions, 4 control patients with temporal lesions and 17 healthy control subjects. Performance of patients with prefrontal lesions was worse than that of patients with temporal lesions and normal controls when the patterns were sparse, requiring integration across relatively extensive regions of space, but similar to the control groups for denser patterns. The results clearly implicate the prefrontal cortex in the process of integrating elementary features into a holistic global percept, when the elements do not form a "pop-out" display.

Chirimuuta, M., Burr, D. & Morrone, M. C. (2007). The role of perceptual learning on modality-specific visual attentional effects, Vision Res, 1 (47), 60-70. PDF

Morrone et al. [Morrone, M. C., Denti, V., & Spinelli, D. (2002). Color and luminance contrasts attract independent attention. Current Biology, 12, 1134-1137] reported that the detrimental effect on contrast discrimination thresholds of performing a concomitant task is modality specific: performing a secondary luminance task has no effect on colour contrast thresholds, and vice versa. Here we confirm this result with a novel task involving learning of spatial position, and go on to show that it is not specific to the cardinal colour axes: secondary tasks with red-green stimuli impede performance on a blue-yellow task and vice versa. We further show that the attentional effect can be abolished with continued training over 2-4 training days (2-20 training sessions), and that the effect of learning is transferable to new target positions. Given the finding of transference, we discuss the possibility that V4 is a site of plasticity for both stimulus types, and that the separation is due to a luminance-colour separation within this cortical area.

Binda, P., Bruno, A., Burr, D. C. & Morrone, M. C. (2007). Fusion of visual and auditory stimuli during saccades: a Bayesian explanation for perisaccadic distortions, J Neurosci, 32 (27), 8525-8532. PDF

Brief stimuli presented near the onset of saccades are grossly mislocalized in space. In this study, we investigated whether the Bayesian hypothesis of optimal sensory fusion could account for the mislocalization. We required subjects to localize visual, auditory, and audiovisual stimuli at the time of saccades (compared with an earlier presented target). During fixation, vision dominates and spatially "captures" the auditory stimulus (the ventriloquist effect). But for perisaccadic presentations, auditory localization becomes more important, so the mislocalized visual stimulus is seen closer to its veridical position. The precision of the bimodal localization (as measured by localization thresholds or just-noticeable difference) was better than either the visual or acoustic stimulus presented in isolation. Both the perceived position of the bimodal stimuli and the improved precision were well predicted by assuming statistically optimal Bayesian-like combination of visual and auditory signals. Furthermore, the time course of localization was well predicted by the Bayesian approach. We present a detailed model that simulates the time-course data, assuming that perceived position is given by the sum of retinal position and a sluggish noisy eye-position signal, obtained by integrating optimally the output of two populations of neural activity: one centered at the current point of gaze, the other centered at the future point of gaze.

2006 (back to top)

Alais, D., Morrone, C. & Burr, D. (2006). Separate attentional resources for vision and audition, Proc Biol Sci, 1592 (273), 1339-1345. PDF

Current models of attention typically claim that vision and audition are limited by a common attentional resource, which means that visual performance should be adversely affected by a concurrent auditory task and vice versa. Here, we test this implication by measuring auditory (pitch) and visual (contrast) thresholds in conjunction with cross-modal secondary tasks and find that no such interference occurs. Visual contrast discrimination thresholds were unaffected by a concurrent chord or pitch discrimination, and pitch-discrimination thresholds were virtually unaffected by a concurrent visual search or contrast discrimination task. However, if the dual tasks were presented within the same modality, thresholds were raised by a factor of between two (for visual discrimination) and four (for auditory discrimination). These results suggest that at least for low-level tasks such as discriminations of pitch and contrast, each sensory modality is under separate attentional control, rather than being limited by a supramodal attentional resource. This has implications for current theories of attention as well as for the use of multi-sensory media for efficient informational transmission.

Burr, D. & Morrone, C. (2006). Perception: transient disruptions to neural space-time, Curr Biol, 19 (16), R847-849. PDF

How vision operates efficiently in the face of continuous shifts of gaze remains poorly understood. Recent studies show that saccades cause dramatic, but transient, changes in the spatial and also temporal tuning of cells in many visual areas, which may underlie the perceptual compression of space and time, and serve to counteract the effects of the saccades and maintain visual stability.

Burr, D., McKee, S. & Morrone, C. M. (2006). Resolution for spatial segregation and spatial localization by motion signals, Vision Res, 6-7 (46), 932-939. PDF

We investigated two types of spatial resolution for perceiving motion-defined contours: grating acuity, the capacity to discriminate alternating stripes of opposed motion from transparent bi-directional motion; and alignment acuity, the capacity to localize the position of motion-defined edges with respect to stationary markers. For both tasks the stimuli were random noise patterns, low-pass filtered in the spatial dimension parallel to the motion. Both grating and alignment resolution varied systematically with spatial frequency cutoff and speed. Best performance for grating resolution was about 10 c/deg (for unfiltered patterns moving at 1-4 deg/s), corresponding to a stripe resolution of about 3'. Grating resolution corresponds well to estimates of smallest receptive field size of motion units under these conditions, suggesting that opposing signals from units with small receptive fields (probably located in V1) are contrasted efficiently to define edges. Alignment resolution was about 2' at best, under similar conditions. Whereas alignment judgment based on luminance-defined edges is typically 3-10 times better than resolution, alignment based on motion-defined edges is only 1.1-1.5 times better, suggesting motion contours are less effectively encoded than luminance contours.

Burr, D. & Ross, J. (2006). The effects of opposite-polarity dipoles on the detection of Glass patterns, Vision Res, 6-7 (46), 1139-1144. PDF

Glass patterns--randomly positioned coherently orientated dipoles--create a strong sensation of oriented spatial structure. On the other hand, coherently oriented dipoles comprising dots of opposite polarity ("anti-Glass" patterns) have no distinct spatial structure and are very hard to distinguish from random noise. Although anti-Glass patterns have no obvious spatial structure themselves, their presence can destroy the structure created by Glass patterns. We measured the strength of this effect for both static and dynamic Glass patterns, and showed that anti-Glass patterns can raise thresholds for Glass patterns by a factor of 2-4, increasing with density. The dependence on density suggests that the interactions occur at a local level. When the Glass and anti-Glass dipoles were confined to alternate strips (in translational and circular Glass patterns), the detrimental effect occurred for stripe widths less than about 1.5 degrees, but had little effect for larger stripe widths, reinforcing the suggestion that the interaction occurred over a limited spatial extent. The extent of spatial interaction was much less than that for spatial summation of these patterns, at least 30 degrees under matched experimental conditions. The results suggest two stages of analysis for Glass patterns, an early stage of limited spatial extent where orientation is extracted, and a later stage that sums these orientation signals.

Burr, D. & Alais, D. (2006). Combining visual and auditory information, Prog Brain Res, (155), 243-258. PDF

Robust perception requires that information from our five different senses be combined at some central level to produce a single unified percept of the world. Recent theory and evidence from many laboratories suggest that the combination does not occur in a rigid, hardwired fashion, but follows flexible situation-dependent rules that allow information to be combined with maximal efficiency. In this review we discuss recent evidence from our laboratories investigating how information from auditory and visual modalities is combined. The results support the notion of Bayesian combination. We also examine temporal alignment of auditory and visual signals, and show that perceived simultaneity does not depend solely on neural latencies, but involves active processes that compensate, for example, for the physical delay introduced by the relatively slow speed of sound. Finally, we go on to show that although visual and auditory information is combined to maximize efficiency, attentional resources for the two modalities are largely independent.

Burr, D. & Morrone, C. (2006). Time perception: space-time in the brain, Curr Biol, 5 (16), R171-173. PDF

Arrighi, R., Alais, D. & Burr, D. (2006). Perceptual synchrony of audiovisual streams for natural and artificial motion sequences, J Vis, 3 (6), 260-268. PDF

We investigated the conditions necessary for perceptual simultaneity of visual and auditory stimuli under natural conditions: video sequences of conga drumming at various rhythms. Under most conditions, the auditory stream needs to be delayed for sight and sound to be perceived simultaneously. The size of delay for maximum perceived simultaneity varied inversely with drumming tempo, from about 100 ms at 1 Hz to 30 ms at 4 Hz. Random drumming motion produced similar results, with higher random tempos requiring less delay. Video sequences of disk stimuli moving along a motion profile matched to the drummer produced near-identical results. When the disks oscillated at constant speed rather than following "biological" speed variations, the delays necessary for perceptual synchrony were systematically less. The results are discussed in terms of real-world constraints for perceptual synchrony and possible neural mechanisms.

Baldassi, S., Megna, N. & Burr, D. C. (2006). Visual clutter causes high-magnitude errors, PLoS Biol, 3 (4), e56. PDF

Perceptual decisions are often made in cluttered environments, where a target may be confounded with competing "distractor" stimuli. Although many studies and theoretical treatments have highlighted the effect of distractors on performance, it remains unclear how they affect the quality of perceptual decisions. Here we show that perceptual clutter leads not only to an increase in judgment errors, but also to an increase in perceived signal strength and decision confidence on erroneous trials. Observers reported simultaneously the direction and magnitude of the tilt of a target grating presented either alone, or together with vertical distractor stimuli. When presented in isolation, observers perceived isolated targets as only slightly tilted on error trials, and had little confidence in their decision. When the target was embedded in distractors, however, they perceived it to be strongly tilted on error trials, and had high confidence of their (erroneous) decisions. The results are well explained by assuming that the observers' internal representation of stimulus orientation arises from a nonlinear combination of the outputs of independent noise-perturbed front-end detectors. The implication that erroneous perceptual decisions in cluttered environments are made with high confidence has many potential practical consequences, and may be extendable to decision-making in general.
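The nonlinear combination proposed here can be caricatured with a max-rule simulation over noisy detectors (our simplification, not the paper's fitted model: Gaussian detector noise, and the reported tilt taken from the largest-magnitude response):

```python
import random

random.seed(1)

def mean_error_magnitude(target_tilt, n_distractors, noise_sd=1.0, trials=20000):
    """Simulate a max-like readout: the reported tilt is the detector output
    with the largest magnitude. Returns the mean |reported tilt| on error
    trials (trials where the winning response has the wrong sign)."""
    errors = []
    for _ in range(trials):
        responses = [target_tilt + random.gauss(0, noise_sd)]          # target detector
        responses += [random.gauss(0, noise_sd) for _ in range(n_distractors)]
        winner = max(responses, key=abs)
        if (winner > 0) != (target_tilt > 0):   # wrong-direction decision
            errors.append(abs(winner))
    return sum(errors) / len(errors)

# With distractors, errors arise from large noise excursions,
# so the erroneous tilt is perceived as strong
print(mean_error_magnitude(0.5, 0) < mean_error_magnitude(0.5, 8))   # True
```

On error trials the winning response comes from a large noise excursion among the distractors, so the wrong answer arrives with a large perceived magnitude, matching the high-confidence errors reported in the abstract.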
