| Literature DB >> 27752088 |
Laurent Caplette, Bruno Wicker, Frédéric Gosselin.
Abstract
In neurotypical observers, it is widely believed that the visual system samples the world in a coarse-to-fine fashion. Past studies on Autism Spectrum Disorder (ASD) have identified atypical responses to fine visual information but did not investigate the time course of the sampling of information at different levels of granularity (i.e. spatial frequencies, SFs). Here, we examined this question during an object recognition task in ASD and neurotypical observers using a novel experimental paradigm. Our results confirm and characterize with unprecedented precision a coarse-to-fine sampling of SF information in neurotypical observers. In ASD observers, we discovered a different pattern of SF sampling across time: in the first 80 ms, high SFs led ASD observers to higher accuracy than neurotypical observers, and these SFs were sampled differently across time in the two subject groups. Our results might be related to the absence of a mandatory precedence of global information, and to top-down processing abnormalities in ASD.
Year: 2016 PMID: 27752088 PMCID: PMC5067503 DOI: 10.1038/srep35494
Source DB: PubMed Journal: Sci Rep ISSN: 2045-2322 Impact factor: 4.379
Figure 1. Use of SFs across time in each group and differences between the groups.
Upper panel: One-sample t maps illustrating how each SF in each time frame correlates with accurate object recognition, and one-sample t vectors indicating the slope of use of each SF across time, for the neurotypical and ASD groups. Lower panel: Two-sample t map illustrating the between-group differences in the use of each SF in each time frame, and two-sample t vector indicating the between-group differences in the slope of use of each SF across time. Pixels enclosed by black lines and bold portions of the vectors are significant (p < 0.05, FWER-corrected). Note that since the width of all images subtended 6 degrees of visual angle, cycles per image can be converted to cycles per degree by dividing by 6. Note also that only statistics up to 64 cycles per image (cpi) are shown; there are no statistically significant results in the portions not shown. Note finally that the color axes differ between panels.
Figure 2. Illustration of the sampling method.
On each trial, we randomly generated a matrix of dimensions 256 × 40 (representing respectively SFs and frames) in which most elements were zeros and a few were ones. We then convolved this sparse matrix with a 2D Gaussian kernel (a “bubble”). This resulted in the trial’s sampling matrix, shown here as a plane with a number of randomly located bubbles. Every column of this sampling matrix was then rotated around its origin to create isotropic 2D random filters. Finally, these 2D random filters were dot-multiplied by the base image’s spectrum and inverse fast Fourier transformed to create a filtered version of the image for every video frame.
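The steps described in this caption can be sketched in NumPy. This is a minimal illustration, not the authors' code: the 256 × 40 matrix dimensions come from the caption, while the bubble density, the Gaussian kernel widths, and the 256 × 256 image size are assumptions chosen for the sketch.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
N_SF, N_FRAMES, IMG_SIZE = 256, 40, 256  # SFs x frames per caption; image size assumed

# Sparse matrix: mostly zeros, a few ones at random (SF, frame) locations.
sampling = (rng.random((N_SF, N_FRAMES)) < 0.001).astype(float)

# Convolve with a 2D Gaussian kernel to create the "bubbles".
# The sigmas (SF axis, time axis) are illustrative assumptions.
sampling = gaussian_filter(sampling, sigma=(10.0, 2.0))

# Placeholder base image and its (centred) Fourier spectrum.
base = rng.random((IMG_SIZE, IMG_SIZE))
spectrum = np.fft.fftshift(np.fft.fft2(base))

# Radial distance of each spectrum pixel from the origin (DC component):
# indexing the 1D SF profile by this radius "rotates it around its origin",
# yielding an isotropic 2D filter.
c = IMG_SIZE // 2
yy, xx = np.indices((IMG_SIZE, IMG_SIZE))
radius = np.sqrt((yy - c) ** 2 + (xx - c) ** 2)
radius_idx = np.clip(np.round(radius).astype(int), 0, N_SF - 1)

frames = []
for t in range(N_FRAMES):
    filt2d = sampling[:, t][radius_idx]          # isotropic 2D random filter
    filtered = spectrum * filt2d                  # dot-multiply the spectrum
    frame = np.fft.ifft2(np.fft.ifftshift(filtered)).real  # inverse FFT
    frames.append(frame)

video = np.stack(frames)  # one filtered image per video frame: (40, 256, 256)
```

Per the caption, each column of the smoothed matrix acts as a 1D SF profile for one frame; indexing it by radial distance is one common way to build the isotropic 2D filter that is then applied in the Fourier domain.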