| Literature DB >> 35583972 |
Polina Iamshchinina, Agnessa Karapetian, Daniel Kaiser, Radoslaw M Cichy.
Abstract
Humans can effortlessly categorize objects, whether they are conveyed through visual images or spoken words. To resolve the neural correlates of object categorization, studies have so far primarily focused on the visual modality. It therefore remains unclear how the brain extracts categorical information from auditory signals. In the current study, we used EEG (n = 48) and time-resolved multivariate pattern analysis to investigate 1) the time course with which object category information emerges in the auditory modality and 2) how the representational transition from individual-object identification to category representation compares between the auditory and visual modalities. Our results show that 1) auditory object category representations can be reliably extracted from EEG signals and 2) a similar representational transition occurs in the visual and auditory modalities, where an initial representation at the individual-object level is followed by a subsequent representation of the objects' category membership. Altogether, our results suggest an analogous hierarchy of information processing across sensory channels. However, there was no convergence toward conceptual modality-independent representations, thus providing no evidence for a shared supramodal code.

NEW & NOTEWORTHY Object categorization operates on inputs from different sensory modalities, such as vision and audition. This process has mainly been studied in vision. Here, we explore auditory object categorization. We show that auditory object category representations can be reliably extracted from EEG signals and that, as in vision, auditory representations initially carry information about individual objects, followed by a subsequent representation of the objects' category membership.

Entities:
Keywords: EEG; MVPA; auditory modality; object categorization; visual modality
Mesh:
Year: 2022 PMID: 35583972 PMCID: PMC9190735 DOI: 10.1152/jn.00515.2021
Source DB: PubMed Journal: J Neurophysiol ISSN: 0022-3077 Impact factor: 2.974
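The study's core analysis is time-resolved multivariate pattern analysis (MVPA): a classifier is trained and tested independently at each time point of the EEG epoch, so that decoding accuracy traces when object or category information becomes available. The sketch below illustrates the general technique on simulated data; the array shapes, the linear SVM, and the 5-fold cross-validation are illustrative assumptions, not the authors' actual pipeline.

```python
# Minimal sketch of time-resolved MVPA decoding on EEG-like data.
# Everything here (shapes, classifier, labels) is a hypothetical stand-in,
# not the published analysis pipeline.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 200, 64, 120   # hypothetical dimensions
X = rng.standard_normal((n_trials, n_channels, n_times))
y = rng.integers(0, 2, n_trials)               # e.g., two object categories

# Inject a weak class difference after "stimulus onset" so decoding rises.
X[y == 1, :, 40:] += 0.15

clf = make_pipeline(StandardScaler(), LinearSVC())
accuracy = np.array([
    cross_val_score(clf, X[:, :, t], y, cv=5).mean()  # one classifier per time point
    for t in range(n_times)
])
print("peak decoding accuracy: %.2f" % accuracy.max())
```

In the paper, significance of such accuracy time courses against chance is then assessed with permutation tests corrected for multiple comparisons (see Fig. 2).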
Figure 1. Experimental design. A: the stimulus set consisted of 48 objects belonging to 3 categorical divisions. In the visual runs, participants viewed images of these objects, whereas in the auditory runs, they heard the names of the objects. B: in both visual (left) and auditory (right) runs, participants were presented with a random sequence of stimuli. Their task was to press a button when two consecutive stimuli were identical (one-back task).
Figure 2. Classification results. A: object information time course in the visual modality. B: category information time course in the visual modality, averaged across decoding results obtained for each pair of categorical divisions. C: object information time course in the auditory modality. D: category information time course in the auditory modality, averaged across decoding results obtained for each pair of categorical divisions. E: category information time course, where classifiers were trained on one modality and tested on the other. Results are averaged over both train/test directions. F: time generalization results for category information, where classifiers were trained on one modality and tested on the other. Results are averaged over both train/test directions. The onset of stimulus presentation is at 0 ms. Note the different scaling across modalities. Error bars in A–E denote between-participant SE. Rows of asterisks in A–D indicate significant time points (one-sided permutation test, P < 0.05, corrected for multiple comparisons).
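Figure 2, E and F, rest on cross-modal decoding with temporal generalization: a classifier trained at one time point in one modality is tested at every time point of the other modality, yielding a time-by-time accuracy matrix. Below is a minimal sketch of that procedure on simulated data; all variable names, dimensions, and the choice of an LDA classifier are assumptions for illustration, not the authors' implementation.

```python
# Hedged sketch of cross-modal temporal generalization (cf. Fig. 2, E and F):
# train at each time point on "visual" epochs, test at every time point of
# "auditory" epochs. Data and shapes are simulated placeholders.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
n_trials, n_channels, n_times = 100, 64, 80     # hypothetical dimensions
X_visual = rng.standard_normal((n_trials, n_channels, n_times))
X_auditory = rng.standard_normal((n_trials, n_channels, n_times))
y = rng.integers(0, 2, n_trials)                # shared category labels

# time x time generalization matrix:
# rows = training time (visual), columns = testing time (auditory)
gen = np.zeros((n_times, n_times))
for t_train in range(n_times):
    clf = LinearDiscriminantAnalysis().fit(X_visual[:, :, t_train], y)
    for t_test in range(n_times):
        gen[t_train, t_test] = clf.score(X_auditory[:, :, t_test], y)

# The paper averages both train/test directions; the reverse direction
# (train on auditory, test on visual) would be computed analogously.
print("generalization matrix shape:", gen.shape)
```

The diagonal of such a matrix corresponds to training and testing at matched latencies, while off-diagonal generalization would indicate representations shared across time; above-chance cross-modal accuracy would indicate a supramodal code, which the abstract reports was not found.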