| Literature DB >> 28801255 |
Satoshi Nishida, Shinji Nishimoto.
Abstract
Natural visual scenes induce rich perceptual experiences that are highly diverse from scene to scene and from person to person. Here, we propose a new framework for decoding such experiences using a distributed representation of words. We used functional magnetic resonance imaging (fMRI) to measure brain activity evoked by natural movie scenes. Then, we constructed a high-dimensional feature space of perceptual experiences using skip-gram, a state-of-the-art distributed word embedding model. We built a decoder that associates brain activity with perceptual experiences via the distributed word representation. The decoder successfully estimated perceptual contents consistent with the scene descriptions by multiple annotators. Our results illustrate three advantages of our decoding framework: (1) three types of perceptual contents could be decoded in the form of nouns (objects), verbs (actions), and adjectives (impressions) contained in 10,000 vocabulary words; (2) despite using such a large vocabulary, we could decode novel words that were absent in the datasets used to train the decoder; and (3) the inter-individual variability of the decoded contents co-varied with that of the contents of scene descriptions. These findings suggest that our decoding framework can recover diverse aspects of perceptual experiences in naturalistic situations and could be useful in various scientific and practical applications.
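The decoding pipeline described above (a linear mapping from brain activity into a skip-gram embedding space, followed by vocabulary ranking) can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the word vectors, data dimensions, regularization parameter, and simulated fMRI responses are all hypothetical stand-ins (the paper uses real skip-gram vectors over a 10,000-word vocabulary and a regularized linear regression fit to measured fMRI data).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for skip-gram word vectors over a tiny vocabulary
# (the paper uses 10,000 words; dimensions here are illustrative).
vocab = ["dog", "run", "bright", "car", "talk"]
n_dims = 50
word_vecs = rng.standard_normal((len(vocab), n_dims))
word_vecs /= np.linalg.norm(word_vecs, axis=1, keepdims=True)

# Simulated training data: fMRI responses X (scenes x voxels) generated
# from the embeddings Y (scenes x dims) of each scene's annotation, plus noise.
n_scenes, n_voxels = 200, 100
W_true = rng.standard_normal((n_voxels, n_dims))
Y = rng.standard_normal((n_scenes, n_dims))
X = Y @ W_true.T + 0.1 * rng.standard_normal((n_scenes, n_voxels))

# Ridge-regression decoder: map brain activity into the embedding space.
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(n_voxels), X.T @ Y)

# Decode one scene: project its response into the embedding space, then
# rank vocabulary words by cosine similarity. Because scoring happens in
# the shared embedding space, words absent from the training annotations
# can still be decoded (the zero-shot property noted in the abstract).
y_hat = X[0] @ W
y_hat /= np.linalg.norm(y_hat)
scores = word_vecs @ y_hat
top_word = vocab[int(np.argmax(scores))]
```

The key design point this sketch illustrates is that the decoder never predicts words directly; it predicts a point in the embedding space, so the vocabulary can be swapped or extended after training without refitting.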
Keywords: Decoding; Humans; Natural language processing; Natural vision; Semantic perception; fMRI
Year: 2017 PMID: 28801255 DOI: 10.1016/j.neuroimage.2017.08.017
Source DB: PubMed Journal: Neuroimage ISSN: 1053-8119 Impact factor: 6.556