| Literature DB >> 29705690 |
Haiguang Wen, Junxing Shi, Wei Chen, Zhongming Liu.
Abstract
Recent studies have shown the value of using deep learning models for mapping and characterizing how the brain represents and organizes information for natural vision. However, modeling the relationship between deep learning models and the brain (i.e., building encoding models) requires measuring cortical responses to large and diverse sets of natural visual stimuli from single subjects. This requirement has limited prior studies to a few subjects, making it difficult to generalize findings across subjects or to a population. In this study, we developed new methods to transfer and generalize encoding models across subjects. To train encoding models specific to a target subject, the models trained for other subjects were used as priors and were refined efficiently using Bayesian inference with a limited amount of data from the target subject. To train encoding models for a population, the models were progressively trained and updated with incremental data from different subjects. As a proof of principle, we applied these methods to functional magnetic resonance imaging (fMRI) data from three subjects watching tens of hours of naturalistic videos, while a deep residual neural network driven by image recognition was used to model visual cortical processing. Results demonstrate that the methods developed herein provide an efficient and effective strategy to establish both subject-specific and population-wide predictive models of cortical representations of high-dimensional and hierarchical visual features.
Keywords: Bayesian inference; Deep learning; Incremental learning; Natural vision; Neural encoding
Year: 2018 PMID: 29705690 PMCID: PMC5976558 DOI: 10.1016/j.neuroimage.2018.04.053
Source DB: PubMed Journal: Neuroimage ISSN: 1053-8119 Impact factor: 6.556
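The Bayesian transfer strategy summarized in the abstract — refining weights transferred from other subjects with limited data from a target subject — can be illustrated as a maximum a posteriori (MAP) update of a linear encoding model under a Gaussian prior centered on the transferred weights. This is a minimal sketch, not the paper's actual implementation; the function name, parameters, and the choice of an isotropic prior are assumptions for illustration.

```python
import numpy as np

def bayesian_refine(X, y, w_prior, lam=1.0, sigma2=1.0):
    """MAP estimate of encoding weights for one voxel, given a Gaussian
    prior centered on weights transferred from other subjects.

    X       : (n_samples, n_features) stimulus features (e.g., CNN-layer
              activations) measured for the target subject
    y       : (n_samples,) the voxel's fMRI response in the target subject
    w_prior : (n_features,) prior weights transferred from other subjects
    lam     : prior precision -- how strongly to trust the transferred model
    sigma2  : assumed noise variance of the target subject's responses

    Solves (X^T X / sigma2 + lam*I) w = X^T y / sigma2 + lam * w_prior,
    i.e., ridge regression shrunk toward w_prior instead of toward zero.
    """
    n_features = X.shape[1]
    A = X.T @ X / sigma2 + lam * np.eye(n_features)
    b = X.T @ y / sigma2 + lam * w_prior
    return np.linalg.solve(A, b)
```

With abundant target-subject data the likelihood term dominates and the estimate approaches ordinary least squares; with scarce data the estimate falls back toward the transferred prior, which is the intuition behind training subject-specific models efficiently from limited recordings.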