| Literature DB >> 33654579 |
Yixue Feng, Kefei Liu, Mansu Kim, Qi Long, Xiaohui Yao, Li Shen.
Abstract
We present an effective deep multiview learning framework to identify population structure using multimodal imaging data. Our approach is based on canonical correlation analysis (CCA). We propose to use deep generalized CCA (DGCCA) to learn a shared latent representation of non-linearly mapped and maximally correlated components from multiple imaging modalities with reduced dimensionality. In our empirical study, this representation is shown to capture more variance in the original data than conventional generalized CCA (GCCA), which applies only linear transformations to the multiview data. Furthermore, subsequent cluster analysis on the new feature set learned from DGCCA identifies a promising population structure in an Alzheimer's disease (AD) cohort. Genetic association analyses of the clustering results demonstrate that the shared representation learned from DGCCA yields a population structure with a stronger genetic basis than several competing feature learning methods.
Keywords: Deep learning; deep generalized canonical correlation analysis; image-driven population structure; multimodal imaging; multiview learning
Year: 2020 PMID: 33654579 PMCID: PMC7917002 DOI: 10.1109/bibe50027.2020.00057
Source DB: PubMed Journal: Proc IEEE Int Symp Bioinformatics Bioeng ISSN: 2159-5410