Mingxia Liu, Jun Zhang, Pew-Thian Yap, Dinggang Shen.
Abstract
Effectively utilizing incomplete multi-modality data for the diagnosis of Alzheimer's disease (AD) is still an area of active research. Several multi-view learning methods have recently been developed to deal with missing data, with each view corresponding to a specific modality or a combination of several modalities. However, existing methods usually ignore the underlying coherence among views, which may lead to suboptimal learning performance. In this paper, we propose a view-aligned hypergraph learning (VAHL) method to explicitly model the coherence among views. Specifically, we first divide the original data into several views based on possible combinations of modalities, followed by a sparse representation based hypergraph construction process in each view. A view-aligned hypergraph classification (VAHC) model is then proposed, using a view-aligned regularizer to model the view coherence. We further assemble the class probability scores generated by VAHC via a multi-view label fusion method to make a final classification decision. We evaluate our method on the baseline ADNI-1 database with 807 subjects and three modalities (i.e., MRI, PET, and CSF). Our method achieves at least a 4.6% improvement in classification accuracy compared with state-of-the-art methods for AD/MCI diagnosis.
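The sparse representation based hypergraph construction described in the abstract can be sketched as follows. This is an illustrative approximation only, not the authors' implementation: the paper's sparse representation step presumably uses an l1-regularized solver, which is stood in for here by a simple orthogonal matching pursuit over the remaining samples; the function names and the number of selected neighbors `k` are hypothetical.

```python
import numpy as np

def omp_sparse_code(X, i, k=3):
    """Approximate the sparse representation of sample i over the other
    samples via orthogonal matching pursuit (a stand-in for the paper's
    l1-based sparse coding). X has one sample per column."""
    y = X[:, i]
    D = np.delete(X, i, axis=1)  # dictionary = all other samples
    idx, residual = [], y.copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in idx:
            idx.append(j)
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        residual = y - D[:, idx] @ coef
    # map dictionary column indices back to original sample indices
    return [j if j < i else j + 1 for j in idx]

def hypergraph_incidence(X, k=3):
    """Build an n x n hypergraph incidence matrix H with one hyperedge
    per sample: the sample itself plus the samples its sparse code
    selects. H[v, e] = 1 means vertex v belongs to hyperedge e."""
    n = X.shape[1]
    H = np.zeros((n, n))
    for i in range(n):
        H[i, i] = 1.0
        for j in omp_sparse_code(X, i, k):
            H[j, i] = 1.0
    return H
```

In the full method, one such incidence matrix would be built per view (i.e., per modality combination), and the view-aligned regularizer would then couple the classification scores across these per-view hypergraphs.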
Year: 2016 PMID: 28066842 PMCID: PMC5207479 DOI: 10.1007/978-3-319-46720-7_36
Source DB: PubMed Journal: Med Image Comput Comput Assist Interv