| Literature DB >> 36059988 |
Xue Ran, Junyi Shi, Yalan Chen, Kui Jiang.
Abstract
Neuroimaging has been widely used as a diagnostic technique for brain diseases. With the development of artificial intelligence, neuroimaging analysis using intelligent algorithms can capture more image feature patterns than diagnosis based on human experience alone. However, using only a single neuroimaging technique, e.g., magnetic resonance imaging, may omit significant patterns that are highly relevant to the clinical target. Combining different types of neuroimaging techniques to provide multimodal data for joint diagnosis has therefore received extensive attention and research in the area of personalized medicine. In this study, based on the regularized label relaxation linear regression model, we propose a multikernel version for multimodal data fusion. The proposed method inherits the merits of the regularized label relaxation linear regression model and also has its own advantages: it can explore complementary patterns across different modalities and pay more attention to the modalities that carry more significant patterns. In the experimental study, the proposed method is evaluated in the scenario of Alzheimer's disease diagnosis. The promising results indicate that multimodality fusion via multikernel learning outperforms any single modality. Moreover, the decreased squared difference between training and testing performance indicates that overfitting is reduced and hence generalization ability is improved.
Keywords: magnetic resonance imaging; multikernel learning; multimodal data fusion; neuroimaging; personalized medicine; positron emission tomography
Year: 2022 PMID: 36059988 PMCID: PMC9428611 DOI: 10.3389/fphar.2022.947657
Source DB: PubMed Journal: Front Pharmacol ISSN: 1663-9812 Impact factor: 5.988
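The core idea of the abstract — fusing sMRI and PET by combining one kernel per modality with modality weights — can be sketched as follows. This is a minimal, hypothetical illustration using kernel ridge regression as a stand-in for the paper's regularized label relaxation model; the data, kernel choice, weights `beta`, and all parameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gram matrix of an RBF kernel between the rows of X and the rows of Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

def fused_kernel(views_a, views_b, weights, gamma=1.0):
    # Multikernel fusion: convex combination K = sum_m beta_m * K_m,
    # one kernel per modality (e.g., sMRI and PET).
    return sum(w * rbf_kernel(Xa, Xb, gamma)
               for w, Xa, Xb in zip(weights, views_a, views_b))

def kernel_ridge_fit(K, y, lam=0.1):
    # Dual ridge-regression solution: alpha = (K + lam * I)^{-1} y.
    return np.linalg.solve(K + lam * np.eye(len(y)), y)

rng = np.random.default_rng(0)
n = 40
X_mri = rng.normal(size=(n, 5))         # stand-in for sMRI features
X_pet = rng.normal(size=(n, 3))         # stand-in for PET features
y = np.sign(X_mri[:, 0] + X_pet[:, 0])  # synthetic +/-1 labels

beta = [0.6, 0.4]  # modality weights; learned jointly in the paper's model
K_train = fused_kernel([X_mri, X_pet], [X_mri, X_pet], beta)
alpha = kernel_ridge_fit(K_train, y)
train_acc = (np.sign(K_train @ alpha) == y).mean()
```

In the paper's setting, the weights `beta` would be optimized together with the regression coefficients, letting the model emphasize the modality whose kernel carries the more discriminative patterns.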
Representative works of multimodality fusion.
| Categories | Authors | Modalities | Methodologies |
|---|---|---|---|
| Pixel-level | | MRI, PET | A model based on integrated intensity-hue-saturation and a retina-inspired model was proposed to improve fusion performance |
| | | SPECT, MRI | A multiscale combination of MR and SPECT images based on variable weights |
| | | MRI, PET | A framework for spatially registered multimodal medical image fusion based on the nonsubsampled contourlet transform |
| Decision-level | | MRI | A random forest feature selection, fusion, and ensemble strategy was applied to the classification and prediction of AD |
| | | MRI, PET | An SVM-based ensemble method was proposed, using bilateral hippocampus volume and bilateral entorhinal cortex volume as core features for AD prediction |
| | | sMRI, PET, CSF | An SVM-based ensemble classification model was built on the combined features of sMRI, PET, and CSF for AD prediction |
| Feature-level | | MRI, PET | A deep multimodal fusion network with an attention mechanism, able to selectively extract deep features from MRI and PET, was proposed to predict AD |
| | | MRI, PET | High-level latent and shared feature representations were extracted and fused from neuroimaging data |
| | | MRI, PET | Texture and morphological features were fused as a biomarker to diagnose AD, with SVM as the classifier |
FIGURE 1 Data preprocessing: (A) magnetic resonance imaging (MRI) and (B) positron emission tomography (PET).
FIGURE 2 “All-single” fusion strategy.
FIGURE 3 Workflow of training.
Parameter settings.
| Methods | Parameter settings |
|---|---|
| RR | The regularized parameter was searched from 0.0001 to 1 |
| Our method | The regularized parameter |
| MV-TSK-FS | We use the parameter settings recommended by the original references |
| simpleMKL | We use the parameter settings recommended by the original references |
| RFF-MKL | We use the parameter settings recommended by the original references |
| MV-L2-SVM | |
FIGURE 4 Model selection for each single modality: (A) sMRI and (B) PET.
FIGURE 5 Model selection for combined features.
FIGURE 6 Performance comparison of sMRI, PET, and their combination.
Comparison with state-of-the-art multimodality methods in terms of accuracy and AUC.
| Methods | Accuracy | AUC |
|---|---|---|
| MV-TSK-FS | 0.9236 ± 0.0058* | 0.8897 ± 0.0032* |
| simpleMKL | 0.9454 ± 0.0047* | 0.9059 ± 0.0063* |
| RFF-MKL | 0.9402 ± 0.0025* | 0.8987 ± 0.0036* |
| MV-L2-SVM | 0.9489 ± 0.0046* | 0.9021 ± 0.0047* |
| Our method | | |
Bold indicates the best performance.
FIGURE 7 Squared difference against the regularized parameter.