Qianyi Zhan, Yuanyuan Liu, Yuan Liu, Wei Hu.
Abstract
18F-FDG positron emission tomography (PET) imaging of brain glucose use and amyloid accumulation is a research criterion for Alzheimer's disease (AD) diagnosis. Several PET studies have shown widespread metabolic deficits in the frontal cortex of AD patients, so studying frontal cortex changes is of great importance for AD research. This paper aims to segment the frontal cortex in brain PET imaging using deep neural networks. A learning framework called the Frontal cortex Segmentation model of brain PET imaging (FSPET) is proposed to tackle this problem. It incorporates the anatomical prior of the frontal cortex into a segmentation model based on a conditional generative adversarial network and a convolutional auto-encoder. The FSPET method is evaluated on a dataset of 30 brain PET images with ground truth annotated by a radiologist. Results that outperform the other baselines demonstrate the effectiveness of the FSPET framework.
Keywords: Alzheimer's disease; PET; brain image segmentation; conditional generative adversarial network; convolutional auto-encoder
Year: 2021 PMID: 34955739 PMCID: PMC8694272 DOI: 10.3389/fnins.2021.796172
Source DB: PubMed Journal: Front Neurosci ISSN: 1662-453X Impact factor: 4.677
Figure 1 A brain positron emission tomography (PET)/computed tomography (CT) fusion image. The first row shows the PET imaging, the second row the CT imaging, and the third row the fused PET/CT imaging. Each row, from left to right, shows (a) the coronal section, (b) the median sagittal section, and (c) the transverse section.
Figure 2 The frontal cortex in the brain: the left shows its anatomical location, and the right shows it on 18F-FDG positron emission tomography (PET) imaging.
Figure 3 Framework of the proposed FSPET model based on a conditional generative adversarial network (cGAN) and a convolutional auto-encoder (CAE).
Figure 4 FSPET architecture: (A) convolutional auto-encoder (CAE), (B) Generator (G), and (C) Discriminator (D).
Losses in the proposed FSPET model.

| Loss | Computed between | Network component |
|---|---|---|
| Dice | Prediction and ground truth | U-net G |
| BCE | Prediction and input | Discriminator D |
| Euclidean | Prediction and ground truth | Encoder in CAE |
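The three losses in the table above can be sketched as follows. This is an illustrative NumPy sketch only: the function names, the weights `lam_adv` and `lam_shape`, and the way the terms are summed are assumptions for exposition, not the paper's actual formulation.

```python
import numpy as np

def dice_loss(pred, target, eps=1e-7):
    """Soft Dice loss between predicted mask and ground truth (drives U-net G)."""
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def bce_loss(d_out, label, eps=1e-7):
    """Binary cross-entropy on the discriminator D's real/fake scores."""
    d_out = np.clip(d_out, eps, 1.0 - eps)
    return float(-np.mean(label * np.log(d_out) + (1 - label) * np.log(1 - d_out)))

def euclidean_loss(z_pred, z_gt):
    """Euclidean distance between CAE-encoder codes of prediction and ground truth."""
    return float(np.sqrt(np.sum((z_pred - z_gt) ** 2)))

def fspet_loss(pred, target, d_out, z_pred, z_gt, lam_adv=1.0, lam_shape=1.0):
    """Combined objective: segmentation (Dice) + adversarial (BCE) + shape prior (Euclidean).

    The equal weighting here is a placeholder; the paper's trade-off
    coefficients are not given in this record.
    """
    return (dice_loss(pred, target)
            + lam_adv * bce_loss(d_out, np.ones_like(d_out))
            + lam_shape * euclidean_loss(z_pred, z_gt))
```

For a perfect prediction, the Dice term and the Euclidean term both vanish, and the remaining BCE term measures how well the generator fools the discriminator.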
Quantitative assessment of U-net (Ronneberger et al., 2015), ACNN (Oktay et al., 2017), cGAN-Unet (Singh et al., 2018), and the FSPET model.
| Method | | | | | |
|---|---|---|---|---|---|
| U-net | 71.03 ± 21.37 | 55.04 ± 20.43 | 72.29 ± 26.76 | 38.73 ± 30.46 | |
| ACNN | 74.57 ± 18.34 | 59.45 ± 19.17 | 78.37 ± 23.85 | 97.24 ± 1.88 | 35.48 ± 27.83 |
| cGAN-Unet | 79.04 ± 19.59 | 65.34 ± 14.45 | 79.75 ± 21.53 | 97.18 ± 2.57 | 30.32 ± 29.12 |
| FSPET | 96.93 ± 2.27 | | | | |
Bold results indicate the best scores.
Figure 5 Frontal cortex segmentation in the median sagittal section of brain positron emission tomography (PET) imaging using U-net, ACNN, cGAN-Unet, and the FSPET model. The ground truth and the predicted contour are in red and black, respectively. (A) U-net. (B) ACNN. (C) cGAN-Unet. (D) FSPET.