Fang Liu, Hyungseok Jang, Richard Kijowski, Gengyan Zhao, Tyler Bradshaw, Alan B McMillan.
Abstract
BACKGROUND: To develop and evaluate the feasibility of a data-driven deep learning approach (deepAC) for positron emission tomography (PET) image attenuation correction without anatomical imaging.
METHODS: A PET attenuation correction pipeline was developed that utilizes deep learning to generate continuously valued pseudo-computed tomography (CT) images from uncorrected 18F-fluorodeoxyglucose (18F-FDG) PET images. A deep convolutional encoder-decoder network was trained to identify tissue contrast in volumetric uncorrected PET images co-registered to CT data. A set of 100 retrospective 3D FDG PET head images was used to train the model. The model was evaluated in another 28 patients by comparing the generated pseudo-CT to the acquired CT using the Dice coefficient and mean absolute error (MAE), and finally by comparing PET images reconstructed using the pseudo-CT and the acquired CT for attenuation correction. Paired-sample t tests were used to compare PET reconstruction error under deepAC with CT-based attenuation correction.
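The two image-similarity metrics named above, Dice coefficient and MAE, can be sketched as below. This is an illustrative reimplementation with toy data, not the authors' code; the bone threshold and volume shapes are assumptions for the example.

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice overlap between two binary masks (e.g., thresholded bone)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

def mean_absolute_error(pseudo_ct, acquired_ct):
    """Voxel-wise mean absolute error (in HU for CT volumes)."""
    return np.abs(np.asarray(pseudo_ct, float) - np.asarray(acquired_ct, float)).mean()

# Toy volumes standing in for co-registered 3D CT data in HU
rng = np.random.default_rng(0)
ct = rng.normal(0.0, 200.0, size=(8, 8, 8))
pseudo = ct + rng.normal(0.0, 20.0, size=ct.shape)

bone_true = ct > 300       # 300 HU bone threshold, as an assumed example
bone_pred = pseudo > 300
print(dice_coefficient(bone_pred, bone_true))
print(mean_absolute_error(pseudo, ct))
```

In the study these metrics were computed between each patient's pseudo-CT and acquired CT over the 28-patient test set.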
Keywords: Attenuation correction; CT; Deep learning; MRI; PET; PET/CT; PET/MR
Year: 2018 PMID: 30417316 PMCID: PMC6230542 DOI: 10.1186/s40658-018-0225-8
Source DB: PubMed Journal: EJNMMI Phys ISSN: 2197-7364
Fig. 1 Schematic illustration of the convolutional encoder-decoder used in this study. The network contains multiple symmetric shortcut connections (SC) from a start layer (SL) in the encoder to an insert layer (IL) in the decoder. The insertion of SCs follows the strategy of the deep residual network described in Reference [22]
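The symmetric-shortcut idea in Fig. 1 can be sketched at the shape level: encoder features saved on the way down are added, residual-style, to decoder features of matching resolution on the way up. This minimal NumPy sketch substitutes average pooling and nearest-neighbour upsampling for the network's learned convolution/deconvolution layers; it illustrates the data flow only, not the actual trained model.

```python
import numpy as np

def avg_pool2(x):
    """2x downsampling (stand-in for a strided conv in the encoder)."""
    h, w = x.shape  # h and w must be even at every pooling level
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample2(x):
    """2x nearest-neighbour upsampling (stand-in for a decoder deconv)."""
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

def encoder_decoder_with_shortcuts(x, depth=3):
    """Run x down the encoder, back up the decoder, and add each
    encoder feature (start layer) to the matching decoder feature
    (insert layer) via a symmetric shortcut connection."""
    feats = [x]
    for _ in range(depth):                   # encoder: downsample
        feats.append(avg_pool2(feats[-1]))
    y = feats[-1]
    for level in range(depth - 1, -1, -1):   # decoder: upsample + shortcut
        y = upsample2(y) + feats[level]      # residual-style addition
    return y

img = np.arange(64, dtype=float).reshape(8, 8)
out = encoder_decoder_with_shortcuts(img, depth=2)
print(out.shape)  # → (8, 8): output matches the input resolution
```

The shortcut additions let low-level spatial detail from the encoder bypass the bottleneck, which is the stated motivation for the SC/SL/IL design in the figure.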
Fig. 2 Schematic illustration of deepAC. The process consists of a training phase and a reconstruction phase. The training phase is performed first with non-attenuation-corrected (NAC) PET and co-registered CT data, after which the trained network is fixed and used to generate pseudo-CTs for new PET data in the reconstruction phase
Fig. 3 Example of a pseudo-CT image from deepAC. Multiple axial slices from a the input NAC PET image, b the pseudo-CT generated using deepAC, and c the acquired CT. The 3D surface and bone models indicate high similarity between the acquired CT and the pseudo-CT; the surface and bone were rendered at HU values of −400 and 300, respectively. The training loss curve is shown in d
Fig. 4 Example of a pseudo-CT image from a non-compliant subject. Axial and sagittal slices from a the input NAC PET image, b the pseudo-CT generated using deepAC, and c the acquired CT. Note the noticeable movement between the PET and CT scans (red arrow). The pseudo-CT generated by deepAC is free from this subject motion because it is derived directly from the PET data
Fig. 5 PET reconstruction using a deepAC and b acquired CT-based attenuation correction (CTAC) for a 48-year-old female subject. c Relative error, calculated against the PET image reconstructed using CTAC. Low PET reconstruction error is observed with the proposed deepAC approach
Fig. 6 PET reconstruction using a deepAC and b acquired CT-based attenuation correction (CTAC) for an 80-year-old female with a significant right and frontal skull abnormality. The missing parts of the skull are indicated by red arrows in the acquired CT image. Low PET reconstruction error is observed with the proposed deepAC approach despite the skull abnormality
Fig. 7 PET reconstruction using a deepAC and b acquired CT-based attenuation correction (CTAC) for a 59-year-old male with a brain tumor. The tumor region is indicated by a red arrow in the CTAC PET image. Low PET reconstruction error is observed with the proposed deepAC approach in the presence of brain metastasis
Image error (mean ± standard deviation (minimum, maximum)) relative to CT-based attenuation correction of PET images reconstructed using deepAC in various brain regions of 28 subjects, with p values from paired t tests. p < 0.0024 is the Bonferroni-corrected significance level (0.05 divided across the 21 regions)
| Brain regions | deepAC error (%) | deepAC absolute error (%) | p value |
|---|---|---|---|
| Frontal lobe left | − 1.04 ± 2.35 (− 6.67, 3.32) | 2.43 ± 1.57 (0.65, 6.67) | 0.18 |
| Frontal lobe right | − 1.15 ± 2.56 (− 6.61, 4.06) | 2.59 ± 1.69 (0.82, 6.62) | 0.15 |
| Temporal lobe left | − 0.79 ± 1.70 (− 4.45, 2.22) | 2.11 ± 0.94 (1.07, 4.67) | 0.18 |
| Temporal lobe right | − 0.73 ± 1.99 (− 4.77, 2.68) | 2.32 ± 0.96 (1.10, 4.88) | 0.055 |
| Parietal lobe left | − 1.70 ± 2.25 (− 5.56, 2.22) | 2.52 ± 1.63 (0.51, 5.56) | 0.01 |
| Parietal lobe right | − 1.92 ± 2.38 (− 5.60, 2.28) | 2.79 ± 1.61 (0.69, 5.60) | 0.005 |
| Occipital lobe left | − 1.78 ± 1.95 (− 6.24, 1.40) | 2.38 ± 1.43 (0.69, 6.26) | 0.01 |
| Occipital lobe right | − 1.92 ± 2.15 (− 6.12, 2.81) | 2.75 ± 1.25 (0.89, 6.12) | 0.004 |
| Cerebellum left | − 0.22 ± 1.62 (− 3.86, 2.68) | 1.70 ± 0.76 (0.67, 3.94) | 0.154 |
| Cerebellum right | − 0.27 ± 1.78 (− 3.79, 2.83) | 1.78 ± 0.85 (0.54, 3.79) | 0.146 |
| Brainstem | 0.69 ± 1.79 (− 3.20, 3.81) | 1.77 ± 0.88 (0.74, 3.81) | 0.354 |
| Caudate nucleus left | 0.37 ± 1.71 (− 3.38, 3.69) | 1.50 ± 0.85 (0.43, 3.69) | 0.613 |
| Caudate nucleus right | 0.32 ± 1.64 (− 3.47, 3.21) | 1.33 ± 0.86 (0.33, 3.47) | 0.451 |
| Putamen left | − 0.67 ± 1.58 (− 3.80, 2.31) | 1.41 ± 1.00 (0.24, 3.80) | 0.115 |
| Putamen right | − 0.74 ± 1.53 (− 3.94, 2.08) | 1.40 ± 1.02 (0.27, 3.94) | 0.009 |
| Thalamus left | − 0.07 ± 1.59 (− 3.90, 3.05) | 1.40 ± 0.88 (0.31, 3.91) | 0.172 |
| Thalamus right | 0.00 ± 1.56 (− 4.04, 3.27) | 1.34 ± 0.91 (0.20, 4.04) | 0.283 |
| Globus pallidus left | − 0.39 ± 1.51 (− 3.71, 3.11) | 1.25 ± 0.91 (0.12, 3.71) | 0.056 |
| Globus pallidus right | − 0.56 ± 1.41 (− 3.83, 2.35) | 1.24 ± 0.92 (0.29, 3.83) | 0.013 |
| Cingulate region left | − 0.50 ± 1.68 (− 4.02, 2.70) | 1.61 ± 0.95 (0.35, 4.02) | 0.049 |
| Cingulate region right | − 0.45 ± 1.63 (− 3.81, 2.28) | 1.58 ± 0.87 (0.35, 3.81) | 0.057 |
| All regions | − 0.64 ± 1.99 (− 4.18, 2.22) | 1.74 ± 0.94 (0.29, 4.20) | |
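The statistical analysis in the table — a paired t test per region with a Bonferroni-corrected threshold of 0.05/21 ≈ 0.0024 — can be sketched as below. The region-mean uptake values here are simulated placeholders; the real analysis used measured PET values from the 28 test patients.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_subjects, n_regions = 28, 21   # 28 test patients, 21 brain regions

# Hypothetical region-mean uptake per subject (simulated, for illustration)
ctac = rng.normal(1.0, 0.1, size=(n_subjects, n_regions))
deepac = ctac * (1.0 + rng.normal(-0.006, 0.02, size=ctac.shape))

alpha = 0.05
bonferroni = alpha / n_regions   # 0.05 / 21 ~ 0.0024, as in the table caption

# Paired t test per region: same subjects under both corrections
p_values = [stats.ttest_rel(deepac[:, r], ctac[:, r]).pvalue
            for r in range(n_regions)]
significant = [p < bonferroni for p in p_values]
print(round(bonferroni, 4))   # → 0.0024
print(sum(significant))
```

A paired test is the right choice here because each subject contributes one measurement under each attenuation-correction method, so per-subject differences remove between-subject variability.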