Ju-Chieh Kevin Cheng1,2, Connor Bevington2, Arman Rahmim2,3, Ivan Klyuzhin4, Julian Matthews5, Ronald Boellaard6,7, Vesna Sossi2. 1. Pacific Parkinson's Research Centre, The University of British Columbia, 2215 Wesbrook Mall, Vancouver, BC, V6T 1Z3, Canada. 2. Department of Physics and Astronomy, The University of British Columbia, 6224 Agricultural Road, Vancouver, BC, V6T 1Z1, Canada. 3. Department of Radiology, University of British Columbia, Vancouver, BC, V5Z 1M9, Canada. 4. Department of Medicine, Division of Neurology, University of British Columbia, Vancouver, BC, V6T 2B5, Canada. 5. Division of Neuroscience and Experimental Psychology, Wolfson Molecular Imaging Centre, The University of Manchester, Manchester, M20 3LJ, UK. 6. Department of Radiology and Nuclear Medicine, VU University Medical Center, De Boelelaan 1117, Amsterdam, 1081 HV, Netherlands. 7. Department of Nuclear Medicine and Molecular Imaging, University Medical Center Groningen, University of Groningen, Hanzeplein 1, 9713 KC, Groningen, Netherlands.
Abstract
PURPOSE: Reconstructed PET images are typically noisy, especially in dynamic imaging where the acquired data are divided into several short temporal frames. High noise in the reconstructed images translates to poor precision/reproducibility of image features. One important role of "denoising" is therefore to improve the precision of image features. However, typical denoising methods achieve noise reduction at the expense of accuracy. In this work, we present a novel four-dimensional (4D) denoised image reconstruction framework, which we validate using 4D simulations, experimental phantom data, and clinical patient data, to achieve 4D noise reduction while preserving spatiotemporal patterns and minimizing the error introduced by denoising. METHODS: Our proposed 4D denoising operator/kernel is based on HighlY constrained backPRojection (HYPR) and is applied either after each update of the OSEM reconstruction of dynamic 4D PET data or within the recently proposed kernelized reconstruction framework inspired by kernel methods in machine learning. Our HYPR4D kernel makes use of the spatiotemporal high-frequency features extracted from a 4D composite, generated within the reconstruction, to preserve the spatiotemporal patterns and constrain the 4D noise increment of the image estimate. RESULTS: Results from simulations, experimental phantom data, and patient data showed that the HYPR4D kernel with our proposed 4D composite outperformed other denoising methods, such as standard OSEM with a spatial filter, OSEM with a 4D filter, and the HYPR kernel method with the conventional 3D composite in conjunction with the recently proposed High Temporal Resolution kernel (HYPRC3D-HTR), in terms of 4D noise reduction while preserving the spatiotemporal patterns or 4D resolution within the 4D image estimate.
Consequently, the error in outcome measures obtained from the HYPR4D method was less dependent on the region size, contrast, and uniformity/functional patterns within the target structures than that of the other methods. For outcome measures that depend on spatiotemporal tracer uptake patterns, such as the nondisplaceable Binding Potential (BPND), the root mean squared error in the regional mean of voxel BPND values was reduced from ~8% (OSEM with a spatial or 4D filter) to ~3% using HYPRC3D-HTR and was further reduced to ~2% using our proposed HYPR4D method for relatively small target structures (~10 mm in diameter). At the voxel level, HYPR4D produced two to four times lower mean absolute error in BPND relative to HYPRC3D-HTR. CONCLUSION: Compared to conventional methods, our proposed HYPR4D method can produce more robust and accurate image features without requiring any prior information.
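The core idea of a HYPR-type constraint described above can be illustrated with a minimal one-dimensional sketch: a low-noise composite (e.g., formed from summed frames) carries the structural detail, while a low-pass-filtered ratio of frame to composite carries the frame-specific weighting. This is a simplified, hypothetical illustration of the general HYPR-LR weighting principle, not the authors' HYPR4D implementation (which operates on 4D spatiotemporal features inside the reconstruction); the filter, window width, and signal values below are arbitrary choices for demonstration.

```python
import numpy as np

def smooth(x, w=5):
    """Moving-average low-pass filter (a stand-in for the HYPR smoothing kernel)."""
    return np.convolve(x, np.ones(w) / w, mode="same")

def hypr_denoise(frame, composite, w=5, eps=1e-12):
    """HYPR-LR style constraint: the composite supplies structure;
    the smoothed frame/composite ratio supplies the per-frame weighting."""
    weight = smooth(frame, w) / (smooth(composite, w) + eps)
    return composite * weight

rng = np.random.default_rng(0)
truth = np.zeros(200)
truth[80:120] = 10.0                               # hot structure in one frame
composite = 4.0 * truth                            # low-noise composite (e.g., summed frames)
frame = truth + rng.normal(0.0, 2.0, truth.size)   # one noisy dynamic frame

denoised = hypr_denoise(frame, composite)
print(np.std(frame - truth), np.std(denoised - truth))
```

Because the weighting is computed from smoothed quantities, frame noise is suppressed, while the sharp edges of the composite are retained in the output rather than being blurred by the filter, which is the accuracy-preserving property the abstract emphasizes.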