PURPOSE: Most objective image quality metrics average over a wide range of image degradations; human clinicians, however, are biased toward different types of artifacts. Here, we aim to create a perceptual difference model, based on Case-PDM, that mimics the preferences of human observers toward different artifacts. METHOD: We measured how disturbing each artifact was to observers and calibrated the novel perceptual difference model (PDM) accordingly. To tune the new model, which we call Artifact-PDM, degradations were synthetically added to three healthy brain MR data sets. We considered four types of artifacts characteristic of standard compressed sensing (CS) reconstruction: noise, blur, aliasing, and "oil painting" (which appears as flattened, over-smoothed regions), each within a reasonable range of severity as measured by both PDM and visual inspection. After the model parameters were tuned on the synthetic images, we used a functional measurement theory pair-comparison experiment to measure how disturbing each artifact was to human observers and to determine the weight given to each artifact's PDM score. To validate Artifact-PDM, human ratings obtained from a Double Stimulus Continuous Quality Scale experiment were compared to the model's scores for noise, blur, aliasing, oil painting, and overall quality on a large set of CS-reconstructed MR images of varying quality. Finally, we used this new approach to compare CS to GRAPPA, a parallel MRI reconstruction algorithm. RESULTS: For the same Artifact-PDM score, human observers found incoherent aliasing the most disturbing artifact and noise the least. Artifact-PDM results were highly correlated with human ratings in both experiments. Optimized CS reconstruction quality compared favorably to GRAPPA's at the same sampling ratio.
CONCLUSIONS: We conclude that our novel metric faithfully represents human observer evaluation of artifacts and can be useful for evaluating CS and GRAPPA reconstruction algorithms, especially for studying artifact trade-offs.
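The weighting idea described in the METHOD section can be illustrated with a minimal sketch: per-artifact PDM scores are combined into a single Artifact-PDM rating using observer-derived weights. The exact combination rule and weight values used in the paper are not given in this abstract; the numbers below are hypothetical placeholders, ordered only so that aliasing (reported most disturbing) receives the largest weight and noise (least disturbing) the smallest.

```python
# Hedged sketch: combining per-artifact PDM scores with perceptual weights.
# The weight values are illustrative assumptions, NOT the paper's calibrated
# weights; only their ordering (aliasing > oil painting ~ blur > noise)
# reflects the abstract's reported observer preferences.
ARTIFACT_WEIGHTS = {
    "aliasing": 0.40,      # most disturbing to observers
    "oil_painting": 0.25,
    "blur": 0.20,
    "noise": 0.15,         # least disturbing to observers
}

def artifact_pdm(scores: dict[str, float]) -> float:
    """Weighted sum of per-artifact PDM scores (all on a common scale)."""
    return sum(ARTIFACT_WEIGHTS[k] * scores[k] for k in ARTIFACT_WEIGHTS)
```

With these placeholder weights, an image whose degradation is purely aliasing scores worse (higher) than one with an equal amount of pure noise, matching the reported observer bias.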