
Automated quality control assessment of clinical chest images.

Charles E Willis1, Thomas K Nishino1, Jered R Wells2, H Asher Ai1, Joshua M Wilson2, Ehsan Samei2,3.   

Abstract

PURPOSE: The purpose of this study was to determine whether a proposed suite of objective image quality metrics for digital chest radiographs is useful for monitoring image quality in a clinical setting different from the one in which the metrics were developed.
METHODS: Three groups of clinical chest images were chosen for analysis: 17 digital PA chest radiographs from a GE Discovery DR unit ("standard-of-care" images; Group 1), 17 gridless AP chest radiographs from a GE Optima portable digital radiography (DR) unit ("sub-standard" images; Group 2), and 15 gridless (non-routine) PA chest radiographs from the Discovery DR unit (images with a gross technical error; Group 3). Group 2 images were acquired at a lower kVp (100 vs 125) and a shorter source-to-image distance (127 cm vs 183 cm) and were expected to have lower quality than Group 1 images. Group 3 images were expected to show degraded contrast relative to Group 1 images. Images were anonymized and securely transferred to the Duke University Clinical Imaging Physics Group for analysis using software described and validated previously. Individual image quality was reported in terms of lung gray level, lung detail, lung noise, rib-lung contrast, rib sharpness, mediastinum detail, mediastinum noise, mediastinum alignment, subdiaphragm-lung contrast, and subdiaphragm area. Metrics were compared across groups. To improve the precision of means and confidence intervals for routine exams, an additional 66 PA images were acquired, processed, and pooled with Group 1. Three observer studies were conducted to assess whether humans could identify the images classified by the algorithm as abnormal.
RESULTS: Metrics agreed with published Quality Consistency Ranges with three exceptions: higher lung gray level, lower rib-lung contrast, and lower subdiaphragm-lung contrast. Higher (stored) bit depth (14 vs 12) accounted for higher lung gray level values in our images. Values were most internally consistent for Group 1. The most sensitive metric for distinguishing between groups was mediastinum noise, followed closely by lung noise. The least sensitive metrics were mediastinum detail and rib-lung contrast. The algorithm was more sensitive than human observers at detecting suboptimal diagnostic quality images.
CONCLUSIONS: The software appears promising for objectively and automatically identifying suboptimal images in a clinical imaging operation. The results can be used to establish local quality consistency ranges and action limits per facility preferences.
© 2018 American Association of Physicists in Medicine.
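The RESULTS note that a higher stored bit depth (14 vs 12) accounts for the higher lung gray-level values. A minimal sketch of that arithmetic (hypothetical helper names, not part of the paper's software): re-expressing the same signal on a 14-bit scale multiplies stored pixel values by 2^(14-12) = 4.

```python
# Hypothetical illustration of why stored bit depth shifts gray-level metrics;
# the function names are assumptions for this sketch, not the study's code.

def max_gray_level(bit_depth: int) -> int:
    """Largest storable pixel value for a given stored bit depth."""
    return 2 ** bit_depth - 1

def to_14bit(value_12bit: int) -> int:
    """Map a 12-bit pixel value onto the 14-bit scale (factor of 2**2 = 4)."""
    return value_12bit * (max_gray_level(14) + 1) // (max_gray_level(12) + 1)

# A mid-range 12-bit lung gray level of 2048 becomes 8192 at 14 bits, so
# identical exposures yield ~4x higher stored gray-level values, outside a
# Quality Consistency Range derived from 12-bit data.
print(max_gray_level(12), max_gray_level(14), to_14bit(2048))
```

This is why a gray-level metric can fall outside published Quality Consistency Ranges purely because of storage convention, with no change in actual image quality.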


Keywords:  data analytics; detector performance; digital radiography; quality assurance; quality control


Year:  2018        PMID: 30053326     DOI: 10.1002/mp.13107

Source DB:  PubMed          Journal:  Med Phys        ISSN: 0094-2405            Impact factor:   4.071


Related articles:  1 in total

1.  Medical physics 3.0 versus 1.0: A case study in digital radiography quality control.

Authors:  Diana E Carver; Charles E Willis; Paul J Stauduhar; Thomas K Nishino; Jered R Wells; Ehsan Samei
Journal:  J Appl Clin Med Phys       Date:  2018-08-17       Impact factor: 2.102

