Performance results for a workstation-integrated radiology peer review quality assurance program.

Margaret M O'Keeffe, Todd M Davis, Kerry Siminoski.

Abstract

OBJECTIVE: To assess review completion rates, RADPEER score distribution, and sources of disagreement when using a workstation-integrated radiology peer review program, and to evaluate radiologist perceptions of the program.
DESIGN: Retrospective review of prospectively collected data.
SETTING: Large private outpatient radiology practice.
PARTICIPANTS: Radiologists (n = 66) with a mean of 16.0 (standard deviation, 9.2) years of experience.
INTERVENTIONS: Prior studies and reports of cases being actively reported were randomly selected for peer review using the RADPEER scoring system (a 4-point scale, with a score of 1 indicating agreement and scores of 2-4 indicating increasing levels of disagreement).
MAIN OUTCOME MEASURES: Assigned peer review completion rates, review scores, sources of disagreement and radiologist survey responses.
RESULTS: Of 31 293 assigned cases, 29 044 (92.8%; 95% CI 92.5-93.1%) were reviewed. Discrepant scores (score = 2, 3 or 4) were given in 0.69% (95% CI 0.60-0.79%) of cases and clinically significant discrepancy (score = 3 or 4) was assigned in 0.42% (95% CI 0.35-0.50%). The most common cause of disagreement was missed diagnosis (75.2%; 95% CI 66.8-82.1%). By anonymous survey, 94% of radiologists felt that peer review was worthwhile, 90% reported that the scores they received were appropriate and 78% felt that the received feedback was valuable.
CONCLUSION: Workstation-based peer review can increase completion rates and levels of radiologist acceptance while producing RADPEER scores similar to those previously reported. This approach may be one way to increase radiologist engagement in peer review quality assurance.
© The Author 2016. Published by Oxford University Press in association with the International Society for Quality in Health Care; all rights reserved.
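
As a rough check on the RESULTS arithmetic, the short Python sketch below reproduces the reported proportions and 95% confidence intervals using a normal-approximation (Wald) interval. The abstract does not state which interval method the authors used, and the discrepancy count (roughly 200 of 29 044) is inferred from the reported percentage, so small rounding differences from the published figures are expected.

import math

def wald_ci(successes, n, z=1.96):
    # Proportion and normal-approximation (Wald) 95% confidence interval.
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p, p - half, p + half

# Completion rate: 29 044 of 31 293 assigned cases were reviewed.
p, lo, hi = wald_ci(29044, 31293)
print(f"completion: {p:.1%} (95% CI {lo:.1%}-{hi:.1%})")   # 92.8% (92.5%-93.1%)

# Discrepant scores (2, 3 or 4): 0.69% of 29 044 reviews, roughly 200 cases
# (count inferred from the reported percentage; not given in the abstract).
p, lo, hi = wald_ci(200, 29044)
print(f"discrepant: {p:.2%} (95% CI {lo:.2%}-{hi:.2%})")   # 0.69% (0.59%-0.78%)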

Keywords:  consensus methods; human factors; medical errors; peer assessment; quality improvement; quality indicators

Year:  2016        PMID: 26892609     DOI: 10.1093/intqhc/mzw017

Source DB:  PubMed          Journal:  Int J Qual Health Care        ISSN: 1353-4505            Impact factor:   2.038


Related articles:  3 in total

1.  Implementation and Validation of PACS-Integrated Peer Review for Discrepancy Recording of Radiology Reporting.

Authors:  A W Olthof; P M A van Ooijen
Journal:  J Med Syst       Date:  2016-07-21       Impact factor: 4.460

2.  Quality Assurance of a Cross-Border and Sub-Specialized Teleradiology Service.

Authors:  Szabolcs Hetenyi; Leonie Goelz; Alexander Boehmcker; Carlos Schorlemmer
Journal:  Healthcare (Basel)       Date:  2022-05-28

3.  Added value of double reading in diagnostic radiology, a systematic review. (Review)

Authors:  Håkan Geijer; Mats Geijer
Journal:  Insights Imaging       Date:  2018-03-28