
Interrater variation in scoring radiological discrepancies.

B Mucci, H Murray, A Downie, K Osborne.

Abstract

OBJECTIVE: Discrepancy meetings are an important aspect of clinical governance. The Royal College of Radiologists has published advice on how to conduct such meetings, recommending that discrepancies be scored on the scale: 0 = no error, 1 = minor error, 2 = moderate error and 3 = major error. We have noticed variation in the scores that radiologists attribute to individual cases and have sought to quantify this variation at our meetings.
METHODS: The scores from six discrepancy meetings, totalling 161 scored events, were collected. The reliability of scoring was measured using Fleiss' kappa, which quantifies agreement in categorical classification among multiple raters over and above chance (a minimal computational sketch follows the abstract).
RESULTS: The number of cases rated at the six meetings ranged from 18 to 31 (mean 27), and the number of raters ranged from 11 to 16 (mean 14). Only cases scored by every rater were included in the analysis. Fleiss' kappa ranged from 0.12 to 0.20 across the six meetings, with a mean of 0.17.
CONCLUSION: A kappa of 1.0 indicates perfect agreement above chance, and 0.0 indicates agreement no better than chance. A common rule of thumb is that a kappa ≥0.70 indicates adequate interrater agreement. Our mean result of 0.172 therefore shows poor agreement between scorers. This could indicate a problem with the scoring system, or it may indicate a need for more formal training and consensus on how scores are applied.
ADVANCES IN KNOWLEDGE: Scoring of radiology discrepancies is highly subjective and shows poor interrater agreement.
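For readers unfamiliar with the statistic, Fleiss' kappa compares the observed per-case agreement among raters with the agreement expected by chance from the overall score frequencies. The following is a minimal sketch in Python of the standard formula (not the authors' code); the rating table is hypothetical, illustrating a 14-rater meeting scored on the 0-3 scale, and is not the study's data.

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa for a cases x categories table of rating counts.

    counts[i, j] = number of raters assigning case i to score category j;
    every row must sum to the same number of raters.
    """
    counts = np.asarray(counts, dtype=float)
    n_cases = counts.shape[0]
    n_raters = counts[0].sum()

    # Overall proportion of assignments falling in each score category
    p_j = counts.sum(axis=0) / (n_cases * n_raters)

    # Per-case agreement: fraction of rater pairs that agree on that case
    p_i = (np.square(counts).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))

    p_bar = p_i.mean()          # mean observed agreement
    p_e = np.square(p_j).sum()  # agreement expected by chance
    return (p_bar - p_e) / (1 - p_e)

# Hypothetical table: 5 cases, 14 raters each, score categories 0-3
table = [
    [2, 6, 4, 2],
    [0, 3, 9, 2],
    [1, 8, 4, 1],
    [4, 7, 2, 1],
    [0, 2, 5, 7],
]
print(round(fleiss_kappa(table), 3))  # ~0.06: poor agreement above chance
```

The same table run through statsmodels' statsmodels.stats.inter_rater.fleiss_kappa should give the same value; the per-meeting tables in the study would simply be larger (18-31 cases, 11-16 raters).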

Year:  2013        PMID: 23833035      PMCID: PMC3745061          DOI: 10.1259/bjr.20130245

Source DB:  PubMed          Journal:  Br J Radiol        ISSN: 0007-1285            Impact factor:   3.039


References (17 in total; first 10 shown)

1.  The joint commission practice performance evaluation: a primer for radiologists.

Authors:  Joseph R Steele; David M Hovsepian; Don F Schomer
Journal:  J Am Coll Radiol       Date:  2010-06       Impact factor: 5.532

2.  Radiological error: analysis, standard setting, targeted instruction and teamworking.

Authors:  Richard FitzGerald
Journal:  Eur Radiol       Date:  2005-02-23       Impact factor: 5.315

3.  Accuracy of diagnostic procedures: has it improved over the past five decades?

Authors:  Leonard Berlin
Journal:  AJR Am J Roentgenol       Date:  2007-05       Impact factor: 3.959

4.  RADPEER quality assurance program: a multifacility study of interpretive disagreement rates.

Authors:  James P Borgstede; Rebecca S Lewis; Mythreyi Bhargavan; Jonathan H Sunshine
Journal:  J Am Coll Radiol       Date:  2004-01       Impact factor: 5.532

5. (Review) Radiologic errors and malpractice: a blurry distinction.

Authors:  Leonard Berlin
Journal:  AJR Am J Roentgenol       Date:  2007-09       Impact factor: 3.959

6. (Review) RADPEER scoring white paper.

Authors:  Valerie P Jackson; Trudie Cushing; Hani H Abujudeh; James P Borgstede; Kenneth W Chin; Charles K Grimes; David B Larson; Paul A Larson; Robert S Pyatt; William T Thorwarth
Journal:  J Am Coll Radiol       Date:  2009-01       Impact factor: 5.532

7. (Review) Peer review in diagnostic radiology: current state and a vision for the future.

Authors:  Shmuel Mahgerefteh; Jonathan B Kruskal; Chun S Yam; Arye Blachar; Jacob Sosna
Journal:  Radiographics       Date:  2009-06-29       Impact factor: 5.333

8.  A reference standard-based quality assurance program for radiology.

Authors:  Patrick T Liu; C Daniel Johnson; Rafael Miranda; Maitray D Patel; Carrie J Phillips
Journal:  J Am Coll Radiol       Date:  2010-01       Impact factor: 5.532

9.  The measurement of observer agreement for categorical data.

Authors:  J R Landis; G G Koch
Journal:  Biometrics       Date:  1977-03       Impact factor: 2.571

10.  Managing errors in radiology: a working model.

Authors:  C Melvin; R Bodley; A Booth; T Meagher; C Record; P Savage
Journal:  Clin Radiol       Date:  2004-09       Impact factor: 2.350

Cited by (10 in total)

1.  Blurred digital mammography images: an analysis of technical recall and observer detection performance.

Authors:  Wang Kei Ma; Rita Borgen; Judith Kelly; Sara Millington; Beverley Hilton; Rob Aspin; Carla Lança; Peter Hogg
Journal:  Br J Radiol       Date:  2017-01-30       Impact factor: 3.039

2.  Utility of contemporaneous dual read in the setting of emergency teleradiology reporting.

Authors:  Anjali Agrawal; D B Koundinya; Jayadeepa Srinivas Raju; Anurag Agrawal; Arjun Kalyanpur
Journal:  Emerg Radiol       Date:  2016-11-18

3.  PI-RADS version 2.1 scoring system is superior in detecting transition zone prostate cancer: a diagnostic study.

Authors:  Zhibing Wang; Wenlu Zhao; Junkang Shen; Zhen Jiang; Shuo Yang; Shuangxiu Tan; Yueyue Zhang
Journal:  Abdom Radiol (NY)       Date:  2020-09-09

4.  Interobserver agreement in the interpretation of outpatient head CT scans in an academic neuroradiology practice.

Authors:  G Guérin; S Jamali; C A Soto; F Guilbert; J Raymond
Journal:  AJNR Am J Neuroradiol       Date:  2014-07-24       Impact factor: 3.825

5.  Increasing neuroradiology exam volumes on-call do not result in increased major discrepancies in primary reads performed by residents.

Authors:  Jared T Verdoorn; Christopher H Hunt; Marianne T Luetmer; Christopher P Wood; Laurence J Eckel; Kara M Schwartz; Felix E Diehn; David F Kallmes
Journal:  Open Neuroimag J       Date:  2015-01-27

6.  Radiologist-initiated double reading of abdominal CT: retrospective analysis of the clinical importance of changes to radiology reports.

Authors:  Peter Mæhre Lauritzen; Jack Gunnar Andersen; Mali Victoria Stokke; Anne Lise Tennstrand; Rolf Aamodt; Thomas Heggelund; Fredrik A Dahl; Gunnar Sandbæk; Petter Hurlen; Pål Gulbrandsen
Journal:  BMJ Qual Saf       Date:  2016-03-24       Impact factor: 7.035

7.  Analysis of TaqMan Array Cards Data by an Assumption-Free Improvement of the maxRatio Algorithm Is More Accurate than the Cycle-Threshold Method.

Authors:  Luigi Marongiu; Eric Shain; Lydia Drumright; Reidun Lillestøl; Donald Somasunderam; Martin D Curran
Journal:  PLoS One       Date:  2016-11-09       Impact factor: 3.240

8. (Review) Optimizing Professional Practice Evaluation to Enable a Nonpunitive Learning Health System Approach to Peer Review.

Authors:  Christy I Sandborg; Gary E Hartman; Felice Su; Glyn Williams; Beate Teufe; Nina Wixson; David B Larson; Lane F Donnelly
Journal:  Pediatr Qual Saf       Date:  2020-12-28

9.  Automated vs. human evaluation of corneal staining.

Authors:  R Kourukmas; M Roth; G Geerling
Journal:  Graefes Arch Clin Exp Ophthalmol       Date:  2022-03-31       Impact factor: 3.535

10.  Development and Validation of a Standardized Tool for Prioritization of Information Sources.

Authors:  Holy Akwar; Harold Kloeze; Shamir Mukhi
Journal:  Online J Public Health Inform       Date:  2016-09-15