
Correlation Between Screening Mammography Interpretive Performance on a Test Set and Performance in Clinical Practice.

Diana L Miglioretti, Laura Ichikawa, Robert A Smith, Diana S M Buist, Patricia A Carney, Berta Geller, Barbara Monsees, Tracy Onega, Robert Rosenberg, Edward A Sickles, Bonnie C Yankaskas, Karla Kerlikowske

Abstract

RATIONALE AND OBJECTIVES: Evidence is inconsistent about whether radiologists' interpretive performance on a screening mammography test set reflects their performance in clinical practice. This study aimed to estimate the correlation between test set and clinical performance and to determine whether the correlation is influenced by cancer prevalence or lesion difficulty in the test set.
MATERIALS AND METHODS: This institutional review board-approved study randomized 83 radiologists from six Breast Cancer Surveillance Consortium registries to assess one of four test sets of 109 screening mammograms each; 48 radiologists completed a fifth test set of 110 mammograms 2 years later. Test sets differed in the number of cancer cases and the difficulty of lesion detection. Test set sensitivity and specificity were estimated using woman-level and breast-level recall, with cancer status and expert opinion as gold standards. Clinical performance was estimated using woman-level recall with cancer status as the gold standard. Spearman rank correlations between test set and clinical performance, with 95% confidence intervals (CIs), were estimated.
RESULTS: For test sets with fewer cancers (N = 15) that were more difficult to detect, correlations were weak to moderate for sensitivity (woman level = 0.46, 95% CI = 0.16, 0.69; breast level = 0.35, 95% CI = 0.03, 0.61) and weak for specificity (0.24, 95% CI = 0.01, 0.45) relative to expert recall. Correlations for test sets with more cancers (N = 30) were close to 0 and not statistically significant.
CONCLUSIONS: Correlations between screening performance on a test set and performance in clinical practice are not strong. Test set performance more accurately reflects performance in clinical practice if cancer prevalence is low and lesions are challenging to detect.
Copyright © 2017 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
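The methods above estimate Spearman rank correlations between per-radiologist test-set and clinical performance, with 95% CIs. A minimal sketch of that kind of analysis is below, using entirely synthetic per-reader sensitivities (not the study's data) and a percentile bootstrap for the interval; the study's actual CI method is not specified in the abstract, so the bootstrap here is an assumption.

```python
# Illustrative sketch only: Spearman rank correlation between hypothetical
# test-set and clinical sensitivities for a cohort of readers, with a
# percentile-bootstrap 95% CI. All data below are synthetic.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical per-radiologist sensitivities (83 readers, as in the study).
n_readers = 83
clinical = rng.uniform(0.6, 0.95, n_readers)
test_set = np.clip(clinical + rng.normal(0.0, 0.08, n_readers), 0.0, 1.0)

# Point estimate of the rank correlation.
rho, _ = spearmanr(test_set, clinical)

# Percentile bootstrap: resample readers with replacement.
boot = []
for _ in range(2000):
    idx = rng.integers(0, n_readers, n_readers)
    r, _ = spearmanr(test_set[idx], clinical[idx])
    boot.append(r)
ci_lo, ci_hi = np.percentile(boot, [2.5, 97.5])

print(f"Spearman rho = {rho:.2f}, 95% CI = ({ci_lo:.2f}, {ci_hi:.2f})")
```

Resampling at the reader level (rather than the exam level) matches the unit of analysis implied by the abstract, where each radiologist contributes one test-set and one clinical performance estimate.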

Keywords:  Screening mammography; interpretive performance; test sets

Year:  2017        PMID: 28551400      PMCID: PMC5591765          DOI: 10.1016/j.acra.2017.03.016

Source DB:  PubMed          Journal:  Acad Radiol        ISSN: 1076-6332            Impact factor:   3.173


References (18 in total)

1.  Assessing mammographers' accuracy. A comparison of clinical and test performance.

Authors:  C M Rutter; S Taplin
Journal:  J Clin Epidemiol       Date:  2000-05       Impact factor: 6.437

2.  Marginal modeling of multilevel binary data with time-varying covariates.

Authors:  Diana L Miglioretti; Patrick J Heagerty
Journal:  Biostatistics       Date:  2004-07       Impact factor: 5.899

3.  Self-assessment in lifelong learning and improving performance in practice: physician know thyself.

Authors:  F Daniel Duffy; Eric S Holmboe
Journal:  JAMA       Date:  2006-09-06       Impact factor: 56.272

4.  A portrait of breast imaging specialists and of the interpretation of mammography in the United States.

Authors:  Rebecca S Lewis; Jonathan H Sunshine; Mythreyi Bhargavan
Journal:  AJR Am J Roentgenol       Date:  2006-11       Impact factor: 3.959

5.  Educational interventions to improve screening mammography interpretation: a randomized controlled trial.

Authors:  Berta M Geller; Andy Bogart; Patricia A Carney; Edward A Sickles; Robert Smith; Barbara Monsees; Lawrence W Bassett; Diana M Buist; Karla Kerlikowske; Tracy Onega; Bonnie C Yankaskas; Sebastien Haneuse; Deirdre Hill; Matthew G Wallis; Diana Miglioretti
Journal:  AJR Am J Roentgenol       Date:  2014-06       Impact factor: 3.959

6.  Effect of radiologists' diagnostic work-up volume on interpretive performance.

Authors:  Diana S M Buist; Melissa L Anderson; Robert A Smith; Patricia A Carney; Diana L Miglioretti; Barbara S Monsees; Edward A Sickles; Stephen H Taplin; Berta M Geller; Bonnie C Yankaskas; Tracy L Onega
Journal:  Radiology       Date:  2014-06-24       Impact factor: 11.105

7.  Certain performance values arising from mammographic test set readings correlate well with clinical audit.

Authors:  BaoLin Pauline Soh; Warwick Bruce Lee; Claudia Mello-Thoms; Kriscia Tapia; John Ryan; Wai Tak Hung; Graham Thompson; Rob Heard; Patrick Brennan
Journal:  J Med Imaging Radiat Oncol       Date:  2015-04-01       Impact factor: 1.735

8.  Association between time spent interpreting, level of confidence, and accuracy of screening mammography.

Authors:  Patricia A Carney; T Andrew Bogart; Berta M Geller; Sebastian Haneuse; Karla Kerlikowske; Diana S M Buist; Robert Smith; Robert Rosenberg; Bonnie C Yankaskas; Tracy Onega; Diana L Miglioretti
Journal:  AJR Am J Roentgenol       Date:  2012-04       Impact factor: 3.959

9.  Variability in interpretive performance at screening mammography and radiologists' characteristics associated with accuracy.

Authors:  Joann G Elmore; Sara L Jackson; Linn Abraham; Diana L Miglioretti; Patricia A Carney; Berta M Geller; Bonnie C Yankaskas; Karla Kerlikowske; Tracy Onega; Robert D Rosenberg; Edward A Sickles; Diana S M Buist
Journal:  Radiology       Date:  2009-10-28       Impact factor: 11.105

10.  If you don't find it often, you often don't find it: why some cancers are missed in breast cancer screening.

Authors:  Karla K Evans; Robyn L Birdwell; Jeremy M Wolfe
Journal:  PLoS One       Date:  2013-05-30       Impact factor: 3.240

Cited by (3 in total)

1.  Mammography self-evaluation online test for screening readers: an Italian Society of Medical Radiology (SIRM) initiative.

Authors:  Beniamino Brancato; Francesca Peruzzi; Calogero Saieva; Simone Schiaffino; Sandra Catarzi; Gabriella Gemma Risso; Andrea Cozzi; Serena Carriero; Massimo Calabrese; Stefania Montemezzi; Chiara Zuiani; Francesco Sardanelli
Journal:  Eur Radiol       Date:  2021-09-04       Impact factor: 5.315

2.  [Review] Fatigue in radiology: a fertile area for future research.

Authors:  Sian Taylor-Phillips; Chris Stinton
Journal:  Br J Radiol       Date:  2019-05-14       Impact factor: 3.039

3.  Artificial intelligence in breast cancer screening: primary care provider preferences.

Authors:  Nathaniel Hendrix; Brett Hauber; Christoph I Lee; Aasthaa Bansal; David L Veenstra
Journal:  J Am Med Inform Assoc       Date:  2021-06-12       Impact factor: 4.497

