
Performance assessment for radiologists interpreting screening mammography.

D B Woodard, A E Gelfand, W E Barlow, J G Elmore.

Abstract

When interpreting screening mammograms, radiologists decide whether suspicious abnormalities exist that warrant recalling the patient for further testing. Previous work has found significant differences in interpretation among radiologists: their false-positive and false-negative rates vary widely. Performance assessments of individual radiologists have been mandated by the U.S. government, but concern exists about the adequacy of current assessment techniques. We use hierarchical modelling techniques to draw inferences about the interpretive performance of individual radiologists in screening mammography, while accounting for differences due to patient mix and radiologist attributes (for instance, years of experience or interpretive volume). We model at the mammogram level and then use these models to assess radiologist performance. Our approach is demonstrated with data from mammography registries and radiologist surveys. For each mammogram, the registries record whether the woman was found to have breast cancer within one year of the mammogram; this criterion is used to determine whether the recall decision was correct. We model the false-positive rate and the false-negative rate separately using logistic regression on patient risk factors and radiologist random effects. The radiologist random effects are, in turn, regressed on radiologist attributes such as the number of years in practice. Using these Bayesian hierarchical models, we examine several radiologist performance metrics. The first is the difference between the false-positive or false-negative rate of a particular radiologist and that of a hypothetical 'standard' radiologist with the same attributes and the same patient mix. A second metric predicts the performance of each radiologist on hypothetical mammography exams with particular combinations of patient risk factors (which we characterize as 'typical', 'high-risk', or 'low-risk'). The second metric can be used to compare one radiologist to another, while the first metric addresses how a radiologist is performing relative to an appropriate standard. Interval estimates are given for the metrics, thereby addressing uncertainty. The particular novelty of our contribution is the estimation of multiple performance rates (sensitivity and specificity); one can even estimate a continuum of performance rates, such as a performance curve or ROC curve, using our models, and we describe how this may be done. In addition to assessing radiologists in the original data set, we also show how to draw inferences about the performance of a new radiologist with a new case mix, new outcome data, and new attributes without refitting the model. Copyright (c) 2006 John Wiley & Sons, Ltd.
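The two-level structure described in the abstract (a mammogram-level logistic regression with radiologist random effects that are themselves regressed on radiologist attributes) can be sketched as a simulation. This is a minimal illustration only: the sizes, attribute names, and parameter values below are invented for the example and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes and parameter values (not taken from the paper)
n_radiologists, n_exams = 20, 200

# One radiologist attribute z_j (say, standardized years in practice)
z = rng.normal(size=n_radiologists)
gamma = 0.3   # effect of the attribute on the random intercept
sigma = 0.5   # between-radiologist standard deviation

# Radiologist random effects: theta_j ~ N(gamma * z_j, sigma^2)
theta = gamma * z + sigma * rng.normal(size=n_radiologists)

# Two patient risk factors x_ij (say, age group and breast density)
x = rng.normal(size=(n_radiologists, n_exams, 2))
beta = np.array([0.4, -0.2])   # patient-level coefficients

# Logistic model for the false-positive rate on cancer-free exams:
# logit P(recall_ij) = x_ij . beta + theta_j
eta = x @ beta + theta[:, None]
p_fp = 1.0 / (1.0 + np.exp(-eta))

# Analogue of the first metric: a radiologist's mean false-positive
# rate minus that of a 'standard' radiologist with the same attributes
# and patient mix (i.e., with theta_j set to its mean, gamma * z_j)
eta_std = x @ beta + (gamma * z)[:, None]
excess_fp = p_fp.mean(axis=1) - (1.0 / (1.0 + np.exp(-eta_std))).mean(axis=1)
```

In the paper the corresponding quantities are estimated from posterior draws of the Bayesian hierarchical model rather than simulated; the sketch only shows how the random intercept and the attribute regression fit together, and how the 'standard radiologist' comparison controls for both attributes and case mix.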


Year:  2007        PMID: 16847870      PMCID: PMC3152258          DOI: 10.1002/sim.2633

Source DB:  PubMed          Journal:  Stat Med        ISSN: 0277-6715            Impact factor:   2.373


References:  9 in total

1.  Physician clinical performance assessment: prospects and barriers.

Authors:  Bruce E Landon; Sharon-Lise T Normand; David Blumenthal; Jennifer Daley
Journal:  JAMA       Date:  2003-09-03       Impact factor: 56.272

2.  Accuracy of screening mammography interpretation by characteristics of radiologists.

Authors:  William E Barlow; Chen Chi; Patricia A Carney; Stephen H Taplin; Carl D'Orsi; Gary Cutter; R Edward Hendrick; Joann G Elmore
Journal:  J Natl Cancer Inst       Date:  2004-12-15       Impact factor: 13.506

3.  Examining accuracy of screening mammography using an event order model.

Authors:  Prashni Paliwal; Alan E Gelfand; Linn Abraham; William Barlow; Joann G Elmore
Journal:  Stat Med       Date:  2006-01-30       Impact factor: 2.373

4.  Breast Cancer Surveillance Consortium: a national mammography screening and outcomes database.

Authors:  R Ballard-Barbash; S H Taplin; B C Yankaskas; V L Ernster; R D Rosenberg; P A Carney; W E Barlow; B M Geller; K Kerlikowske; B K Edwards; C F Lynch; N Urban; C A Chrvala; C R Key; S P Poplack; J K Worden; L G Kessler
Journal:  AJR Am J Roentgenol       Date:  1997-10       Impact factor: 3.959

5.  Widespread assessment of risk-adjusted outcomes: lessons from local initiatives.

Authors:  L I Iezzoni; L G Greenberg
Journal:  Jt Comm J Qual Improv       Date:  1994-06

6.  Judging hospitals by severity-adjusted mortality rates: the case of CABG surgery.

Authors:  B Landon; L I Iezzoni; A S Ash; M Shwartz; J Daley; J S Hughes; Y D Mackiernan
Journal:  Inquiry       Date:  1996       Impact factor: 1.730

7.  Individual and combined effects of age, breast density, and hormone replacement therapy use on the accuracy of screening mammography.

Authors:  Patricia A Carney; Diana L Miglioretti; Bonnie C Yankaskas; Karla Kerlikowske; Robert Rosenberg; Carolyn M Rutter; Berta M Geller; Linn A Abraham; Steven H Taplin; Mark Dignan; Gary Cutter; Rachel Ballard-Barbash
Journal:  Ann Intern Med       Date:  2003-02-04       Impact factor: 25.391

8.  The case for case-mix adjustment in practice profiling. When good apples look bad.

Authors:  S Salem-Schatz; G Moore; M Rucker; S D Pearson
Journal:  JAMA       Date:  1994-09-21       Impact factor: 56.272

9.  International variation in screening mammography interpretations in community-based programs.

Authors:  Joann G Elmore; Connie Y Nakano; Thomas D Koepsell; Laurel M Desnick; Carl J D'Orsi; David F Ransohoff
Journal:  J Natl Cancer Inst       Date:  2003-09-17       Impact factor: 13.506

Cited by:  7 in total

1.  Joint modeling of sensitivity and specificity.

Authors:  Gavino Puggioni; Alan E Gelfand; Joann G Elmore
Journal:  Stat Med       Date:  2008-05-10       Impact factor: 2.373

2.  Effect of radiologist experience on the risk of false-positive results in breast cancer screening programs.

Authors:  Raquel Zubizarreta Alberdi; Ana B Fernández Llanes; Raquel Almazán Ortega; Rubén Roman Expósito; Jose M Velarde Collado; Teresa Queiro Verdes; Carmen Natal Ramos; María Ederra Sanz; Dolores Salas Trejo; Xavier Castells Oliveres
Journal:  Eur Radiol       Date:  2011-06-04       Impact factor: 5.315

3.  Development and Assessment of a New Global Mammographic Image Feature Analysis Scheme to Predict Likelihood of Malignant Cases.

Authors:  Morteza Heidari; Seyedehnafiseh Mirniaharikandehei; Wei Liu; Alan B Hollingsworth; Hong Liu; Bin Zheng
Journal:  IEEE Trans Med Imaging       Date:  2019-10-09       Impact factor: 10.048

4.  Classification accuracy of claims-based methods for identifying providers failing to meet performance targets.

Authors:  Rebecca A Hubbard; Rhondee Benjamin-Johnson; Tracy Onega; Rebecca Smith-Bindman; Weiwei Zhu; Joshua J Fenton
Journal:  Stat Med       Date:  2014-10-10       Impact factor: 2.373

5.  Radiologists can detect the 'gist' of breast cancer before any overt signs of cancer appear.

Authors:  Patrick C Brennan; Ziba Gandomkar; Ernest U Ekpo; Kriscia Tapia; Phuong D Trieu; Sarah J Lewis; Jeremy M Wolfe; Karla K Evans
Journal:  Sci Rep       Date:  2018-06-07       Impact factor: 4.379

6.  Methods Used in Computer-Aided Diagnosis for Breast Cancer Detection Using Mammograms: A Review.

Authors:  Saleem Z Ramadan
Journal:  J Healthc Eng       Date:  2020-03-12       Impact factor: 2.682

7.  Advancing radiology through informed leadership: summary of the proceedings of the Seventh Biannual Symposium of the International Society for Strategic Studies in Radiology (IS(3)R), 23-25 August 2007.

Authors:  Ada Muellner; Gary M Glazer; Maximilian F Reiser; William G Bradley; Gabriel P Krestin; Hedvig Hricak; James H Thrall
Journal:  Eur Radiol       Date:  2009-03-11       Impact factor: 5.315

