BACKGROUND: Previous studies have shown that the agreement among radiologists interpreting a test set of mammograms is relatively low. However, data available from real-world settings are sparse. We studied mammographic examination interpretations by radiologists practicing in a community setting and evaluated whether the variability in false-positive rates could be explained by patient, radiologist, and/or testing characteristics. METHODS: We used medical records on randomly selected women aged 40-69 years who had had at least one screening mammographic examination in a community setting between January 1, 1985, and June 30, 1993. Twenty-four radiologists interpreted 8734 screening mammograms from 2169 women. Hierarchical logistic regression models were used to examine the impact of patient, radiologist, and testing characteristics. All statistical tests were two-sided. RESULTS: Radiologists varied widely in mammographic examination interpretations, with a mass noted in 0%-7.9%, calcification in 0%-21.3%, and fibrocystic changes in 1.6%-27.8% of mammograms read. False-positive rates ranged from 2.6% to 15.9%. Younger and more recently trained radiologists had higher false-positive rates. Adjustment for patient, radiologist, and testing characteristics narrowed the range of false-positive rates to 3.5%-7.9%. If a woman went to two randomly selected radiologists, her odds, after adjustment, of having a false-positive reading would be 1.5 times greater for the radiologist at higher risk of a false-positive reading, compared with the radiologist at lowest risk (95% highest posterior density interval [similar to a confidence interval] = 1.17 to 2.08). CONCLUSION: Community radiologists varied widely in their false-positive rates in screening mammograms; this variability range was reduced by half, but not eliminated, after statistical adjustment for patient, radiologist, and testing characteristics. 
These characteristics need to be considered when evaluating false-positive rates in community mammographic examination screening.
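The adjustment described above rests on a hierarchical (mixed-effects) logistic model in which each radiologist contributes a random intercept on the log-odds scale, so that between-radiologist variability in false-positive rates is captured by the spread of those intercepts. The following minimal sketch illustrates that idea only; it is not the authors' model, and the baseline rate and between-radiologist standard deviation used here are hypothetical values chosen for illustration.

```python
import math
import random

# Illustrative sketch of a radiologist-level random intercept on the
# logit scale: logit(p_j) = beta0 + b_j, with b_j ~ Normal(0, sigma^2).
# beta0 and sigma below are assumed values, not estimates from the study.

random.seed(42)

beta0 = math.log(0.05 / 0.95)  # assumed baseline false-positive rate of ~5%
sigma = 0.4                    # assumed between-radiologist SD on logit scale

def inv_logit(x):
    """Map a log-odds value back to a probability."""
    return 1.0 / (1.0 + math.exp(-x))

# Simulate 24 radiologists, matching the number in the study design.
rates = [inv_logit(beta0 + random.gauss(0.0, sigma)) for _ in range(24)]

print(f"simulated false-positive rates: {min(rates):.3f} to {max(rates):.3f}")

# The abstract's comparison of two randomly selected radiologists is an
# odds ratio; here we compute it for the most extreme simulated pair.
def odds(p):
    return p / (1.0 - p)

or_extreme = odds(max(rates)) / odds(min(rates))
print(f"odds ratio, highest vs lowest radiologist: {or_extreme:.2f}")
```

Shrinking the random intercepts toward zero (as the fitted hierarchical model does when pooling information across radiologists) is what narrows the observed 2.6%-15.9% range toward the adjusted 3.5%-7.9% range reported in the results.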
Authors: C L Christiansen; F Wang; M B Barton; W Kreuter; J G Elmore; A E Gelfand; S W Fletcher Journal: J Natl Cancer Inst Date: 2000-10-18 Impact factor: 13.506