
BI-RADS lexicon for US and mammography: interobserver variability and positive predictive value.

Elizabeth Lazarus, Martha B Mainiero, Barbara Schepps, Susan L Koelliker, Linda S Livingston.

Abstract

PURPOSE: To retrospectively evaluate interobserver variability between breast radiologists by using terminology of the fourth edition of the Breast Imaging Reporting and Data System (BI-RADS) to categorize lesions on mammograms and sonograms and to retrospectively determine the positive predictive value (PPV) of BI-RADS categories 4a, 4b, and 4c.
MATERIALS AND METHODS: Institutional review board approval was obtained; informed consent was not required. This study was HIPAA compliant. Ninety-four consecutive lesions in 91 women who underwent image-guided biopsy comprised 59 masses, 32 calcifications, and three masses with calcification. Five radiologists retrospectively reviewed these lesions. Each observer described each lesion with BI-RADS terminology and assigned a final BI-RADS category. Interobserver variability was assessed with the Cohen kappa statistic. A pathologic diagnosis was available for all 94 lesions; 30 (32%) were malignant and 64 (68%) were benign. Pathologic analysis of benign lesions was performed on tissue obtained with image-guided core-needle biopsy. In cases referred for excisional biopsy after needle biopsy because of atypia or discordance, final surgical pathologic analysis was used for correlation with imaging findings. PPV for category 4 or 5 lesions was determined for all readers combined.
RESULTS: For ultrasonographic (US) descriptors, substantial agreement was obtained for lesion orientation, shape, and boundary (kappa = 0.61, 0.66, and 0.69, respectively). Moderate agreement was obtained for lesion margin and posterior acoustic features (kappa = 0.40 for both). Fair agreement was obtained for lesion echo pattern (kappa = 0.29). For mammographic descriptors, moderate agreement was obtained for mass shape, mass margin, and calcification distribution (kappa = 0.48, 0.48, and 0.50, respectively). Fair agreement was obtained for calcification description (kappa = 0.32). Slight agreement was obtained for mass density (kappa = 0.18). Fair agreement was obtained for final assessment category (kappa = 0.28). PPVs of BI-RADS category 4 and 5 assignments were as follows: category 4a, six (6%) of 102; category 4b, 17 (15%) of 110; category 4c, 48 (53%) of 91; and category 5, 71 (91%) of 78.
CONCLUSION: Interobserver agreement with the new BI-RADS terminology is good and validates the US lexicon. Subcategories 4a, 4b, and 4c are useful in predicting the likelihood of malignancy. (c) RSNA, 2006.
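The two statistics the study relies on, the Cohen kappa for interobserver agreement and the positive predictive value (PPV) for BI-RADS category assignments, can be sketched as follows. This is an illustrative computation only: the rater labels below are made-up example data, not the study's ratings, while the PPV call uses the category 4a counts reported in the results (6 malignant of 102 biopsied).

```python
# Sketch of the agreement and predictive statistics named in the abstract.
# The rating lists are hypothetical example data.
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal category frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

def ppv(n_malignant, n_total):
    """PPV of a category = malignant lesions / all lesions given that category."""
    return n_malignant / n_total

# Hypothetical example: two readers describe the margin of six lesions.
a = ["circumscribed", "indistinct", "spiculated",
     "indistinct", "circumscribed", "spiculated"]
b = ["circumscribed", "spiculated", "spiculated",
     "indistinct", "indistinct", "spiculated"]
print(round(cohen_kappa(a, b), 2))  # prints 0.5 (moderate agreement)

# PPV of category 4a using the study's reported counts: 6 of 102.
print(round(ppv(6, 102), 2))  # prints 0.06
```

On the conventional Landis-Koch scale used in the abstract, kappa of 0.41-0.60 is "moderate", 0.61-0.80 "substantial", and 0.21-0.40 "fair", which is how the reported values map to the wording of the results.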


Year:  2006        PMID: 16569780     DOI: 10.1148/radiol.2392042127

Source DB:  PubMed          Journal:  Radiology        ISSN: 0033-8419            Impact factor:   11.105


Related articles: 126 in total

1.  US-guided diffuse optical tomography for breast lesions: the reliability of clinical experience.

Authors:  Min Jung Kim; Ji Youn Kim; Jung Hyun Youn; Myung Hyun Kim; Hye Ryoung Koo; Soo Jin Kim; Yu-Mee Sohn; Hee Jung Moon; Eun-Kyung Kim
Journal:  Eur Radiol       Date:  2011-01-28       Impact factor: 5.315

2.  The practical application of the UK 5-point scoring system for breast imaging: how standardisation of reporting supports the multidisciplinary team.

Authors:  L S Wilkinson; N T F Ridley
Journal:  Br J Radiol       Date:  2011-11       Impact factor: 3.039

3.  A comparison of logistic regression analysis and an artificial neural network using the BI-RADS lexicon for ultrasonography in conjunction with interobserver variability.

Authors:  Sun Mi Kim; Heon Han; Jeong Mi Park; Yoon Jung Choi; Hoi Soo Yoon; Jung Hee Sohn; Moon Hee Baek; Yoon Nam Kim; Young Moon Chae; Jeon Jong June; Jiwon Lee; Yong Hwan Jeon
Journal:  J Digit Imaging       Date:  2012-10       Impact factor: 4.056

4.  Individualized computer-aided education in mammography based on user modeling: concept and preliminary experiments.

Authors:  Maciej A Mazurowski; Jay A Baker; Huiman X Barnhart; Georgia D Tourassi
Journal:  Med Phys       Date:  2010-03       Impact factor: 4.071

5.  Computer-aided classification of breast masses: performance and interobserver variability of expert radiologists versus residents.

Authors:  Swatee Singh; Jeff Maxwell; Jay A Baker; Jennifer L Nicholas; Joseph Y Lo
Journal:  Radiology       Date:  2010-10-22       Impact factor: 11.105

6.  External validation of a publicly available computer assisted diagnostic tool for mammographic mass lesions with two high prevalence research datasets.

Authors:  Matthias Benndorf; Elizabeth S Burnside; Christoph Herda; Mathias Langer; Elmar Kotter
Journal:  Med Phys       Date:  2015-08       Impact factor: 4.071

7.  Lexicon for standardized interpretation of gamma camera molecular breast imaging: observer agreement and diagnostic accuracy.

Authors:  Amy Lynn Conners; Carrie B Hruska; Cindy L Tortorelli; Robert W Maxwell; Deborah J Rhodes; Judy C Boughey; Wendie A Berg
Journal:  Eur J Nucl Med Mol Imaging       Date:  2012-06       Impact factor: 9.236

8.  Observer variability in screen-film mammography versus full-field digital mammography with soft-copy reading.

Authors:  Per Skaane; Felix Diekmann; Corinne Balleyguier; Susanne Diekmann; Jean-Charles Piguet; Kari Young; Michael Abdelnoor; Loren Niklason
Journal:  Eur Radiol       Date:  2008-02-27       Impact factor: 5.315

9.  Suspicious breast calcifications undergoing stereotactic biopsy in women ages 70 and over: Breast cancer incidence by BI-RADS descriptors.

Authors:  Lars J Grimm; David Y Johnson; Karen S Johnson; Jay A Baker; Mary Scott Soo; E Shelley Hwang; Sujata V Ghate
Journal:  Eur Radiol       Date:  2016-10-17       Impact factor: 5.315

10.  Breast cancer risk prediction and mammography biopsy decisions: a model-based study.

Authors:  Katrina Armstrong; Elizabeth A Handorf; Jinbo Chen; Mirar N Bristol Demeter
Journal:  Am J Prev Med       Date:  2013-01       Impact factor: 5.043

