PURPOSE: To retrospectively compare computer-aided mammographic density estimation (MDEST) with radiologist estimates of percentage density and Breast Imaging Reporting and Data System (BI-RADS) density classification. MATERIALS AND METHODS: Institutional Review Board approval was obtained for this HIPAA-compliant study; patient informed consent requirements were waived. A fully automated MDEST computer program was used to measure breast density on digitized mammograms in 65 women (mean age, 53 years; range, 24-89 years). Pixel gray levels within detected breast borders were analyzed, and dense areas were segmented. Percentage density was calculated by dividing the number of dense pixels by the total number of pixels within the borders. Seven breast radiologists (five trained with MDEST, two not trained) prospectively assigned qualitative BI-RADS density categories and visually estimated percentage density on 260 mammograms. Qualitative BI-RADS assessments were compared with new quantitative BI-RADS standards. The reference standard density for this study was established by allowing the five trained radiologists to manipulate the MDEST gray-level thresholds, which segmented mammograms into dense and nondense areas. Statistical tests performed included Pearson correlation coefficients, the Bland-Altman agreement method, kappa statistics, and unpaired t tests. RESULTS: There was a close correlation between the reference standard and radiologist-estimated density (R = 0.90-0.95) and MDEST density (R = 0.89). Untrained radiologists overestimated percentage density by an average of 37%, versus 6% for trained radiologists (P < .001). MDEST showed better agreement with the reference standard (average overestimate, 1%; range, -15% to +18%). MDEST correlated better with percentage density than with qualitative BI-RADS categories. There were large overlaps and ranges of percentage density in qualitative BI-RADS categories 2-4.
Qualitative BI-RADS categories correlated poorly with new quantitative BI-RADS categories, and 16 (6%) of 260 views were erroneously classified by MDEST. CONCLUSION: MDEST compared favorably with radiologist estimates of percentage density and is more reproducible than radiologist estimates when qualitative BI-RADS density categories are used. Qualitative and quantitative BI-RADS density assessments differed markedly. © RSNA, 2006.
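The percentage-density computation described in the Materials and Methods section can be sketched in a few lines. This is an illustrative implementation only, not the actual MDEST program: the threshold value, function name, and synthetic image below are assumptions, and MDEST's true segmentation of dense areas is more sophisticated than a single global gray-level cutoff.

```python
# Hypothetical sketch of the percentage-density calculation described in the
# abstract: segment dense pixels by gray-level threshold within the detected
# breast border, then divide the dense-pixel count by the total pixel count
# inside the border. Threshold and image values are illustrative only.
import numpy as np

def percent_density(image: np.ndarray, border_mask: np.ndarray, threshold: float) -> float:
    """Percentage density = dense pixels / total pixels within the breast border."""
    breast_pixels = image[border_mask]          # pixels inside the detected border
    dense = np.count_nonzero(breast_pixels >= threshold)  # segmented dense area
    return 100.0 * dense / breast_pixels.size

# Illustrative usage on a synthetic 4x4 "mammogram"
img = np.array([[10, 200, 220, 15],
                [30, 210, 205, 25],
                [12, 190, 215, 20],
                [18,  22,  28, 14]], dtype=float)
mask = np.ones_like(img, dtype=bool)  # treat the whole image as breast area
print(percent_density(img, mask, threshold=180.0))  # 6 of 16 pixels dense -> 37.5
```

In the study itself, the reference standard was obtained by letting trained radiologists interactively adjust this kind of gray-level threshold rather than fixing it in advance.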