E Giannotti1, S Waugh2, L Priba3, Z Davis4, E Crowe5, S Vinnicombe6. 1. Breast Imaging Department, Ninewells Hospital and Medical School, Dundee DD1 9SY, UK. Electronic address: ytteb84@hotmail.com. 2. Department of Medical Physics, Ninewells Hospital and Medical School, Dundee DD1 9SY, UK; Department of Clinical Radiology, Ninewells Hospital and Medical School, Dundee DD1 9SY, UK. Electronic address: shelley.waugh@nhs.net. 3. Department of Medical Physics, Ninewells Hospital and Medical School, Dundee DD1 9SY, UK. Electronic address: lpriba@nhs.net. 4. Breast Imaging Department, Ninewells Hospital and Medical School, Dundee DD1 9SY, UK. Electronic address: zoedavis@doctors.org.uk. 5. Department of Clinical Radiology, Ninewells Hospital and Medical School, Dundee DD1 9SY, UK. Electronic address: e.crowe@nhs.net. 6. Division of Imaging and Technology, Ninewells Hospital and Medical School, University of Dundee, Dundee DD1 9SY, UK. Electronic address: s.vinnicombe@dundee.ac.uk.
Abstract
PURPOSE: Apparent diffusion coefficient (ADC) measurements are increasingly used to assess breast cancer response to neoadjuvant chemotherapy, although few data exist on the reproducibility of ADC measurement. The purpose of this work was to investigate and characterise the magnitude of the errors in ADC measures that may be encountered in such follow-up studies: namely scanner stability, scan-scan reproducibility, inter- and intra-observer variability, and the most reproducible measurement of ADC.
METHODS: Institutional Review Board approval was obtained for the prospective study of healthy volunteers, and written consent was acquired for the retrospective study of patient images. All scanning was performed on a 3.0-T MRI scanner. Scanner stability was assessed weekly over 12 weeks using an ice-water phantom. Inter-scan repeatability was assessed across two scans of 10 healthy volunteers (26-61 years; mean: 44.7 years). Inter- and intra-reader repeatability was measured in 52 carcinomas from clinical patients (29-70 years; mean: 50.0 years) by measuring the whole-tumor ADC value on the single slice with maximum tumor diameter (ADCS) and the ADC value of a small region of interest (ROI) on the same slice (ADCmin). Repeatability was assessed using intraclass correlation coefficients (ICC) and coefficients of repeatability (CoR).
RESULTS: Scanner stability contributed 6% error to phantom ADC measurements (0.071×10⁻³ mm²/s; mean ADC = 1.089×10⁻³ mm²/s). The measured scan-scan CoR in the volunteers was 0.122×10⁻³ mm²/s, contributing an error of 8% to the mean measured values (ADCscan1 = 1.529×10⁻³ mm²/s; ADCscan2 = 1.507×10⁻³ mm²/s). Technical and clinical observers demonstrated excellent intra-observer repeatability (ICC > 0.9). Clinical observer CoR values were marginally better than those of the technical observer (ADCS: 0.035×10⁻³ mm²/s vs. 0.097×10⁻³ mm²/s; ADCmin: 0.090×10⁻³ mm²/s vs. 0.114×10⁻³ mm²/s). Inter-reader ICC values were good for ADCS (0.864) and fair for ADCmin (0.677); the corresponding CoR values were 0.202×10⁻³ mm²/s and 0.264×10⁻³ mm²/s, respectively.
CONCLUSIONS: Both scanner stability and scan-scan variation have minimal influence on breast ADC measurements, each contributing less than 10% error to average measured ADC values. Measurement of ADC from a small ROI introduces greater variability than measurement of ADC across the whole visible tumor on one slice. The greatest source of error in follow-up studies is likely to be measurements made by multiple observers, and this should be considered where multiple measures are required to assess response to treatment.
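The coefficient of repeatability reported above can be illustrated with a short sketch. One common definition (following the Bland-Altman approach) is 1.96 × the standard deviation of the paired differences between repeat measurements; two repeats on the same subject are then expected to differ by less than the CoR about 95% of the time. The ADC values below are illustrative placeholders, not data from this study.

```python
import numpy as np

# Paired ADC measurements (units: x10^-3 mm^2/s) from two scans of the
# same subjects. These values are invented for illustration only.
scan1 = np.array([1.52, 1.48, 1.60, 1.55, 1.43, 1.58, 1.50, 1.47, 1.62, 1.49])
scan2 = np.array([1.50, 1.51, 1.57, 1.52, 1.46, 1.55, 1.53, 1.45, 1.59, 1.51])

diffs = scan1 - scan2

# Bland-Altman coefficient of repeatability: 1.96 x SD of the paired
# differences (sample SD, ddof=1).
cor = 1.96 * np.std(diffs, ddof=1)

# Express the CoR as a percentage of the overall mean ADC, as done in
# the abstract when quoting 6% and 8% error contributions.
mean_adc = np.mean(np.concatenate([scan1, scan2]))
percent_error = 100 * cor / mean_adc

print(f"CoR = {cor:.3f} x10^-3 mm^2/s ({percent_error:.1f}% of mean ADC)")
```

With repeatable data such as this, the CoR is small relative to the mean ADC, mirroring the study's finding that scan-scan variation contributes under 10% error.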