Authors: Stephen F Kry (1), Andrea Molineu (2), James R Kerns (3), Austin M Faught (3), Jessie Y Huang (3), Kiley B Pulliam (3), Jackie Tonigan (3), Paola Alvarez (2), Francesco Stingo (4), David S Followill (3)
Affiliations:
1. Imaging and Radiation Oncology Core at Houston, Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas. Electronic address: sfkry@mdanderson.org.
2. Imaging and Radiation Oncology Core at Houston, Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas.
3. Imaging and Radiation Oncology Core at Houston, Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas; The University of Texas Health Science Center Houston, Graduate School of Biomedical Sciences, Houston, Texas.
4. The University of Texas Health Science Center Houston, Graduate School of Biomedical Sciences, Houston, Texas; Department of Biostatistics, The University of Texas MD Anderson Cancer Center, Houston, Texas.
Abstract
PURPOSE: To determine whether in-house patient-specific intensity modulated radiation therapy quality assurance (IMRT QA) results predict Imaging and Radiation Oncology Core (IROC)-Houston phantom results.
METHODS AND MATERIALS: IROC Houston's IMRT head and neck phantoms have been irradiated by numerous institutions as part of clinical trial credentialing. We retrospectively compared these phantom results with those of in-house IMRT QA (following each institution's clinical process) for 855 irradiations performed between 2003 and 2013. The sensitivity and specificity of IMRT QA to detect unacceptable or acceptable plans were determined relative to the IROC Houston phantom results. Additional analyses evaluated specific IMRT QA dosimeters and analysis methods.
RESULTS: IMRT QA universally showed poor sensitivity relative to the head and neck phantom, that is, poor ability to predict a failing IROC Houston phantom result. Depending on how the IMRT QA results were interpreted, overall sensitivity ranged from 2% to 18%. For different IMRT QA methods, sensitivity ranged from 3% to 54%. Although the observed sensitivity was particularly poor at clinical thresholds (e.g., 3% dose difference or 90% of pixels passing gamma), receiver operating characteristic analysis indicated that no threshold showed good sensitivity and specificity for the devices evaluated.
CONCLUSIONS: IMRT QA is not a reasonable replacement for a credentialing phantom. Moreover, the particularly poor agreement between IMRT QA and the IROC Houston phantoms highlights surprising inconsistency in the QA process.
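The sensitivity and specificity figures above can be understood with a minimal sketch: treating each phantom irradiation as the reference standard, sensitivity is the fraction of phantom failures that in-house QA also flagged, and specificity is the fraction of phantom passes that QA also passed. The function and the sample data below are purely illustrative (hypothetical values, not the study's data).

```python
# Illustrative sketch of the study's comparison metric (hypothetical data):
# sensitivity = P(QA flags failure | phantom failed),
# specificity = P(QA passes | phantom passed).

def sensitivity_specificity(qa_fail, phantom_fail):
    """qa_fail, phantom_fail: parallel lists of booleans, one per irradiation."""
    pairs = list(zip(qa_fail, phantom_fail))
    tp = sum(q and p for q, p in pairs)            # QA flagged a true phantom failure
    fn = sum((not q) and p for q, p in pairs)      # QA missed a phantom failure
    tn = sum((not q) and (not p) for q, p in pairs)  # both passed
    fp = sum(q and (not p) for q, p in pairs)      # QA flagged a phantom pass
    sens = tp / (tp + fn) if (tp + fn) else float("nan")
    spec = tn / (tn + fp) if (tn + fp) else float("nan")
    return sens, spec

# Hypothetical example: QA flags only 1 of 4 phantom failures.
qa      = [True, False, False, False, False, True]
phantom = [True, True,  True,  True,  False, False]
sens, spec = sensitivity_specificity(qa, phantom)  # sens = 0.25, spec = 0.5
```

Sweeping the QA pass/fail threshold (e.g., the gamma passing-rate cutoff) and recomputing these two quantities at each cutoff is what the abstract's receiver operating characteristic analysis does.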