Wenchao Han1,2,3, Carol Johnson1, Andrew Warner1, Mena Gaed4, Jose A Gomez4, Madeleine Moussa4, Joseph Chin5,6, Stephen Pautler5,6, Glenn Bauman2,3,5, Aaron D Ward1,2,3,5. 1. Baines Imaging Research Laboratory, London Regional Cancer Program, London, Canada. 2. Lawson Health Research Institute, London, Ontario, Canada. 3. Western University, Department of Medical Biophysics, London, Ontario, Canada. 4. Western University, Department of Pathology and Laboratory Medicine, London, Ontario, Canada. 5. Western University, Department of Oncology, London, Ontario, Canada. 6. Western University, Department of Surgery, London, Ontario, Canada.
Abstract
Purpose: Automatic cancer detection on radical prostatectomy (RP) sections facilitates graphical and quantitative surgical pathology reporting, which can potentially benefit postsurgery follow-up care and treatment planning. It can also support imaging validation studies using a histologic reference standard, as well as pathology research studies. This problem is challenging due to the large size of digital histopathology whole-mount whole-slide images (WSIs) of RP sections and staining variability across different WSIs. Approach: We proposed a calibration-free adaptive thresholding algorithm, which compensates for staining variability and yields consistent tissue component maps (TCMs) of the nuclei, lumina, and other tissues. We used and compared three machine learning methods for classifying each region of interest (ROI) throughout each WSI as cancer versus noncancer: (1) conventional machine learning methods with 14 texture features extracted from TCMs, (2) transfer learning with pretrained AlexNet fine-tuned with TCM ROIs, and (3) transfer learning with pretrained AlexNet fine-tuned with raw image ROIs. Results: The three methods yielded areas under the receiver operating characteristic curve of 0.96, 0.98, and 0.98, respectively, in leave-one-patient-out cross validation using 1.3 million ROIs from 286 mid-gland whole-mount WSIs from 68 patients. Conclusion: Transfer learning with TCMs demonstrated state-of-the-art overall performance and was more stable with respect to sample size across different tissue types. For the tissue types involving Gleason 5 (most aggressive) cancer, it achieved the best performance of the tested methods. Upon further multicenter validation, this tool can be translated into the clinical workflow to assist graphical and quantitative pathology reporting for surgical specimens.