Literature DB >> 32391430

An automated tissue-to-diagnosis pipeline using intraoperative stimulated Raman histology and deep learning.

Todd C Hollon, Daniel A Orringer.

Abstract

We recently developed and validated a bedside tissue-to-diagnosis pipeline using stimulated Raman histology (SRH), a label-free optical imaging method, and deep convolutional neural networks (CNNs) in a prospective clinical trial. Our CNN learned a hierarchy of interpretable histologic features found in the most common brain tumors and was able to accurately segment cancerous regions in SRH images.
© 2020 The Author(s). Published with license by Taylor & Francis Group, LLC.

Keywords:  Stimulated Raman histology; convolutional neural networks; deep learning; label-free imaging

Year:  2020        PMID: 32391430      PMCID: PMC7199763          DOI: 10.1080/23723556.2020.1736742

Source DB:  PubMed          Journal:  Mol Cell Oncol        ISSN: 2372-3556


Conventional methods for intraoperative tissue diagnosis have remained largely unchanged for nearly 100 years in surgical oncology.[1] Standard light microscopy is used in combination with hematoxylin and eosin (H&E) staining to provide image contrast for interpretation by a clinical pathologist. The conventional method for intraoperative histology is cumbersome, requiring tissue transport to a remote pathology laboratory, processing by skilled laboratory technicians, and interpretation by a board-certified pathologist. Each of these steps is a potential barrier to providing efficient, reproducible, and accurate intraoperative cancer diagnosis. Moreover, there is currently a discrepancy between the number of board-certified pathologists and the number of medical centers performing cancer surgery.[2] In the setting of neurosurgical oncology, this imbalance is expected to increase in the coming years, with up to 40% of neuropathology fellowship positions remaining vacant.[3] To augment the existing intraoperative pathology workflow and address the contracting workforce, we recently developed a parallel diagnostic pipeline that combines fiber-laser-based optical imaging and deep learning to provide near real-time brain tumor diagnosis.[4] Stimulated Raman histology (SRH) is a label-free optical imaging method that provides submicron-resolution images of fresh, unprocessed biological tissues.
SRH uses the intrinsic vibrational properties of biological macromolecules (e.g., proteins, lipids, nucleic acids) to generate image contrast.[5] We have previously shown that SRH is able to capture classic diagnostic image features seen in brain tumors (e.g., microvascular proliferation in glioblastoma, glandular formation in metastatic adenocarcinoma), in addition to histologic findings not seen in conventional H&E histology, including myelin-rich axons and lipid droplets.[2] Because SRH requires no tissue processing, image interpretation is not complicated by the freezing and sectioning artifacts that result from conventional processing in intraoperative H&E histology. SRH images are natively digital, and we previously demonstrated that SRH is ideally suited for computer-augmented diagnostic techniques.[2,6] Our previous methods required manual feature engineering due to limited data size. Advances in computer vision for classification tasks have demonstrated that explicit feature engineering can result in decreased accuracy.[7] Therefore, armed with 2.5 million SRH images, we aimed to train a CNN with a trainable feature extractor for optimal performance. Human-level accuracy has been achieved on simulated diagnostic tasks with deep neural networks across several clinical domains, including ophthalmology,[8] radiology,[9] and dermatology.[10] Our pipeline consisted of three steps: 1) image acquisition, 2) image processing, and 3) CNN diagnostic prediction (Figure 1). First, an unprocessed surgical specimen is passed off the surgical field and a small sample (e.g., 3 mm3) is compressed onto a customized microscope slide. Images are then acquired at two Raman shifts, 2,845 cm−1 and 2,930 cm−1. Second, a dense sliding window algorithm is used to generate semi-overlapping, high-resolution, high-magnification SRH patches used at both training and inference.
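The dense sliding-window patch generation described above can be sketched as follows. The patch size and stride here are illustrative assumptions, not the values used in the actual pipeline; a stride smaller than the patch size is what produces the semi-overlapping patches.

```python
import numpy as np

def extract_patches(image, patch_size=300, stride=100):
    """Dense sliding-window patch generation from a whole-slide image.

    Returns the stacked patches and their top-left (row, col) coordinates.
    With stride < patch_size, adjacent patches overlap.
    """
    h, w = image.shape[:2]
    patches, coords = [], []
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            patches.append(image[y:y + patch_size, x:x + patch_size])
            coords.append((y, x))
    return np.stack(patches), coords

# Example: a 1,000 x 1,000 px image yields an 8 x 8 grid of patches.
img = np.zeros((1000, 1000, 3), dtype=np.float32)
patches, coords = extract_patches(img)
print(patches.shape)  # (64, 300, 300, 3)
```

The same generator serves both training (patches as CNN inputs) and inference (coordinates retained so patch predictions can be mapped back onto the slide).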
We implemented and trained the benchmarked Inception-ResNet-v2 architecture with randomly initialized weights to classify 13 histologic classes.
Figure 1.

The SRH-CNN pipeline for automated intraoperative brain tumor diagnosis. (1) A patient newly diagnosed with a brain lesion undergoes a brain biopsy or planned resection. Fresh tissue is loaded directly into a stimulated Raman histology (SRH) imager for image acquisition. Images are acquired at two Raman shifts, 2,845  cm−1 and 2,930  cm−1, and a third image channel is generated via pixel-wise subtraction. Time to acquire a 1 × 1-mm2 SRH image is approximately 2 min. (2) A dense sliding window algorithm generates image patches that are preprocessed to optimize image contrast. (3) Each patch undergoes a feedforward pass through the network. Our inference algorithm is designed to retain the patches with high probability of being diagnostic, filtering the regions that are normal or nondiagnostic. Patch-level predictions from tumor regions are then summed and renormalized to generate a patient-level probability distribution over the diagnostic classes. Our pipeline can provide tissue diagnoses in <2.5 min using a 1 × 1-mm2 image, decreasing time to diagnosis by a factor of 10 compared with conventional intraoperative histology.

MRI, magnetic resonance imaging; H&E, hematoxylin and eosin; CNN, convolutional neural network.
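The patch-filtering and renormalization step described in the caption can be sketched as below. The class layout (which indices count as normal/nondiagnostic), the retention threshold, and the function name are assumptions for illustration, not the pipeline's actual parameters.

```python
import numpy as np

# Hypothetical class layout: indices 0-1 are normal/nondiagnostic;
# the remaining indices are tumor classes (the real model had 13 classes).
NONDIAGNOSTIC = {0, 1}

def patient_level_prediction(patch_probs, diag_threshold=0.5):
    """Aggregate patch-level softmax outputs into a patient-level
    probability distribution: retain patches likely to be diagnostic,
    sum their tumor-class probabilities, and renormalize.
    """
    n_classes = patch_probs.shape[1]
    tumor_idx = [i for i in range(n_classes) if i not in NONDIAGNOSTIC]
    # Keep only patches whose total tumor-class probability is high.
    tumor_mass = patch_probs[:, tumor_idx].sum(axis=1)
    diagnostic = patch_probs[tumor_mass >= diag_threshold]
    if diagnostic.size == 0:
        return None  # no diagnostic tumor tissue detected
    summed = diagnostic[:, tumor_idx].sum(axis=0)
    return summed / summed.sum()  # renormalized over tumor classes
```

For example, with two patches over four classes, a patch dominated by the normal class is dropped and the remaining patch's tumor probabilities are renormalized to sum to one.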

To rigorously test the diagnostic performance of our trained CNN, we performed a multicenter, prospective, non-inferiority, randomized clinical trial comparing conventional H&E histology with pathologist interpretation (control arm) versus SRH with CNN-based interpretation (experimental arm). A total of 278 patients from three tertiary medical centers were included in the study. Overall diagnostic accuracy was 93.9% in the control arm and 94.6% in the SRH-CNN arm, demonstrating that our parallel tissue-to-diagnosis pipeline was noninferior to the current standard of care. Additionally, diagnostic errors were mutually exclusive; errors made by study pathologists in the control arm were correctly classified by the SRH-CNN pipeline, and vice versa.
These results indicate that the two diagnostic pathways are complementary and that the combination of human expertise and artificial intelligence has the potential to improve intraoperative decision making in surgical oncology. We used neuron activation maximization, a method to qualitatively evaluate the learned latent representations of deep neural networks, to improve the interpretability of our trained CNN. This revealed a hierarchy of SRH feature representations, with increasingly complex cytologic and histoarchitectural structures being detected in higher layers of our CNN. Myelinated axons, high nuclear-to-cytoplasmic ratios, lipid droplets, pleomorphism, and chromatin organization were differentially detected across brain tumor subtypes. These findings demonstrate that our CNN learned the diagnostic importance of specific histomorphologic, cytologic, and nuclear features classically used by pathologists to diagnose brain cancer.

Finally, we developed an SRH semantic segmentation method designed to identify the diagnostic and cancerous regions within whole-slide SRH images. By using a semi-overlapping, sliding window method for patch generation, every pixel in an SRH image has a probability distribution over the diagnostic classes that is a function of the local overlapping patch-level predictions. A red-green-blue transparency, indicating tumor tissue, normal/non-neoplastic tissue, and nondiagnostic regions, respectively, allows pixel-level CNN predictions to be overlaid on the image. Our semantic segmentation method achieved a mean intersection-over-union value of 61.6 ± 28.6 for the ground truth diagnostic class and 86.0 ± 19.2 for the tumor inference class for patients in our prospective study cohort. In our study, we demonstrated how combining SRH with deep neural networks can be used to rapidly evaluate fresh surgical specimens and provide intraoperative brain tumor diagnosis.
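A minimal sketch of the pixel-level aggregation and the intersection-over-union metric follows, assuming simple averaging of the predictions of all patches overlapping each pixel; the exact weighting scheme in the original method may differ.

```python
import numpy as np

def pixelwise_probabilities(patch_probs, coords, patch_size, image_shape):
    """Per-pixel class probabilities as the mean of the predictions of
    every patch covering that pixel (overlapping-patch averaging).
    """
    n_classes = patch_probs.shape[1]
    acc = np.zeros(image_shape + (n_classes,))
    counts = np.zeros(image_shape)
    for p, (y, x) in zip(patch_probs, coords):
        acc[y:y + patch_size, x:x + patch_size] += p
        counts[y:y + patch_size, x:x + patch_size] += 1
    return acc / np.maximum(counts, 1)[..., None]

def intersection_over_union(pred_mask, true_mask):
    """IoU between a predicted and a ground-truth binary mask."""
    inter = np.logical_and(pred_mask, true_mask).sum()
    union = np.logical_or(pred_mask, true_mask).sum()
    return inter / union if union else 1.0
```

Thresholding or argmax-ing the per-pixel probabilities yields the tumor/normal/nondiagnostic masks that drive the red-green-blue overlay, and IoU against pathologist-annotated ground truth gives the reported segmentation scores.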
Our pipeline provides a validated means of delivering expert-level intraoperative diagnosis where neuropathology resources are scarce, and augmenting diagnostic accuracy in resource-rich centers. The workflow allows surgeons to access histologic data in near real-time, enabling more seamless use of histology to inform surgical decision-making based on microscopic evaluation of intraoperative specimens.
  7 in total

1.  Pathologist workforce in the United States: I. Development of a predictive model to examine factors influencing supply.

Authors:  Stanley J Robboy; Sally Weintraub; Andrew E Horvath; Bradden W Jensen; C Bruce Alexander; Edward P Fody; James M Crawford; Jimmy R Clark; Julie Cantor-Weinberg; Megha G Joshi; Michael B Cohen; Michael B Prystowsky; Sarah M Bean; Saurabh Gupta; Suzanne Z Powell; V O Speights; David J Gross; W Stephen Black-Schaffer
Journal:  Arch Pathol Lab Med       Date:  2013-06-05       Impact factor: 5.534

2.  Automated deep-neural-network surveillance of cranial images for acute neurologic events.

Authors:  Joseph J Titano; Marcus Badgeley; Javin Schefflein; Margaret Pain; Andres Su; Michael Cai; Nathaniel Swinburne; John Zech; Jun Kim; Joshua Bederson; J Mocco; Burton Drayer; Joseph Lehar; Samuel Cho; Anthony Costa; Eric K Oermann
Journal:  Nat Med       Date:  2018-08-13       Impact factor: 53.440

3.  Development and Validation of a Deep Learning Algorithm for Detection of Diabetic Retinopathy in Retinal Fundus Photographs.

Authors:  Varun Gulshan; Lily Peng; Marc Coram; Martin C Stumpe; Derek Wu; Arunachalam Narayanaswamy; Subhashini Venugopalan; Kasumi Widner; Tom Madams; Jorge Cuadros; Ramasamy Kim; Rajiv Raman; Philip C Nelson; Jessica L Mega; Dale R Webster
Journal:  JAMA       Date:  2016-12-13       Impact factor: 56.272

4.  Rapid Intraoperative Diagnosis of Pediatric Brain Tumors Using Stimulated Raman Histology.

Authors:  Todd C Hollon; Spencer Lewis; Balaji Pandian; Yashar S Niknafs; Mia R Garrard; Hugh Garton; Cormac O Maher; Kathryn McFadden; Matija Snuderl; Andrew P Lieberman; Karin Muraszko; Sandra Camelo-Piragua; Daniel A Orringer
Journal:  Cancer Res       Date:  2017-11-01       Impact factor: 12.701

5.  Dermatologist-level classification of skin cancer with deep neural networks.

Authors:  Andre Esteva; Brett Kuprel; Roberto A Novoa; Justin Ko; Susan M Swetter; Helen M Blau; Sebastian Thrun
Journal:  Nature       Date:  2017-01-25       Impact factor: 49.962

6.  Label-free biomedical imaging with high sensitivity by stimulated Raman scattering microscopy.

Authors:  Christian W Freudiger; Wei Min; Brian G Saar; Sijia Lu; Gary R Holtom; Chengwei He; Jason C Tsai; Jing X Kang; X Sunney Xie
Journal:  Science       Date:  2008-12-19       Impact factor: 47.728

7.  Near real-time intraoperative brain tumor diagnosis using stimulated Raman histology and deep neural networks.

Authors:  Todd C Hollon; Balaji Pandian; Arjun R Adapa; Esteban Urias; Akshay V Save; Siri Sahib S Khalsa; Daniel G Eichberg; Randy S D'Amico; Zia U Farooq; Spencer Lewis; Petros D Petridis; Tamara Marie; Ashish H Shah; Hugh J L Garton; Cormac O Maher; Jason A Heth; Erin L McKean; Stephen E Sullivan; Shawn L Hervey-Jumper; Parag G Patil; B Gregory Thompson; Oren Sagher; Guy M McKhann; Ricardo J Komotar; Michael E Ivan; Matija Snuderl; Marc L Otten; Timothy D Johnson; Michael B Sisti; Jeffrey N Bruce; Karin M Muraszko; Jay Trautman; Christian W Freudiger; Peter Canoll; Honglak Lee; Sandra Camelo-Piragua; Daniel A Orringer
Journal:  Nat Med       Date:  2020-01-06       Impact factor: 53.440

