David Reinecke, Niklas von Spreckelsen, Christian Mawrin, Adrian Ion-Margineanu, Gina Fürtjes, Stephanie T Jünger, Florian Khalid, Christian W Freudiger, Marco Timmer, Maximilian I Ruge, Roland Goldbrunner, Volker Neuschmelting.
Abstract
Determining the presence of tumor in biopsies and guiding decision-making during resections often depend on intraoperative rapid frozen-section histopathology. Recently, stimulated Raman scattering microscopy has been introduced to rapidly generate digital hematoxylin-and-eosin-stained-like images (stimulated Raman histology) for intraoperative analysis. To enable intraoperative prediction of tumor presence, we aimed to develop a new deep residual convolutional neural network in an automated pipeline and tested its validity. In a monocentric prospective clinical study of 94 patients undergoing biopsy or brain or spinal tumor resection, stimulated Raman histology images of intraoperative tissue samples were obtained using a fiber-laser-based stimulated Raman scattering microscope. A residual network based on the ResNetV50 architecture was established and trained to predict three classes for each image: (1) tumor, (2) non-tumor, and (3) low quality. The residual network was validated on images obtained from three small random areas within the tissue samples, which were independently and blindly reviewed by a neuropathologist as ground truth. In total, 402 images derived from 132 tissue samples were analyzed, representing the entire spectrum of neurooncological surgery. The automated workflow took a mean of 240 s per case, and the residual network correctly classified tumor (305/326), non-tumorous tissue (49/67), and low-quality (6/9) images, with an inter-rater agreement of 89.6% (κ = 0.671). Excellent internal consistency was found among the random areas, with 90.2% accuracy (Cα = 0.942). In conclusion, the novel stimulated Raman histology-based residual network can reliably detect the microscopic presence of tumor and differentiate it from non-tumorous brain tissue in resection and biopsy samples within 4 min, and may pave the way for an alternative rapid intraoperative histopathological decision-making tool.
Keywords: Artificial intelligence; Brain tumor; Deep learning; Neurosurgery; Stimulated Raman histology; Tissue detection
Year: 2022 PMID: 35933416 PMCID: PMC9356422 DOI: 10.1186/s40478-022-01411-x
Source DB: PubMed Journal: Acta Neuropathol Commun ISSN: 2051-5960 Impact factor: 7.578
Fig. 1 Demonstration of the semi-automated workflow for SRH image analysis and CNN prediction of tumor and non-tumor tissue. A A squashed, unprocessed tumor margin sample from a non-small cell lung cancer brain metastasis, acquired by the surgeon to test for residual tumor remnants in the resection bed, is analyzed in the intraoperative SRH imager. B A digital H&E-like image (SRH) is created. After SRH patches (300 × 300 pixels) are generated using a sliding-window technique, each patch is passed through the residual CNN. C The final softmax layer outputs a categorical probability distribution over three classes: (I) tumor, (II) non-tumor, and (III) low quality. The patch-level prediction probabilities are then summed by a second algorithm into a single probability for each SRH image. A semantic segmentation technique that overlays CNN prediction heatmaps was also developed and applied to facilitate the qualitative identification of regions with tumor, non-tumor, and low quality. D Semi-transparent CNN prediction heatmaps were RGB color-coded (red = tumor, green = non-tumor, blue = low quality) and overlaid on the SRH image to support identification and differentiation by surgeons and neuropathologists alongside the prediction probabilities. Scale bars = 100 μm
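The patch-based workflow in Fig. 1 (tile the SRH image with a sliding window, classify each patch, then sum patch probabilities into one image-level prediction) can be sketched as follows. `predict_patch` is a hypothetical stand-in for the trained residual CNN; the patch size matches the caption, while the non-overlapping stride is an assumption for illustration.

```python
import numpy as np

PATCH, STRIDE = 300, 300  # 300 x 300 patches; stride is an assumption

def iter_patches(image: np.ndarray, size: int = PATCH, stride: int = STRIDE):
    """Yield sliding-window patches from a 2-D (or H x W x C) image."""
    h, w = image.shape[:2]
    for y in range(0, h - size + 1, stride):
        for x in range(0, w - size + 1, stride):
            yield image[y:y + size, x:x + size]

def image_level_prediction(image: np.ndarray, predict_patch) -> np.ndarray:
    """Sum patch-level softmax outputs, then renormalize to a single
    (tumor, non-tumor, low quality) probability triple for the image."""
    total = np.zeros(3)
    for patch in iter_patches(image):
        total += predict_patch(patch)
    return total / total.sum()
```

For example, a 600 × 600 image yields four non-overlapping patches, and the summed patch probabilities are renormalized into one triple for the whole image.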
Fig. 2 Stepwise semantic segmentation of SRH images into regions with tumor, non-tumor, and low quality. SRH images on the left are shown before segmentation. In the middle, probability heatmaps are shown for each output P (tumor, non-tumor, low quality). Using a sliding-window algorithm, smaller parts of the SRH images each yield a probability distribution over the outputs; summing the predictions of neighboring, overlapping patches generates a smoother overall heatmap. Each heatmap is RGB color-coded as an overlay on the SRH image. Shown are prediction heatmaps A with tumor (red) and low-quality (blue) regions from a Non-Hodgkin lymphoma specimen (* and + mark atypical cell components), B with the corresponding output classes from a non-small cell lung cancer brain metastasis specimen, and C with only tumor (red) and non-tumor (green) predictions from an IDH-wildtype glioblastoma specimen (the arrow indicates the infiltrative tumor character). Scale bars = 100 μm
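The smoothing step in Fig. 2 (overlapping patch predictions accumulated per pixel and normalized by coverage) can be sketched like this; the three accumulated channels double as the RGB overlay colors (red = tumor, green = non-tumor, blue = low quality). The function and its stride default are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def prediction_heatmap(shape, patch_preds, size=300):
    """Accumulate overlapping patch predictions into a per-pixel heatmap.

    shape:       (height, width) of the SRH image
    patch_preds: iterable of ((y, x), 3-vector of class probabilities)
                 for each patch's top-left corner
    Returns an (H, W, 3) array: mean class probability per pixel.
    """
    h, w = shape
    acc = np.zeros((h, w, 3))     # summed probabilities per pixel
    cover = np.zeros((h, w, 1))   # number of patches covering each pixel
    for (y, x), p in patch_preds:
        acc[y:y + size, x:x + size] += p
        cover[y:y + size, x:x + size] += 1
    with np.errstate(invalid="ignore", divide="ignore"):
        heat = np.where(cover > 0, acc / cover, 0.0)  # uncovered pixels -> 0
    return heat
```

Pixels covered by several overlapping patches receive the average of their predictions, which is what produces the smoother transitions visible in the figure.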
Fig. 3 Step-by-step segmentation by analyzing patch pixels. A Examples of SRH patches with a metastatic tumor area (red box), a non-tumor area with reactive astrocytes (green box), and a low-quality area without cells (blue box); SRH from a cervical squamous cell carcinoma brain metastasis. B Pixel-level probability heatmaps of each output class after each patch is passed through all three residual CNNs, shown in comparison. Small differences in classification at the patch level did not affect the overall prediction output for the whole SRH image; see the left side of the asterisk (*) in the second CNN model. C The overall probability heatmaps after summing all patch predictions, mapped as a semi-transparent overlay to assist the surgeon and neuropathologist in SRH image interpretation in addition to the residual CNNs' predictions. Scale bars = 100 μm
Overall distribution of the diagnostic classes from the residual CNNs and the independent neuropathologist (NP), with the inter-rater agreement between the CNNs' predictions and the NP
| | NP | CNN 1 | CNN 2 | CNN 3 |
|---|---|---|---|---|
| **SRH images (all), n = 592 (100%)** | | | | |
| Tumor | 462 (78.0%) | 459 (77.5%) | 456 (77.0%) | 469 (79.2%) |
| Non-tumor | 113 (19.1%) | 115 (19.4%) | 117 (19.8%) | 110 (18.6%) |
| Low quality | 17 (2.9%) | 18 (3.0%) | 19 (3.2%) | 13 (2.2%) |
| **SRH images (three random areas), n = 402 (ground truth)** | | | | |
| Tumor | 326 (100%) | 297 (91.1%) | 298 (91.4%) | 305 (95.0%) |
| Non-tumor | 67 (100%) | 46 (68.7%) | 48 (71.6%) | 49 (73.1%) |
| Low quality | 9 (100%) | 5 (55.6%) | 5 (55.6%) | 6 (66.7%) |
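The agreement statistics reported alongside this table (89.6% inter-rater agreement, κ = 0.671 against the neuropathologist's ground truth) are a simple accuracy and Cohen's kappa over per-image labels. A minimal sketch of how such a kappa is computed is shown below; the authors' actual statistical software is not specified here.

```python
import numpy as np

def cohens_kappa(rater_a, rater_b, labels):
    """Cohen's kappa: chance-corrected agreement between two raters.

    rater_a, rater_b: equal-length sequences of class labels
    labels:           the set of possible classes
    """
    a, b = np.asarray(rater_a), np.asarray(rater_b)
    p_obs = float(np.mean(a == b))                 # observed agreement
    # expected agreement under independence, from each rater's marginals
    p_exp = sum(float(np.mean(a == l)) * float(np.mean(b == l)) for l in labels)
    return (p_obs - p_exp) / (1.0 - p_exp)
```

Perfect agreement yields κ = 1, while agreement no better than chance yields κ = 0, which is why κ = 0.671 is reported in addition to the raw 89.6% agreement.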
Comparison of prediction probabilities of the three trained residual convolutional neural networks (CNNs), given as mean ± standard deviation, with intraclass correlation coefficients (ICC)
| SRH images (n = 402) mean probability (± SD) | CNN 1 | CNN 2 | CNN 3 | Reliability (ICC) |
|---|---|---|---|---|
| Tumor | 73.9 (± 33.2) | 76.9 (± 35.0) | 76.5 (± 33.7) | 0.962 (99% CI 0.953–0.969) |
| Non-tumor | 18.9 (± 33.1) | 18.0 (± 32.7) | 18.2 (± 31.6) | 0.977 (99% CI 0.973–0.981) |
| Low quality | 7.2 (± 15.2) | 5.1 (± 16.4) | 5.3 (± 15.7) | 0.914 (99% CI 0.895–0.929) |
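The ICC values in the table summarize how consistently the three CNNs assign probabilities to the same images. A common choice for this setting is the two-way random-effects single-rater form, ICC(2,1); the sketch below implements that textbook formula with numpy as an illustration, and is an assumption rather than a reproduction of the authors' statistical pipeline.

```python
import numpy as np

def icc_2_1(ratings: np.ndarray) -> float:
    """ICC(2,1), two-way random effects, absolute agreement, single rater.

    ratings: (n_subjects, k_raters) matrix, e.g. one probability per image
             (rows) from each of the three CNNs (columns).
    """
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)
    col_means = ratings.mean(axis=0)
    ssr = k * ((row_means - grand) ** 2).sum()        # between-subject SS
    ssc = n * ((col_means - grand) ** 2).sum()        # between-rater SS
    sse = ((ratings - grand) ** 2).sum() - ssr - ssc  # residual SS
    msr = ssr / (n - 1)
    msc = ssc / (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

When all raters return identical values for every subject, the statistic equals 1; values near 1, as in the table, indicate that the three CNNs' probability outputs are nearly interchangeable.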
Fig. 4 SRH image prediction ROC curves of the three trained and applied CNNs for A tumor, B non-tumor, and C low-quality tissue. ROC-AUC values were similar across the CNNs and indicate excellent diagnostic quality in accordance with the independent neuropathological review as ground truth
Fig. 5 Visual demonstration and internal consistency of the third CNN. A A specimen slide from the surgical approach, close to the tumor margin of a non-small cell lung cancer metastasis, with three random areas (A–C). B Digital H&E-like images of areas A–C. C Visualization of prediction heatmaps for each output class. D The overlaid heatmaps, with the largest, correctly classified green area corresponding to non-tumorous tissue (white matter). The asterisk and arrow in the middle image indicate small, non-significant differences between the random areas at the patch level. Scale bars = 100 μm