| Literature DB >> 32153349 |
Tahsin Kurc, Spyridon Bakas, Xuhua Ren, Aditya Bagari, Alexandre Momeni, Yue Huang, Lichi Zhang, Ashish Kumar, Marc Thibault, Qi Qi, Qian Wang, Avinash Kori, Olivier Gevaert, Yunlong Zhang, Dinggang Shen, Mahendra Khened, Xinghao Ding, Ganapathy Krishnamurthi, Jayashree Kalpathy-Cramer, James Davis, Tianhao Zhao, Rajarsi Gupta, Joel Saltz, Keyvan Farahani.
Abstract
Biomedical imaging is an important source of information in cancer research. Characterizations of cancer morphology at onset, progression, and in response to treatment provide complementary information to that gleaned from genomics and clinical data. Accurate extraction and classification of both visual and latent image features is an increasingly complex challenge due to the growing complexity and resolution of biomedical image data. In this paper, we present four deep learning-based image analysis methods from the Computational Precision Medicine (CPM) satellite event of the 21st International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2018). One method is a segmentation method designed to segment nuclei in whole slide tissue images (WSIs) of adult diffuse glioma cases. It achieved a Dice similarity coefficient of 0.868 with the CPM challenge datasets. Three methods are classification methods developed to categorize adult diffuse glioma cases into oligodendroglioma and astrocytoma classes using radiographic and histologic image data. These methods achieved accuracy values of 0.75, 0.80, and 0.90, measured as the ratio of the number of correct classifications to the total number of cases, with the challenge datasets. The evaluations of the four methods indicate that (1) carefully constructed deep learning algorithms are able to produce high accuracy in the analysis of biomedical image data and (2) the combination of radiographic and histologic image information improves classification performance.
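As a point of reference for the evaluation measures quoted above, the following is a minimal Python sketch of the Dice similarity coefficient for binary masks and of accuracy as the ratio of correct classifications to the total number of cases; the function names and toy data are illustrative and not taken from the challenge code.

```python
import numpy as np

def dice_coefficient(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

def accuracy(pred_labels, true_labels) -> float:
    """Ratio of correct classifications to the total number of cases."""
    pred = np.asarray(pred_labels)
    true = np.asarray(true_labels)
    return float((pred == true).mean())

# Toy example
pred_mask = np.array([[1, 1, 0], [0, 1, 0]])
gt_mask = np.array([[1, 0, 0], [0, 1, 1]])
print(dice_coefficient(pred_mask, gt_mask))            # 2*2 / (3+3) ~= 0.667
print(accuracy(["O", "A", "O"], ["O", "A", "A"]))      # 2 correct of 3 ~= 0.667
```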
Keywords: classification; deep learning; digital pathology; image analysis; radiology; segmentation
Year: 2020 PMID: 32153349 PMCID: PMC7046596 DOI: 10.3389/fnins.2020.00027
Source DB: PubMed Journal: Front Neurosci ISSN: 1662-453X Impact factor: 4.677
FIGURE 1. Tissue image segmentation model. The first part of the model consists of the Mask-RCNN module. Output from this module is input to the MASK-NMS module for final segmentation prediction output.
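The caption names a MASK-NMS module but does not spell out its rule. A common realization is greedy non-maximum suppression over candidate instance masks using mask-level IoU; the sketch below, including the IoU threshold, is an assumption along those lines rather than the authors' implementation.

```python
import numpy as np

def mask_iou(a: np.ndarray, b: np.ndarray) -> float:
    """Intersection-over-union of two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union > 0 else 0.0

def mask_nms(masks, scores, iou_threshold=0.5):
    """Greedy mask-level NMS: keep the highest-scoring mask and drop any
    remaining candidate that overlaps it above the IoU threshold."""
    order = np.argsort(scores)[::-1]                 # highest score first
    keep = []
    suppressed = np.zeros(len(masks), dtype=bool)
    for i in order:
        if suppressed[i]:
            continue
        keep.append(int(i))
        for j in order:
            if j != i and not suppressed[j] and mask_iou(masks[i], masks[j]) >= iou_threshold:
                suppressed[j] = True
    return keep

# Toy example: two overlapping nucleus candidates and one separate candidate
m1 = np.zeros((8, 8), dtype=np.uint8); m1[1:4, 1:4] = 1
m2 = np.zeros((8, 8), dtype=np.uint8); m2[1:4, 2:5] = 1   # overlaps m1
m3 = np.zeros((8, 8), dtype=np.uint8); m3[5:7, 5:7] = 1   # separate
print(mask_nms([m1, m2, m3], scores=[0.9, 0.7, 0.8]))      # -> [0, 2]
```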
FIGURE 2. Radiology image analysis. Images are pre-processed (i.e., skull stripping and co-registration) before they are analyzed through the remaining steps of the analysis pipeline. After the pre-processing step, tumor regions in the images are segmented via a CNN model. This step is followed by computation of a set of 105 radiomic features in segmented regions. The high-dimensional feature vector is reduced to a 16-dimensional feature vector using the principal component analysis method. A classification network is trained with these feature vectors.
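The caption describes reducing 105 radiomic features per case to 16 dimensions with principal component analysis before training a classification network. A minimal scikit-learn sketch of that reduction-and-classification step is shown below; the synthetic features, the StandardScaler step, and the small MLP classifier are illustrative assumptions, not the paper's network.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-in for 105 radiomic features computed on segmented tumor regions
X = rng.normal(size=(60, 105))           # 60 cases x 105 features
y = rng.integers(0, 2, size=60)          # 0 = astrocytoma, 1 = oligodendroglioma (toy labels)

# Standardize, reduce to 16 principal components, then train a small classifier
model = make_pipeline(
    StandardScaler(),
    PCA(n_components=16),
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0),
)
model.fit(X, y)
print(model.predict_proba(X[:3]))        # per-case class probabilities
```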
FIGURE 3. Pathology image analysis. A region-of-interest (ROI) step detects and segments tissue regions. The tissue regions are partitioned into patches. Distinct patches are filtered using the isolation forest technique. The prediction represents the probability values of the case being astrocytoma or oligodendroglioma.
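A hedged sketch of the patch-filtering step follows, using scikit-learn's IsolationForest. The caption does not specify the per-patch feature representation or whether outlier patches are discarded or retained, so the feature vectors and the choice to drop outliers below are assumptions.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Toy stand-in: one feature vector per tissue patch (e.g., summary statistics or
# CNN embeddings); the real method's exact patch features are not specified here.
patch_features = rng.normal(size=(200, 64))
patch_features[:5] += 6.0                # a few atypical ("distinct") patches

# Fit an isolation forest and keep only patches scored as inliers (+1)
iso = IsolationForest(contamination=0.05, random_state=0)
labels = iso.fit_predict(patch_features)     # +1 inlier, -1 outlier
kept = patch_features[labels == 1]
print(f"kept {len(kept)} of {len(patch_features)} patches")
```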
FIGURE 4. Combining predictions from the pathology and radiology models. A test case is analyzed by the radiology classification model and the pathology classification model. The results from the two models are processed in a confidence-based voting step, which chooses the class with the highest prediction probability value.
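A minimal sketch of confidence-based voting as described in the caption: the class with the highest prediction probability across the two models wins. The dictionary-based interface and the example probabilities are illustrative.

```python
def confidence_vote(radiology_probs, pathology_probs):
    """Confidence-based voting: return the class whose predicted probability
    is highest across both models.

    Each argument is a dict mapping class name -> predicted probability.
    """
    candidates = list(radiology_probs.items()) + list(pathology_probs.items())
    best_class, best_prob = max(candidates, key=lambda kv: kv[1])
    return best_class, best_prob

# Example: the pathology model is more confident, so its prediction wins
radiology = {"astrocytoma": 0.55, "oligodendroglioma": 0.45}
pathology = {"astrocytoma": 0.20, "oligodendroglioma": 0.80}
print(confidence_vote(radiology, pathology))   # ('oligodendroglioma', 0.8)
```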
FIGURE 5. Radiology image analysis pipeline. Radiology images are pre-processed for bias field correction, skull stripping, and co-registration before they are input to a 3D CNN. The 3D CNN is trained to output a prediction (probability) value for each case as to whether the case is oligodendroglioma (O) or astrocytoma (A).
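The caption describes a 3D CNN that maps pre-processed radiology volumes to an oligodendroglioma/astrocytoma probability. The PyTorch sketch below is a minimal stand-in; the layer sizes, the number of input modalities, and the class ordering are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class Small3DCNN(nn.Module):
    """Minimal 3D CNN producing a two-class (O vs. A) prediction per case."""
    def __init__(self, in_channels=4, num_classes=2):   # e.g., 4 MRI modalities
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),        # global pooling over the volume
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        x = self.features(x).flatten(1)
        return self.classifier(x)

# One toy case: 4 co-registered modalities in a 64^3 volume
volume = torch.randn(1, 4, 64, 64, 64)
logits = Small3DCNN()(volume)
probs = torch.softmax(logits, dim=1)
print(probs)   # [p(O), p(A)]; the class ordering here is illustrative
```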
FIGURE 6. Histopathology image analysis pipeline. The whole slide tissue images are pre-processed to detect tissue, perform color normalization, and extract tiles. The tiles are input to a DenseNet model for classification. The model outputs the probability of a case being oligodendroglioma or astrocytoma.
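A minimal PyTorch sketch of the tile-classification step: a DenseNet backbone with a two-class head applied to color-normalized tiles, with tile probabilities averaged into a case-level prediction. The densenet121 variant and the averaging rule are assumptions; the paper's exact configuration may differ.

```python
import torch
import torch.nn as nn
from torchvision import models

# DenseNet backbone with a two-class head (oligodendroglioma vs. astrocytoma).
# Uses the torchvision >= 0.13 weights API; densenet121 is an illustrative choice.
model = models.densenet121(weights=None)
model.classifier = nn.Linear(model.classifier.in_features, 2)
model.eval()

# Toy batch of tiles from one whole slide image (already color normalized,
# resized to 224x224, and converted to tensors).
tiles = torch.randn(8, 3, 224, 224)

with torch.no_grad():
    tile_probs = torch.softmax(model(tiles), dim=1)   # per-tile class probabilities
    case_probs = tile_probs.mean(dim=0)               # average over tiles -> case level
print(case_probs)
```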
FIGURE 7. Ensemble model that combines classifications from the radiology and histopathology image analysis pipelines.
FIGURE 8. The flow of the entire method. Whole slide pathology images are sliced into patches, and the effective diseased regions are extracted as much as possible. The active learning strategy follows our work in Qi et al. (2018), whose goal is to maximize learning accuracy from very limited labeled data; the classification model is updated iteratively with an increasing training set. The sliced pathology data are sent to a convolutional neural network to obtain a pathology-based prediction. The radiology data are pre-processed and sent to a U-Net and a CNN to obtain a radiology-based prediction. Finally, the two results are combined via a weighted average operation to obtain the final result.
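Two pieces of this pipeline lend themselves to short sketches: the iterative active-learning loop that grows the training set, and the weighted average that fuses the pathology and radiology predictions. The least-confidence query strategy and the fusion weights below are assumptions; the actual strategy follows Qi et al. (2018) and is not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# ---- Active-learning loop (sketch): least-confidence sampling is an assumption.
X_pool = rng.normal(size=(300, 32))
y_pool = (X_pool[:, 0] > 0).astype(int)               # toy labels
idx0 = np.where(y_pool == 0)[0][:5]
idx1 = np.where(y_pool == 1)[0][:5]
labeled = [int(i) for i in np.concatenate([idx0, idx1])]   # small initial labeled set
unlabeled = [i for i in range(len(X_pool)) if i not in labeled]

clf = LogisticRegression()
for _ in range(5):                                     # iteratively grow the training set
    clf.fit(X_pool[labeled], y_pool[labeled])
    probs = clf.predict_proba(X_pool[unlabeled])
    confidence = probs.max(axis=1)
    query = unlabeled[int(np.argmin(confidence))]      # least confident sample
    labeled.append(query)                              # "annotate" it and move it over
    unlabeled.remove(query)

# ---- Weighted average of the pathology and radiology predictions
def weighted_average(pathology_probs, radiology_probs, w_path=0.6, w_rad=0.4):
    """Final prediction as a weighted average of the two models' class probabilities.
    The weights are illustrative; the paper's weights are not given in the caption."""
    return w_path * np.asarray(pathology_probs) + w_rad * np.asarray(radiology_probs)

print(weighted_average([0.7, 0.3], [0.4, 0.6]))        # combined class probabilities
```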
FIGURE 9. The classification process for radiology images. The process aligns the images of different modalities through realignment and co-registration, extracts brain tissue through skull stripping, extracts the lesion area with a U-Net, and classifies cases with a CNN.
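The U-Net and the classification CNN themselves are not reproduced here; the sketch below only illustrates, under assumptions about the cropping convention and margin, how a predicted lesion mask might be used to extract the lesion area that is then passed to the classification CNN.

```python
import numpy as np

def crop_to_lesion(image: np.ndarray, lesion_mask: np.ndarray, margin: int = 8):
    """Crop a 2D image (or slice) to the bounding box of the predicted lesion,
    with a small margin, before passing it to the classification CNN."""
    ys, xs = np.where(lesion_mask > 0)
    if len(ys) == 0:                       # no lesion found: return the full image
        return image
    y0, y1 = max(ys.min() - margin, 0), min(ys.max() + margin + 1, image.shape[0])
    x0, x1 = max(xs.min() - margin, 0), min(xs.max() + margin + 1, image.shape[1])
    return image[y0:y1, x0:x1]

# Toy example: a 128x128 slice with a small "lesion" in the predicted mask
image = np.random.rand(128, 128)
mask = np.zeros((128, 128), dtype=np.uint8)
mask[60:80, 40:70] = 1
roi = crop_to_lesion(image, mask)
print(roi.shape)   # (36, 46): lesion bounding box plus an 8-pixel margin
```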
FIGURE 10. Segmentation results on the validation dataset. The left column shows tissue images, the middle column shows ground truth masks, and the right column shows results from the segmentation method.
Accuracy scores of the classification methods presented in Section “Methods for Classification of Brain Cancer Cases.”
| Method (section) | Accuracy |
| Section “An Approach for Classification of Low-Grade Gliomas Using Combined Radiology and Pathology Image Data” | 0.90 |
| Section “Dropout-Enabled Ensemble Learning for Multi-Scale Biomedical Image Classification” | 0.80 |
| Section “A Weighted Average-Based Classification Method” | 0.75 |