Konstantinos Zormpas-Petridis, Henrik Failmezger, Shan E Ahmed Raza, Ioannis Roxanis, Yann Jamin, Yinyin Yuan.
Abstract
Computational pathology-based cell classification algorithms are revolutionizing the study of the tumor microenvironment and can provide novel predictive/prognostic biomarkers crucial for the delivery of precision oncology. Current algorithms used on hematoxylin and eosin (H&E) slides are based on individual cell nuclei morphology with limited local context features. Here, we propose a novel multi-resolution hierarchical framework (SuperCRF), inspired by the way pathologists perceive regional tissue architecture, to improve cell classification and demonstrate its clinical applications. We develop SuperCRF by training a state-of-the-art deep learning spatially constrained convolutional neural network (SC-CNN) to detect and classify cells from 105 high-resolution (20×) H&E-stained slides of The Cancer Genome Atlas melanoma dataset and, subsequently, a conditional random field (CRF) that combines cellular neighborhood information with tumor regional classifications from lower-resolution images (5× and 1.25×) produced by a superpixel-based machine learning framework. SuperCRF led to an 11.85% overall improvement in the accuracy of the state-of-the-art SC-CNN cell classifier. Consistent with a stroma-mediated immune-suppressive microenvironment, SuperCRF demonstrated that (i) a high ratio of lymphocytes within the stromal compartment to all lymphocytes (p = 0.026) and (ii) a high ratio of stromal cells to all cells (p < 0.0001, compared with p = 0.039 for SC-CNN alone) are associated with poor survival in patients with melanoma. SuperCRF improves cell classification by introducing global and local context-based information and can be implemented in combination with any single-cell classifier. SuperCRF provides valuable tools to study the tumor microenvironment and identify predictors of survival and response to therapy.
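As a rough illustration of the abstract's central idea (a sketch only, not the paper's implementation: the cell and region label sets follow the study's categories, but the compatibility weights and energy form are hypothetical), a label predicted by a single-cell classifier can be re-scored against the region label of the superpixel the cell falls in:

```python
import numpy as np

# Toy sketch of the SuperCRF idea (hypothetical values, not the paper's
# learned CRF parameters). Cell labels: 0=cancer, 1=lymphocyte, 2=stromal,
# 3=epidermal. Region labels: 0=tumor, 1=stroma, 2=lymphocyte cluster,
# 3=epidermis.

# Illustrative penalty for a cell label appearing inside each region type
# (rows: cell label, cols: region label); lower = more compatible.
REGION_PENALTY = np.array([
    [0.0, 2.0, 2.0, 2.0],  # cancer cells expected in tumor regions
    [1.0, 1.0, 0.0, 2.0],  # lymphocytes common in lymphocyte clusters
    [2.0, 0.0, 1.0, 2.0],  # stromal cells expected in stroma
    [2.0, 2.0, 2.0, 0.0],  # epidermal cells expected in epidermis
])

def refine_labels(unary_probs, regions, weight=1.0):
    """For each cell, pick the label minimizing the unary energy
    (-log classifier probability) plus the region-compatibility
    penalty of the superpixel containing the cell."""
    energy = -np.log(unary_probs + 1e-9) + weight * REGION_PENALTY[:, regions].T
    return energy.argmin(axis=1)

# A cell the classifier weakly calls "epidermal" (cf. Figure 1F) but
# that sits inside a stroma superpixel is flipped to "stromal".
probs = np.array([[0.10, 0.10, 0.35, 0.45]])        # SC-CNN-like softmax output
print(refine_labels(probs, regions=np.array([1])))  # -> [2] (stromal)
```

The pairwise cell-to-cell terms of the full CRF are omitted here; this shows only the cross-scale cell/region coupling that distinguishes SuperCRF from a purely single-cell classifier.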
Keywords: cell classification; conditional random fields; deep learning; digital pathology; machine learning; melanoma; tumor microenvironment
Year: 2019 PMID: 31681583 PMCID: PMC6798642 DOI: 10.3389/fonc.2019.01045
Source DB: PubMed Journal: Front Oncol ISSN: 2234-943X Impact factor: 6.244
Figure 1. Overview of the SuperCRF framework for analyzing H&E-stained pathological images of melanoma. (A) Major histological features of melanoma architecture. (B) Projection of regional classification results using superpixels from various scales to the 20× magnification for the improvement of single-cell classification. (C) Graphical representation of node dependencies (cells and superpixels) across different scales. (D) Region classification scheme using a superpixel-based machine learning method in whole-slide images (5× and 1.25× magnification). (E) Single-cell classification using a state-of-the-art spatially constrained convolutional neural network (SC-CNN) classifier. (F) Representative results of the SC-CNN cell classifier alone and combined with our SuperCRF system. Note the misclassification of various stromal cells by the SC-CNN, which is corrected by our model.
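The node dependencies sketched in panel (C) can be written, in schematic form (our notation, not necessarily the paper's exact formulation), as a CRF energy over the cell labels $\mathbf{y}$ with three couplings:

```latex
E(\mathbf{y}) \;=\; \sum_{i} \psi_{u}(y_i)
\;+\; \sum_{(i,j)\in\mathcal{N}} \psi_{p}(y_i, y_j)
\;+\; \sum_{i} \psi_{r}\big(y_i, r_{s(i)}\big)
```

where $\psi_{u}$ is the unary term from the SC-CNN output for cell $i$, $\psi_{p}$ couples neighboring cells in the cellular neighborhood $\mathcal{N}$, and $\psi_{r}$ scores the compatibility of cell label $y_i$ with the region label $r_{s(i)}$ of the superpixel containing cell $i$ at the lower resolutions.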
Summary of the data used to train and test the different parts of the SuperCRF system, as well as to study the cancer-immune-stroma interface (also, see Supplementary Tables 1-4).
- Single-cell classification into four categories: cancer cells, lymphocytes, stromal cells, epidermal cells
- Region classification into five categories: tumor, normal stroma, lymphocyte cluster, normal epidermis, lumen/white space
- Region classification into four categories: tumor, normal stroma, normal epidermis, lumen/white space
- Study of the tumor-stroma interface. To accelerate the analysis, 50 tiles (2,000 × 2,000 pixels) containing tumors were randomly sampled from every whole-slide image (WSI)
Figure 2. Representative examples of both superpixel and single-cell classification with or without SuperCRF. (A) Superpixel-based regional classification on representative whole-slide images (5× magnification) of melanoma. Green: tumor area; Red: stroma area; Blue: normal epidermis; Yellow: lymphocyte cluster. (B) Representative images showing cell classification using a state-of-the-art spatially constrained convolutional neural network (SC-CNN) and four conditional random field (CRF) models. Note the mislabeling of many cancer and stromal cells as epidermis cells when using the SC-CNN, and the gradual increase in classification accuracy, with the best accuracy achieved with SuperCRF. Green: cancer cells; Red: stromal cells; Blue: lymphocytes; Yellow: epidermis cells.
Evaluation of different conditional random field (CRF) versions and a state-of-the-art spatially constrained convolutional neural network (SC-CNN) deep learning cell classifier.
| Model | Accuracy (%) | Precision | Recall |
| --- | --- | --- | --- |
| SC-CNN | 84.63 | 0.8756 | 0.8808 |
| singleCellCRF | 87.61 | 0.8973 | 0.8946 |
| CRF1.25× | 90.79 | 0.9248 | 0.9110 |
| CRF5× | 91.70 | 0.9298 | 0.9126 |
| SuperCRF | | | |
Figure 3. Associations between survival outcomes and SuperCRF-defined risk groups in The Cancer Genome Atlas (TCGA) cohort of patients with melanoma. (A) Kaplan-Meier survival curves for patients in the high-risk group (blue) and low-risk group stratified by the stromal-cell ratio derived from SuperCRF (left) and using only the SC-CNN classifier (right). Note the difference in the p-value between the two methods. (B) Kaplan-Meier survival curves for patients in the high-risk group (blue) and low-risk group classified by immune phenotype, based on the spatial distribution of lymphocytes in different tumor compartments derived from SuperCRF.