Aïcha BenTaieb, Masoud S. Nosrati, Hector Li-Chang, David Huntsman, Ghassan Hamarneh.
Abstract
CONTEXT: It has been shown that ovarian carcinoma subtypes are distinct pathologic entities with differing prognostic and therapeutic implications. Histotyping by pathologists has good reproducibility, but occasional cases are challenging and require immunohistochemistry and subspecialty consultation. Motivated by the need for more accurate and reproducible diagnoses and to facilitate pathologists' workflow, we propose an automatic framework for ovarian carcinoma classification.
Keywords: Computer-aided diagnosis; machine learning; ovarian carcinoma
Year: 2016 PMID: 27563487 PMCID: PMC4977973 DOI: 10.4103/2153-3539.186899
Source DB: PubMed Journal: J Pathol Inform
Figure 1. Patch extraction from tissue sections. (a) Tissue samples of the five main recognized ovarian carcinoma types. HGSC: High-grade serous carcinoma; EN: Endometrioid; MC: Mucinous; CC: Clear cell; LGSC: Low-grade serous carcinoma. (b) Low-resolution (×20) patch extraction. Twenty nonoverlapping patches were extracted automatically from each whole tissue slide. (c) High-resolution (×40) patch extraction. One hundred patches were extracted from each low-resolution patch.
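The two-stage patch extraction in Figure 1 amounts to tiling a slide into nonoverlapping windows and capping the count. A minimal NumPy sketch is below; the tile size and slide array are hypothetical, since the caption does not state the pixel dimensions used.

```python
import numpy as np

def extract_patches(image, patch_size, n_patches):
    """Extract up to n_patches nonoverlapping square patches by
    scanning a regular grid over the image, row by row."""
    h, w = image.shape[:2]
    patches = []
    for y in range(0, h - patch_size + 1, patch_size):
        for x in range(0, w - patch_size + 1, patch_size):
            patches.append(image[y:y + patch_size, x:x + patch_size])
            if len(patches) == n_patches:
                return patches
    return patches

# Hypothetical slide: real whole-slide images are far larger and would be
# read through a slide-reading library rather than held as one array.
slide = np.zeros((1000, 1000, 3), dtype=np.uint8)
low_res = extract_patches(slide, patch_size=200, n_patches=20)
# Each of the 20 low-resolution patches would, per the caption, in turn
# yield 100 high-resolution (×40) patches by the same tiling at finer scale.
```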
Figure 2. Overview of the proposed automatic ovarian carcinoma classification pipeline.
Figure 3. Segmentation results. (a) Segmentation output. The image is partitioned into clusters of similar color to detect the principal tissue components. We create a mask for each tissue component: nuclei (blue), cytoplasm (green), stroma (yellow), and background (white). (b) Nuclei shape analysis using ellipse fitting (green). The fitted ellipse approximates each nucleus's radius, elongation, and area.
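The caption describes two steps: clustering pixel colors into tissue-component masks, and summarizing each nucleus with a fitted ellipse. A sketch of both is below, assuming a plain k-means color clustering and moment-based ellipse fitting; the paper's exact segmentation and fitting procedures are not given in the caption, so these are stand-in implementations.

```python
import numpy as np

def kmeans_colors(pixels, k=4, iters=10):
    """Cluster (N, 3) RGB pixels into k color groups (e.g. nuclei, cytoplasm,
    stroma, background). Deterministic init: seed the centers with pixels
    spread evenly across the brightness range."""
    order = np.argsort(pixels.sum(axis=1))
    idx = np.linspace(0, len(pixels) - 1, k).astype(int)
    centers = pixels[order][idx].astype(float)
    for _ in range(iters):
        dists = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):  # skip empty clusters
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels, centers

def ellipse_stats(mask):
    """Approximate a binary nucleus mask by an ellipse via second-order
    moments; returns (major axis, minor axis, elongation, area)."""
    ys, xs = np.nonzero(mask)
    area = xs.size
    cov = np.cov(np.stack([xs - xs.mean(), ys - ys.mean()]))
    evals = np.sort(np.linalg.eigvalsh(cov))[::-1]
    major, minor = 2 * np.sqrt(evals)  # axis lengths up to a scale factor
    return major, minor, major / minor, area
```

The elongation (major/minor axis ratio) is the kind of per-nucleus shape feature such a mask analysis would feed to the classifier.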
Features used for classification
Figure 4. Gland analysis. (a) Glandular patterns characteristic of each cell type. (b) Automatic gland detection on images from tissue slides and features extracted from glands. The glandular network representing neighboring glands is formed by nodes (blue ×) corresponding to detected glands and edges linking neighboring glands (yellow lines).
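The glandular network in Figure 4b is a graph whose nodes are detected glands and whose edges link neighboring glands. A minimal sketch is below; linking glands whose centroids fall within a fixed distance is an assumed neighborhood rule, as the caption does not specify how neighbors are defined.

```python
import numpy as np

def gland_graph(centroids, max_dist):
    """Build a neighborhood graph over detected glands: nodes are gland
    centroids, and an edge (i, j) links any pair of glands whose centroids
    lie closer than max_dist (a hypothetical threshold)."""
    edges = []
    for i in range(len(centroids)):
        for j in range(i + 1, len(centroids)):
            gap = np.asarray(centroids[i]) - np.asarray(centroids[j])
            if np.linalg.norm(gap) < max_dist:
                edges.append((i, j))
    return edges

# Toy centroids: two nearby glands and one distant gland.
centroids = [(0, 0), (3, 0), (10, 10)]
edges = gland_graph(centroids, max_dist=5.0)
# → [(0, 1)]: only the two nearby glands are linked
```

Graph-level statistics (node degree, edge lengths, gland count) are the sort of features such a network would contribute to classification.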
Multi-class classification performance
Figure 5. Class confusion: multi-class classification performance. (a) Misclassified tissue samples. The first row shows sample images from the training set; the final row shows the predicted class labels. We observe a wide range of color, tissue, and staining variability in the misclassified samples. (b) Confusion matrix after leave-one-out cross-validation. Each column of the matrix represents the predicted class, while each row represents the ground-truth class.
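The confusion matrix of Figure 5b follows the standard layout the caption describes: rows index the ground-truth class, columns the predicted class. A sketch with toy labels for the five subtypes follows; the label values and counts are illustrative only, not the paper's results.

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """Accumulate counts with rows as ground truth and columns as
    predictions, matching the layout described for Figure 5b."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

# Toy labels for the five subtypes (0=HGSC, 1=EN, 2=MC, 3=CC, 4=LGSC).
y_true = [0, 0, 1, 2, 3, 4]
y_pred = [0, 1, 1, 2, 3, 4]
cm = confusion_matrix(y_true, y_pred, 5)
# Diagonal entries count correct classifications; cm[0, 1] records the one
# HGSC sample misclassified as EN.
```

Under leave-one-out cross-validation, each sample's prediction comes from a model trained on all remaining samples, and the matrix aggregates those held-out predictions.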
Binary classification accuracy results
Uncertainty of preparation per carcinoma subtype