| Literature DB >> 28931888 |
Marc Aubreville, Christian Knipfer, Nicolai Oetter, Christian Jaremenko, Erik Rodner, Joachim Denzler, Christopher Bohr, Helmut Neumann, Florian Stelzle, Andreas Maier.
Abstract
Oral Squamous Cell Carcinoma (OSCC) is a common type of cancer of the oral epithelium. Despite its high impact on mortality, screening methods for early diagnosis of OSCC often lack accuracy, so most OSCCs are diagnosed at a late stage. Early detection and accurate outline estimation of OSCCs would lead to a better curative outcome and a reduction in recurrence rates after surgical treatment. Confocal Laser Endomicroscopy (CLE) records sub-surface micro-anatomical images for in vivo cell structure analysis. Recent CLE studies showed great prospects for reliable, real-time ultrastructural imaging of OSCC in situ. We present and evaluate a novel automatic approach for OSCC diagnosis using deep learning on CLE images. The method is compared against textural-feature-based machine learning approaches that represent the current state of the art. For this work, CLE image sequences (7,894 images) from patients diagnosed with OSCC were obtained from four specific locations in the oral cavity, including the OSCC lesion. The presented approach outperforms the state of the art in CLE image recognition, with an area under the curve (AUC) of 0.96 and a mean accuracy of 88.3% (sensitivity 86.6%, specificity 90%).
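The reported figures follow the standard binary-classification definitions (accuracy, sensitivity, specificity; the AUC is computed from the ROC curve). A minimal sketch of those definitions; the function name and the example labels below are illustrative, not data from the paper:

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Accuracy, sensitivity (true-positive rate) and specificity
    (true-negative rate) from binary labels, 1 = carcinogenic."""
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    tp = np.sum(y_true & y_pred)
    tn = np.sum(~y_true & ~y_pred)
    fp = np.sum(~y_true & y_pred)
    fn = np.sum(y_true & ~y_pred)
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }

# Hypothetical toy example (5 images, 3 truly carcinogenic):
m = binary_metrics([1, 1, 1, 0, 0], [1, 1, 0, 0, 1])
```

Note that with the class imbalance in Table 1 (roughly half the images are carcinogenic), reporting sensitivity and specificity alongside accuracy, as the paper does, is the informative choice.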
Year: 2017 | PMID: 28931888 | PMCID: PMC5607286 | DOI: 10.1038/s41598-017-12320-8
Source DB: PubMed | Journal: Sci Rep | ISSN: 2045-2322 | Impact factor: 4.379
Figure 1. Left: CLE recording locations; additionally, the region of the suspected HNSCC was recorded. Right: division of the (resized) image into patches of size 80 × 80 px. Only patches that lay entirely inside the image mask and contained no artifact labels were considered for classification.
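The patch selection described in the caption can be sketched as follows. The array names `mask` and `artifacts` are assumptions (boolean maps of the circular CLE field of view and of the labeled artifact regions); the paper's exact stride and bookkeeping may differ:

```python
import numpy as np

PATCH = 80  # patch side length in px, as in Figure 1

def extract_patches(image, mask, artifacts, stride=PATCH):
    """Collect (row, col, patch) for every PATCH x PATCH window that lies
    entirely inside the image mask and overlaps no artifact label.
    `mask` and `artifacts` are boolean arrays with the image's shape."""
    h, w = image.shape[:2]
    out = []
    for r in range(0, h - PATCH + 1, stride):
        for c in range(0, w - PATCH + 1, stride):
            win = (slice(r, r + PATCH), slice(c, c + PATCH))
            if mask[win].all() and not artifacts[win].any():
                out.append((r, c, image[win]))
    return out

# Toy usage: 160 x 160 frame, full mask, one artifact pixel
# inside the top-left window, so 3 of the 4 windows survive.
img = np.zeros((160, 160))
mask = np.ones((160, 160), dtype=bool)
artifacts = np.zeros((160, 160), dtype=bool)
artifacts[10, 10] = True
patches = extract_patches(img, mask, artifacts)
```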
Number of images from the different recording regions.
| Class | Location | Total images | Good images | Percentage of final data set |
|---|---|---|---|---|
| normal | alveolar ridge | 2,133 | 1,951 | 24.71% |
| normal | inner labium | 1,327 | 1,317 | 16.68% |
| normal | hard palate | 955 | 811 | 10.27% |
| carcinogenic | various | 6,530 | 3,815 | 48.33% |
Figure 2. Overview of the CNN-based patch extraction and classification.
Figure 3. Overview of the patch probability fusion approach. Left: overlapping patches are extracted from the image and classified; the patch-level results are then fused into an image-level classification. Right: examples of color-coded image probability maps.
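The fusion step in Figure 3 can be sketched as below. Averaging patch probabilities is only one plausible fusion rule, and the per-pixel map is an assumed reconstruction of the color-coded probability maps, not the paper's exact implementation:

```python
import numpy as np

def fuse_patch_probabilities(patch_probs, threshold=0.5):
    """Fuse per-patch carcinoma probabilities into one image-level
    probability and decision by averaging (one simple fusion rule)."""
    p_image = float(np.mean(patch_probs))
    return p_image, p_image >= threshold

def probability_map(shape, patch_results, patch=80):
    """Average overlapping patch probabilities per pixel to obtain a
    probability map like the color-coded examples in Figure 3.
    `patch_results` holds (row, col, probability) triples."""
    acc = np.zeros(shape)
    cnt = np.zeros(shape)
    for r, c, p in patch_results:
        acc[r:r + patch, c:c + patch] += p
        cnt[r:r + patch, c:c + patch] += 1
    # Pixels covered by no patch stay 0.
    return np.divide(acc, cnt, out=np.zeros(shape), where=cnt > 0)

# Toy usage with three hypothetical patch probabilities:
p_img, is_carcinoma = fuse_patch_probabilities([0.9, 0.8, 0.1])
pm = probability_map((160, 160), [(0, 0, 1.0), (80, 80, 0.0)])
```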
Figure 4. Overview of the transfer learning approach, based on Szegedy's Inception v3 [38], pre-trained on the ImageNet database [39].