Qilin Sun, Chao Huang, Minjie Chen, Hui Xu, Yali Yang.
Abstract
In this paper, we describe our method for skin lesion classification. The goal is to classify skin lesions based on dermoscopic images to several diagnoses' classes presented in the HAM (Human Against Machine) dataset: melanoma (MEL), melanocytic nevus (NV), basal cell carcinoma (BCC), actinic keratosis (AK), benign keratosis (BKL), dermatofibroma (DF), and vascular lesion (VASC). We propose a simplified solution which has a better accuracy than previous methods, but only predicted on a single model that is practical for a real-world scenario. Our results show that using a network with additional metadata as input achieves a better classification performance. This metadata includes both the patient information and the extra information during the data augmentation process. On the international skin imaging collaboration (ISIC) 2018 skin lesion classification challenge test set, our algorithm yields a balanced multiclass accuracy of 88.7% on a single model and 89.5% for the embedding solution, which makes it the currently first ranked algorithm on the live leaderboard. To improve the inference accuracy. Test time augmentation (TTA) is applied. We also demonstrate how Grad-CAM is applied in TTA. Therefore, TTA and Grad-CAM can be integrated in heat map generation, which can be very helpful to assist the clinician for diagnosis.Entities:
Year: 2021 PMID: 33937410 PMCID: PMC8055397 DOI: 10.1155/2021/6673852
Source DB: PubMed Journal: Biomed Res Int Impact factor: 3.411
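The abstract states that test-time augmentation (TTA) is applied to improve inference accuracy. A minimal numpy sketch of the general TTA idea follows: run the classifier on several augmented views of one image and average the class probabilities. The flip augmentations, the 7-way toy model, and all dimensions are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

def softmax(z):
    """Convert logits to probabilities."""
    e = np.exp(z - z.max())
    return e / e.sum()

def tta_predict(model, image):
    """Average class probabilities over four simple flip views of one image."""
    views = [image, np.fliplr(image), np.flipud(image),
             np.fliplr(np.flipud(image))]
    probs = [softmax(model(v)) for v in views]
    return np.mean(probs, axis=0)

def toy_model(img):
    """Stand-in classifier: 7 logits from mean intensity of 7 horizontal bands."""
    h = img.shape[0] // 7
    return np.array([img[i * h:(i + 1) * h].mean() for i in range(7)])

rng = np.random.default_rng(0)
img = rng.random((28, 28))
p = tta_predict(toy_model, img)
print(p.shape)  # (7,) - one averaged probability per lesion class
```

Because each per-view softmax sums to 1, the averaged vector is still a valid probability distribution over the seven lesion classes.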
Figure 1Architecture of the proposed CNN model with metadata.
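Figure 1 shows a CNN that takes metadata as an additional input. A common way to realize this, sketched below in plain numpy, is to concatenate the backbone's pooled image features with an encoded metadata vector before the final classifier. The feature sizes, the example metadata fields, and the random classifier weights are assumptions for illustration; they are not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for globally pooled CNN features of one dermoscopic image.
img_features = rng.random(512)

# Encoded metadata vector, e.g. normalized age plus one-hot sex (assumed fields).
metadata = np.array([0.65, 1.0, 0.0])

# Fuse by concatenation, then classify over the 7 lesion classes.
fused = np.concatenate([img_features, metadata])      # shape (515,)
W = rng.standard_normal((7, fused.size)) * 0.01       # illustrative weights
logits = W @ fused
probs = np.exp(logits - logits.max())
probs /= probs.sum()
print(fused.shape, probs.shape)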
Figure 2The metrics of all lesion type reported on ISIC live leaderboard: (a) our single model on ISIC18, (b) our embedding model on ISIC18, (c) our single model on ISIC19, and (d) our embedding model on ISIC19.
Results of the ISIC 2018 challenge winners from the legacy leaderboard (rows 1–3) and our proposed models (rows 4–6). Among the 16,888 extra images, 15,316 are from the ISIC19 dataset, 170 from the MED-NODE dataset, 533 from the seven-point dataset, 120 from the PH2 dataset, and the remaining images are from our own collected data.
| Team/authors | Extra images | BMCA (%) | Sensitivity (%) | Specificity (%) | AUC |
|---|---|---|---|---|---|
| Nozdryn et al. | 37,807 | 88.5 | 83.3 | 98.6 | 0.983 |
| Gassert et al. | 13,475 | 85.6 | 80.9 | 98.4 | 0.987 |
| MSM-CNN | 2,912 | 86.2 | 85.6 | 97.9 | 0.987 |
| Our single model (FL) | 16,888 | 88.3 | 76.1 | 99.3 | 0.974 |
| Our single model (CE) | 16,888 | | | | |
| Our ensemble model | 16,888 | | | | |
FL: focal loss; CE: cross-entropy loss.
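The BMCA column in the tables is the balanced multiclass accuracy, i.e. the unweighted mean of per-class recall (sensitivity), so rare classes such as DF and VASC weigh as much as the dominant NV class. A small sketch with made-up labels:

```python
import numpy as np

def balanced_accuracy(y_true, y_pred, n_classes):
    """Mean of per-class recall over the classes present in y_true."""
    recalls = []
    for c in range(n_classes):
        mask = y_true == c
        if mask.any():
            recalls.append((y_pred[mask] == c).mean())
    return float(np.mean(recalls))

# Toy labels: class 0 recall 3/4, class 1 recall 2/2, class 2 recall 0/1.
y_true = np.array([0, 0, 0, 0, 1, 1, 2])
y_pred = np.array([0, 0, 0, 1, 1, 1, 0])
print(round(balanced_accuracy(y_true, y_pred, 3), 4))  # (0.75 + 1.0 + 0.0) / 3 = 0.5833
```

Note how the single missed class-2 sample drags the score down far more than plain accuracy (5/7 ≈ 0.714) would suggest, which is exactly why the challenge uses this metric on the imbalanced HAM data.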
Results of the ISIC 2019 challenge winners from the legacy leaderboard (rows 1–3) and our proposed models (rows 4–6). Among the 1,572 extra images, 170 are from the MED-NODE dataset, 533 from the seven-point dataset, 120 from the PH2 dataset, and the remaining images are from our own collected data.
| Team/authors | Extra images | BMCA (%) | Sensitivity (%) | Specificity (%) | AUC |
|---|---|---|---|---|---|
| Gassert et al. | Unknown | 63.6 | 50.7 | 97.7 | 0.923 |
| Cancerless | Unknown | 63.8 | 53.1 | 97.4 | 0.913 |
| ForCure | Unknown | 64.8 | 53.4 | 97.4 | 0.914 |
| Our single model (FL) | 1,572 | 63.9 | 48.8 | 97.9 | 0.899 |
| Our single model (CE) | 1,572 | | | | |
| Our ensemble model | 1,572 | | | | |
FL: focal loss; CE: cross-entropy loss.
Figure 3Heat map visualization using Grad-CAM with TTA. The figures are from the ISIC competition (https://challenge2019.isic-archive.com/data.html).
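The Grad-CAM-with-TTA combination in Figure 3 can be sketched as follows: compute a Grad-CAM map for each augmented view, map it back to the original image orientation by undoing the augmentation, then average. The random activation and gradient arrays below are stand-ins for what backpropagation through a CNN's last convolutional layer would produce; the shapes and the flip-only augmentation set are illustrative assumptions.

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM for one view: weight channels by spatially averaged gradients,
    sum, apply ReLU, and normalize to [0, 1]."""
    alphas = gradients.mean(axis=(1, 2))                       # channel weights
    cam = np.maximum((alphas[:, None, None] * activations).sum(axis=0), 0)
    return cam / cam.max() if cam.max() > 0 else cam

rng = np.random.default_rng(1)

# Inverse transforms mapping each view's heat map back to input orientation:
# identity for the original view, horizontal flip for the flipped view.
inverses = [lambda m: m, np.fliplr]

acc = np.zeros((14, 14))
for inv in inverses:
    A = rng.random((8, 14, 14))           # stand-in last-conv activations
    G = rng.standard_normal((8, 14, 14))  # stand-in gradients of class score
    acc += inv(grad_cam(A, G))            # un-augment before accumulating
heatmap = acc / len(inverses)
print(heatmap.shape)  # (14, 14) averaged heat map in [0, 1]
```

Averaging the re-aligned maps smooths view-specific artifacts, which is what makes the resulting heat map a more stable visual aid for the clinician than a single-view Grad-CAM.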