Abdurrahim Yilmaz, Gulsum Gencoglan, Rahmetullah Varol, Ali Anil Demircali, Meysam Keshavarz, Huseyin Uvet.
Abstract
Dermoscopy is the visual examination of the skin under a polarized or non-polarized light source. With dermoscopic equipment, many lesion patterns that are invisible under visible light can be clearly distinguished, allowing more accurate decisions regarding the treatment of skin lesions. Images collected from dermoscopes have both improved the performance of human examiners and enabled the development of deep learning models; in particular, the availability of large-scale dermoscopic datasets has made it possible to train models that classify skin lesions with high accuracy. Most dermoscopic datasets, however, contain images collected from digital dermoscopic devices, as these devices are frequently used in clinical examination, whereas dermatologists also often use non-digital hand-held (optomechanical) dermoscopes. This study presents a dataset of dermoscopic images taken with a mobile phone-attached hand-held dermoscope. Four deep learning models based on the MobileNetV1, MobileNetV2, NASNetMobile, and Xception architectures were developed to classify eight different lesion types using this dataset. The number of images in the dataset was increased with different data augmentation methods. The models were initialized with weights pre-trained on the ImageNet dataset and then fine-tuned on the presented dataset. The most successful models on the unseen test data, MobileNetV2 and Xception, reached accuracies of 89.18% and 89.64%, respectively. The results were evaluated and compared with the 5-fold cross-validation method. Our method allows automated examination of dermoscopic images taken with mobile phone-attached hand-held dermoscopes.
Keywords: deep learning; hand-held dermoscope; lightweight architectures; mobile phone; skin cancer
Year: 2022 PMID: 36079042 PMCID: PMC9457478 DOI: 10.3390/jcm11175102
Source DB: PubMed Journal: J Clin Med ISSN: 2077-0383 Impact factor: 4.964
Figure 1. Main scheme of the work, from data gathering to prediction results.
Figure 2. Classification of skin cancer lesions by groups and subgroups. Green background represents benign lesions; red background represents malignant lesions.
Lesion types, names, class numbers, and dataset sizes (training-testing-total).
| Type | Lesion Name | Class Number | Training-Testing-Total |
|---|---|---|---|
| Non-Melanocytic | Actinic | 1 | 38-10-48 |
| Non-Melanocytic | Vascular | 2 | 160-40-200 |
| Non-Melanocytic | Seborrheic | 3 | 143-36-179 |
| Non-Melanocytic | Dermatofibroma (df) | 4 | 29-7-36 |
| Non-Melanocytic | Basal Cell | 5 | 188-47-235 |
| Non-Melanocytic | Squamous Cell | 6 | 141-35-176 |
| Melanocytic | Melanoma (mel) | 7 | 124-31-155 |
| Melanocytic | Nevus (nv) | 8 | 492-123-615 |
| Total | - | - | 1315-329-1644 |
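The per-class counts in the table correspond to roughly an 80/20 stratified split. A minimal sketch of such a split (the function name and structure are illustrative, not the authors' code):

```python
import random

def stratified_split(images_by_class, test_frac=0.2, seed=42):
    """Split each class ~80/20 so the test set keeps the class balance.

    images_by_class maps a lesion label (e.g. "ak", "nv") to its images.
    """
    rng = random.Random(seed)
    train, test = [], []
    for label, items in images_by_class.items():
        items = list(items)
        rng.shuffle(items)
        n_test = round(len(items) * test_frac)  # per-class test count
        test += [(label, x) for x in items[:n_test]]
        train += [(label, x) for x in items[n_test:]]
    return train, test
```

With the class sizes from the table (48, 200, 179, 36, 235, 176, 155, 615), a 20% per-class split rounds to exactly the listed test counts and reproduces the 1315/329 training/testing totals.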
Figure 3. Data sample, data augmentation, and output samples with respect to the data augmentation settings.
Data augmentation arguments and their ranges and values.
| Settings | Values |
|---|---|
| Rotation Range | 45 |
| Zoom Range | 0.2 |
| Width Shift Range | 0.2 |
| Height Shift Range | 0.2 |
| Horizontal Flip | True |
| Vertical Flip | True |
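The settings in the table follow the argument names of Keras's `ImageDataGenerator`. A pure-numpy sketch of a subset of these transforms (flips and shifts only; rotation and zoom are omitted, and the `augment` helper is illustrative, not the authors' code):

```python
import numpy as np

def augment(img, rng):
    """Randomly flip and shift an HxWxC image, mimicking the table's
    Horizontal/Vertical Flip and 0.2 Width/Height Shift settings."""
    if rng.random() < 0.5:            # Horizontal Flip: True
        img = img[:, ::-1]
    if rng.random() < 0.5:            # Vertical Flip: True
        img = img[::-1]
    h, w = img.shape[:2]
    dy = int(rng.integers(-int(0.2 * h), int(0.2 * h) + 1))  # Height Shift
    dx = int(rng.integers(-int(0.2 * w), int(0.2 * w) + 1))  # Width Shift
    return np.roll(np.roll(img, dy, axis=0), dx, axis=1)
```

In practice the whole table maps directly onto `ImageDataGenerator(rotation_range=45, zoom_range=0.2, width_shift_range=0.2, height_shift_range=0.2, horizontal_flip=True, vertical_flip=True)`.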
Figure 4. Overview of the transfer learning process. The weights obtained on the ImageNet dataset are transferred to the convolution layers; the weights in the fully connected part are retrained. After optimization, all four deep learning models use two 128-node dense layers and one dropout layer with a 0.2 ratio as the fully connected head.
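The retrained head described in Figure 4 can be sketched as a numpy forward pass. This is a conceptual sketch only: the weights are placeholders, the 1280-dimensional input assumes MobileNetV2's pooled feature size, and the dropout position (after the second dense layer) is an assumption not stated in the caption.

```python
import numpy as np

def dense_relu(x, w, b):
    # One fully connected layer with ReLU activation.
    return np.maximum(x @ w + b, 0.0)

def head_forward(features, params, rng, train=True):
    """Dense(128) -> Dense(128) -> Dropout(0.2) -> Dense(8) softmax.

    `features` are frozen-backbone outputs; `params` holds placeholder
    weights (w1/b1, w2/b2, w3/b3)."""
    x = dense_relu(features, params["w1"], params["b1"])
    x = dense_relu(x, params["w2"], params["b2"])
    if train:  # inverted dropout with the caption's 0.2 ratio
        mask = rng.random(x.shape) >= 0.2
        x = x * mask / 0.8
    logits = x @ params["w3"] + params["b3"]  # 8-class output
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)
```

At inference time (`train=False`) the dropout branch is skipped, matching standard Keras behavior.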
Metrics and formulas used to measure model performance.
| Metric | Formula |
|---|---|
| Accuracy | (TP + TN) / (TP + TN + FP + FN) |
| Precision | TP / (TP + FP) |
| Recall | TP / (TP + FN) |
| F1-score | 2 × (Precision × Recall) / (Precision + Recall) |
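These standard formulas can be implemented directly from the per-class confusion counts (the function name is illustrative):

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall, and F1 from confusion counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1
```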
Mean values and SDs for weighted metrics of four deep learning models evaluated with 5-fold cross validation.
| Metric | MobileNetV1 | MobileNetV2 | NASNetMobile | Xception |
|---|---|---|---|---|
| Accuracy | 76.96% | 89.18% | 77.21% | 89.64% |
| Precision | 77.94% | 88.13% | 78.04% | 89.99% |
| F1-score | 77.45% | 87.38% | 77.62% | 89.81% |
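"Weighted" metrics are plausibly support-weighted averages of the per-class values, as is standard, with the mean and SD taken over the five folds. A minimal sketch (function names are illustrative):

```python
import statistics

def weighted_average(per_class_values, supports):
    """Average per-class metric values weighted by class support."""
    total = sum(supports)
    return sum(v * s for v, s in zip(per_class_values, supports)) / total

def cv_summary(fold_scores):
    """Mean and sample SD of a metric across cross-validation folds."""
    return statistics.mean(fold_scores), statistics.stdev(fold_scores)
```

Note that a support-weighted recall always equals overall accuracy (the weighted sum of TP_c/n_c with weights n_c/N is just ΣTP_c/N), which is why a weighted table typically reports accuracy, precision, and F1 but not recall.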
Figure 5. Samples of correctly classified images with their corresponding prediction probabilities.
Figure 6. Samples of misclassified images: the true class with its prediction value, and the falsely predicted class with its corresponding prediction value.
Class accuracies for each of the eight classes, averaged over the five folds.
| Lesion | MobileNetV1 | MobileNetV2 | NASNetMobile | Xception |
|---|---|---|---|---|
| ak | 68.00% | 80.00% | 72.00% | 66.00% |
| vasc | 80.50% | 90.50% | 78.50% | 91.00% |
| sk | 52.78% | 67.78% | 56.11% | 72.78% |
| df | 37.14% | 68.57% | 40.00% | 71.43% |
| bcc | 65.11% | 73.62% | 61.70% | 73.19% |
| scc | 65.14% | 89.71% | 65.14% | 85.71% |
| mel | 85.81% | 89.03% | 85.81% | 87.74% |
| nv | 91.38% | 91.87% | 92.52% | 91.00% |
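Per-class accuracy of this kind is the diagonal of the confusion matrix divided by the row sums (i.e. per-class recall). A minimal sketch:

```python
import numpy as np

def per_class_accuracy(confusion):
    """Per-class accuracy (recall) from a confusion matrix whose rows
    are true classes and whose columns are predicted classes."""
    confusion = np.asarray(confusion, dtype=float)
    return np.diag(confusion) / confusion.sum(axis=1)
```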
Specifications of the largest open access skin lesion datasets in the literature and related studies.
| Dataset | Study | Type | Comparison with | Dataset Size | Class Size | Dermatologists |
|---|---|---|---|---|---|---|
| Hybrid 1 * | [ | Clinic | Yes | 129,450 | 9 | 2 |
| Hybrid 2 ** | [ | Clinic | Yes | 19,398 | 12 | 16 |
| | [ | Dermoscopic | No | 200 | 3 | - |
| ISIC 2016 | [ | Dermoscopic | Yes | 1279 | 3 | 8 |
| ISIC 2017 | [ | Dermoscopic | No | 2750 | 3 | - |
| ISIC 2018 | [ | Dermoscopic | Yes | 10,015 | 7 | 511 |
| ISIC 2019 | [ | Dermoscopic | No | 25,331 | 8 | - |
| ISIC 2020 | [ | Dermoscopic | No | 33,126 | 2 | - |
| Mobile Dermoscopy | Own | Dermoscopic | No | 1644 | 8 | - |
* ISIC Dermoscopic Archive, the Edinburgh Dermofit Library and Stanford Hospital. ** Asan, MED-NODE dataset and atlas site images.