Ranpreet Kaur, Hamid GholamHosseini, Roopak Sinha, Maria Lindén.
Abstract
Automatic melanoma detection from dermoscopic skin samples is a very challenging task. However, using a deep learning approach as a machine vision tool can overcome some of these challenges. This research proposes an automated melanoma classifier based on a deep convolutional neural network (DCNN) to accurately classify malignant vs. benign melanoma. The structure of the DCNN is carefully designed by organizing many layers that are responsible for extracting low- to high-level features of the skin images in a unique fashion. Other vital criteria in the design of the DCNN are the selection of multiple filters and their sizes, employing proper deep learning layers, choosing the depth of the network, and optimizing hyperparameters. The primary objective is to propose a lightweight and less complex DCNN than other state-of-the-art methods to classify melanoma skin cancer with high efficiency. For this study, dermoscopic images containing different cancer samples were obtained from the International Skin Imaging Collaboration datastores (ISIC 2016, ISIC 2017, and ISIC 2020). We evaluated the model based on accuracy, precision, recall, specificity, and F1-score. The proposed DCNN classifier achieved accuracies of 81.41%, 88.23%, and 90.42% on the ISIC 2016, 2017, and 2020 datasets, respectively, demonstrating high performance compared with other state-of-the-art networks. Therefore, the proposed approach could provide a less complex, advanced framework for automating the melanoma diagnostic process and expediting identification to help save lives.
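The abstract's emphasis on a lightweight network ultimately comes down to keeping the learnable-parameter count small. A minimal sketch of how filter count and kernel size drive that count; the layer sizes below are hypothetical illustrations, not the paper's actual LCNet configuration:

```python
# Illustrative parameter counting for a small CNN. The specific layer
# widths here are made up for demonstration, not taken from LCNet.

def conv2d_params(in_ch, out_ch, k):
    """Learnable parameters of a conv layer: k*k*in_ch weights + 1 bias per filter."""
    return (k * k * in_ch + 1) * out_ch

def dense_params(in_feat, out_feat):
    """Learnable parameters of a fully connected layer (weights + biases)."""
    return (in_feat + 1) * out_feat

# A hypothetical lightweight stack: three conv blocks, then a binary head.
layers = [
    conv2d_params(3, 16, 3),    # low-level features (edges, color)
    conv2d_params(16, 32, 3),   # mid-level features
    conv2d_params(32, 64, 3),   # high-level features
    dense_params(64, 2),        # malignant vs. benign
]
total = sum(layers)
print(total)  # a few tens of thousands, vs. hundreds of millions in large nets
```

Doubling every filter count roughly quadruples the conv-layer parameters, which is why filter selection and network depth are called out as vital design criteria.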
Keywords: classification; deep convolutional neural networks; melanoma; skin cancer
Year: 2022 PMID: 35161878 PMCID: PMC8838143 DOI: 10.3390/s22031134
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
Figure 1. Different types of skin lesions: (a) MEL, (b) BEN, (c) NV, and (d) SK.
The ISIC 2016 data distribution among training, validation, and test sets.
| Classes | Training Samples (70%) | Augmented Training Samples | Validation Samples (10%) | Test Samples (20%) | Total Samples (100%) |
|---|---|---|---|---|---|
| MEL | 512 | 692 | 98 | 146 | 756 |
| BEN | 692 | 692 | 73 | 198 | 963 |
| Total | 1200 | 1384 | 171 | 344 | 1719 |
The ISIC 2017 data distribution among training, validation, and test sets.
| Classes | Training Samples (70%) | Augmented Training Samples | Validation Samples (10%) | Test Samples (20%) | Total Samples (100%) |
|---|---|---|---|---|---|
| MEL | 1214 | 1708 | 173 | 347 | 1732 |
| BEN | 1708 | 1708 | 244 | 488 | 2440 |
| Total | 2922 | 3416 | 417 | 835 | 4172 |
The ISIC 2020 data distribution among training, validation, and test sets.
| Classes | Training Samples (70%) | Augmented Training Samples | Validation Samples (10%) | Test Samples (20%) | Total Samples (100%) |
|---|---|---|---|---|---|
| MEL | 3479 | 3570 | 497 | 994 | 4970 |
| BEN | 3570 | 3570 | 510 | 1020 | 5100 |
| Total | 7049 | 7140 | 1007 | 2014 | 10070 |
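In all three tables, the augmented training column balances the classes by random oversampling of the minority class up to the majority-class count (e.g., 512 MEL training images oversampled to 692 to match BEN in ISIC 2016). A minimal sketch of that balancing step, assuming simple duplication by random choice:

```python
# Random oversampling: duplicate minority-class samples until each class
# matches the largest class in the training split.
import random

def oversample(samples_by_class):
    """Return per-class training samples after random oversampling."""
    target = max(len(s) for s in samples_by_class.values())
    balanced = {}
    for cls, samples in samples_by_class.items():
        extra = [random.choice(samples) for _ in range(target - len(samples))]
        balanced[cls] = samples + extra
    return balanced

# ISIC 2016 training split from the table: 512 MEL vs. 692 BEN images
# (integers stand in for image file handles here).
train = {"MEL": list(range(512)), "BEN": list(range(692))}
balanced = oversample(train)
print({c: len(s) for c, s in balanced.items()})  # {'MEL': 692, 'BEN': 692}
```

The rotation, translation, and scaling transforms shown in Figure 2 are then applied on top, so the duplicated samples are not pixel-identical at training time.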
Figure 2. Augmented data samples using translation, rotation, and scaling.
Figure 3. The design of the proposed network, LCNet.
Hyperparameters selected for the proposed LCNet.
| Learning Algorithm | Learning Rate | Mini-Batch Size | Epochs | Activation Function | Data Augmentation | Momentum | Regularization |
|---|---|---|---|---|---|---|---|
| SGDM | 0.001 | 32 | 100 | LeakyReLU | Random oversampling, rotation, translation, and scaling | 0.99 | 0.0005 |
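The table's optimizer settings correspond to a standard SGDM update with L2 regularization, paired with LeakyReLU activations. A plain-Python sketch of one update step under those settings; the LeakyReLU slope of 0.01 and the scalar weight/gradient values are illustrative assumptions, not values from the paper:

```python
# SGD with momentum (SGDM) plus L2 weight decay, using the table's settings.
LR, MOMENTUM, L2 = 0.001, 0.99, 0.0005

def sgdm_step(w, grad, velocity):
    """One SGDM step, with the L2 regularization term folded into the gradient."""
    g = grad + L2 * w                       # L2 (weight-decay) contribution
    velocity = MOMENTUM * velocity - LR * g  # momentum accumulates past gradients
    return w + velocity, velocity

def leaky_relu(x, alpha=0.01):
    """LeakyReLU passes negatives with a small slope instead of zeroing them."""
    return x if x > 0 else alpha * x

# Hypothetical scalar example: one parameter, one gradient, zero initial velocity.
w, v = 0.5, 0.0
w, v = sgdm_step(w, grad=0.2, velocity=v)
```

With momentum as high as 0.99, the velocity term retains most of its history each step, which smooths updates across the small mini-batches (size 32) listed in the table.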
Impact of data oversampling on the performance of LCNet.
| Approach | ISIC 2016 ACC | ISIC 2016 PRE | ISIC 2016 REC | ISIC 2017 ACC | ISIC 2017 PRE | ISIC 2017 REC | ISIC 2020 ACC | ISIC 2020 PRE | ISIC 2020 REC |
|---|---|---|---|---|---|---|---|---|---|
| Without oversampling | 0.773 | 0.779 | 0.765 | 0.607 | 0.529 | 0.518 | 0.886 | 0.874 | 0.896 |
| With oversampling | 0.814 | 0.818 | 0.813 | 0.882 | 0.785 | 0.878 | 0.904 | 0.904 | 0.903 |
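The reported ACC/PRE/REC (and the abstract's specificity and F1-score) all derive from the binary confusion matrix. A short sketch with hypothetical counts chosen only for illustration:

```python
# Evaluation metrics from binary confusion-matrix counts
# (tp = melanoma correctly flagged, tn = benign correctly cleared).

def metrics(tp, fp, fn, tn):
    acc = (tp + tn) / (tp + fp + fn + tn)   # overall accuracy
    pre = tp / (tp + fp)                    # precision
    rec = tp / (tp + fn)                    # recall, a.k.a. sensitivity
    spe = tn / (tn + fp)                    # specificity
    f1 = 2 * pre * rec / (pre + rec)        # harmonic mean of PRE and REC
    return acc, pre, rec, spe, f1

# Hypothetical counts, not taken from the paper:
acc, pre, rec, spe, f1 = metrics(tp=80, fp=20, fn=10, tn=90)
print(round(acc, 3), round(pre, 3), round(rec, 3))  # 0.85 0.8 0.889
```

Recall and specificity matter most clinically here: recall counts melanomas missed, while specificity counts benign lesions needlessly flagged.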
Figure 4. Classification accuracy and loss curves of the LCNet with the number of epochs on the validation set: (a) MEL vs. BEN lesion classes, ISIC 2016; (b) MEL vs. SK and NV lesion classes, ISIC 2017; and (c) MEL vs. BEN lesion classes, ISIC 2020.
Figure 5. Classification accuracy and loss curves of the LCNet with the number of epochs on the validation set: (a) MEL vs. BEN lesion classes, ISIC 2016; (b) MEL vs. SK and NV lesion classes, ISIC 2017; and (c) MEL vs. BEN lesion classes, ISIC 2020.
Performance of the LCNet on the adopted datasets.
| Dataset | ACC | PRE | REC |
|---|---|---|---|
| ISIC 2016 | 0.814 | 0.818 | 0.813 |
| ISIC 2017 | 0.882 | 0.785 | 0.878 |
| ISIC 2020 | 0.904 | 0.904 | 0.903 |
| —– | 0.760 | 0.678 | 0.753 |
Performance comparison of LCNet with other state-of-the-art methods.
| Methods/Authors | Dataset | ACC% | PRE% | REC% | SPE% | F-Score% | Learnable Parameters (Millions) |
|---|---|---|---|---|---|---|---|
| Al-Masni, M. A. [ | ISIC 2016 | 81.79 | —– | 81.80 | 71.40 | 82.59 | —– |
| Zhang J. [ | ISIC 2016 | 86.28 | 68.10 | —– | —– | —– | —– |
| Tang P. [ | ISIC 2016 | 72.80 | 32.00 | —– | —– | —– | —– |
| Proposed LCNet | ISIC 2016 | 81.41 | —– | 80.83 | —– | —– | —– |
| Mahbod, A. [ | ISIC 2017 | 87.70 | —– | 87.26 | 82.18 | —– | 256.7 |
| Harangi, B. [ | ISIC 2017 | 86.60 | —– | 55.60 | 78.50 | —– | 267.5 |
| Li, Y. et al. [ | ISIC 2017 | 85.70 | 72.90 | 49.00 | —– | —– | —– |
| Al-Masni, M. A. [ | ISIC 2017 | 81.34 | 75.67 | 77.66 | 75.72 | —– | 54.35 |
| Iqbal, I. [ | ISIC 2017 | 93.25 | 93.97 | 93.25 | 90.64 | 93.47 | 4.8 |
| Proposed LCNet | ISIC 2017 | 88.23 | —– | —– | 88.86 | —– | —– |
| Kwasigroch, A. [ | ISIC 2020 | 77.00 | —– | —– | —– | —– | 7.18 |
| Proposed LCNet | ISIC 2020 | 90.42 | —– | —– | —– | —– | —– |
A comparison between the proposed LCNet and baseline CNN models on the ISIC 2016, 2017, and 2020 datasets.
| Approach | ISIC 2016 ACC | ISIC 2016 PRE | ISIC 2016 REC | ISIC 2017 ACC | ISIC 2017 PRE | ISIC 2017 REC | ISIC 2020 ACC | ISIC 2020 PRE | ISIC 2020 REC |
|---|---|---|---|---|---|---|---|---|---|
| ResNet18 | 0.809 | 0.789 | 0.809 | 0.750 | 0.640 | 0.571 | 0.908 | 0.898 | 0.888 |
| Inceptionv3 | 0.799 | 0.809 | 0.811 | 0.774 | 0.691 | 0.612 | 0.486 | 0.297 | 0.492 |
| AlexNet | 0.654 | 0.595 | 0.643 | 0.740 | 0.670 | 0.660 | 0.754 | 0.691 | 0.685 |
| Proposed LCNet | 0.814 | 0.818 | 0.813 | 0.882 | 0.785 | 0.878 | 0.904 | 0.904 | 0.903 |