Jaisakthi S M, Mirunalini P, Chandrabose Aravindan, Rajagopal Appavu.
Abstract
A powerful medical decision support system for classifying skin lesions from dermoscopic images is an important tool to prognosis of skin cancer. In the recent years, Deep Convolutional Neural Network (DCNN) have made a significant advancement in detecting skin cancer types from dermoscopic images, in-spite of its fine grained variability in its appearance. The main objective of this research work is to develop a DCNN based model to automatically classify skin cancer types into melanoma and non-melanoma with high accuracy. The datasets used in this work were obtained from the popular challenges ISIC-2019 and ISIC-2020, which have different image resolutions and class imbalance problems. To address these two problems and to achieve high performance in classification we have used EfficientNet architecture based on transfer learning techniques, which learns more complex and fine grained patterns from lesion images by automatically scaling depth, width and resolution of the network. We have augmented our dataset to overcome the class imbalance problem and also used metadata information to improve the classification results. Further to improve the efficiency of the EfficientNet we have used ranger optimizer which considerably reduces the hyper parameter tuning, which is required to achieve state-of-the-art results. We have conducted several experiments using different transferring models and our results proved that EfficientNet variants outperformed in the skin lesion classification tasks when compared with other architectures. The performance of the proposed system was evaluated using Area under the ROC curve (AUC - ROC) and obtained the score of 0.9681 by optimal fine tuning of EfficientNet-B6 with ranger optimizer.Entities:
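The compound scaling that the abstract refers to — jointly scaling depth, width, and resolution — comes from the original EfficientNet paper (Tan & Le, 2019) and can be illustrated with a short sketch. The coefficients below are the published baseline values for B0; `compound_scale` is an illustrative helper name, not code from this paper:

```python
# EfficientNet compound scaling: depth, width and input resolution are
# scaled together by one compound coefficient phi, using base coefficients
# alpha, beta, gamma found by grid search, subject to
# alpha * beta^2 * gamma^2 ~ 2 so each step of phi roughly doubles FLOPS.
ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15

def compound_scale(phi: int) -> tuple[float, float, float]:
    """Return (depth, width, resolution) multipliers for coefficient phi."""
    return ALPHA ** phi, BETA ** phi, GAMMA ** phi

# phi = 0 recovers the B0 baseline; larger phi yields B1, B2, ... variants.
depth, width, resolution = compound_scale(1)
```

This is why the paper's EfficientNet-B6 uses a much larger input size (512×512) than DenseNet121 or ResNet50: resolution grows with the compound coefficient rather than being fixed.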
Keywords: Deep convolutional neural network (DCNN); Dermoscopic images; EfficientNet; Melanoma classification
Year: 2022 PMID: 36250184 PMCID: PMC9554840 DOI: 10.1007/s11042-022-13847-3
Source DB: PubMed Journal: Multimed Tools Appl ISSN: 1380-7501 Impact factor: 2.577
Fig. 1 Architecture of EfficientNet-B0
Fig. 2 Proposed feature extraction model using EfficientNet-B6
Fig. 3 Proposed fine-tuning method using EfficientNet-B6
Feature extraction results for different architectures using the ISIC-2020 dataset
| Model Name | Image Size | No. of features* | Training AUC | Validation AUC | Testing AUC |
|---|---|---|---|---|---|
| DenseNet121 | 256×256×3 | 1024 + 3 | 0.9578 | 0.8512 | 0.8629 |
| ResNet50 | 224×224×3 | 2048 + 3 | 0.9577 | 0.8279 | 0.8381 |
| Inception ResNet V2 | 299×299×3 | 1536 + 3 | 0.9633 | 0.8359 | 0.8825 |
| EfficientNet-B6 | 512×512×3 | 2304 + 3 | 0.9462 | 0.9100 | 0.9174 |
* Image features + contextual features
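The AUC values reported in these tables can be reproduced from raw classifier scores with a rank-based (Mann-Whitney) estimator. The sketch below is a minimal pure-Python version for illustration, not the evaluation code used in the paper, and `auc_roc` is an illustrative name:

```python
def auc_roc(labels, scores):
    """AUC-ROC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive example is scored above a randomly chosen
    negative example (ties count as half a win)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

A perfectly separated toy set returns 1.0, and 0.5 indicates chance-level ranking, which is why AUC is a natural metric for the class-imbalanced ISIC datasets: it is insensitive to the positive/negative ratio.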
Fine-tuning results for different architectures with the Adam optimizer using the ISIC-2020 dataset
| Model Name | Image Size | Training AUC | Validation AUC | Testing AUC |
|---|---|---|---|---|
| DenseNet121 | 256×256×3 | 0.6472 | 0.7637 | 0.7414 |
| ResNet50 | 224×224×3 | 0.66875 | 0.7627 | 0.7499 |
| Inception ResNet V2 | 299×299×3 | 0.67913 | 0.7423 | 0.7423 |
| EfficientNet-B6 | 512×512×3 | 0.7052 | 0.7369 | 0.7765 |
Results of fine-tuning the EfficientNet models using the ISIC-2020 dataset
| Model Name | AUC for Image Size 256×256 | AUC for Image Size 384×384 | AUC for Image Size 512×512 |
|---|---|---|---|
| EfficientNet-B0 | 0.6622 | 0.7306 | 0.7732 |
| EfficientNet-B1 | 0.6507 | 0.7265 | 0.7055 |
| EfficientNet-B2 | 0.6873 | 0.6615 | 0.7064 |
| EfficientNet-B3 | 0.6234 | 0.6770 | 0.7663 |
| EfficientNet-B4 | 0.6608 | 0.7655 | 0.7003 |
| EfficientNet-B5 | 0.7320 | 0.7266 | 0.7300 |
| EfficientNet-B6 | 0.7122 | 0.7208 | 0.7765 |
| EfficientNet-B7 | 0.6615 | 0.7178 | 0.7642 |
Fine-tuning results for EfficientNet-B0 – B7 with different image sizes using the ISIC-2019 & ISIC-2020 datasets
| Model Name | AUC for Image Size 256×256 | AUC for Image Size 384×384 | AUC for Image Size 512×512 |
|---|---|---|---|
| EfficientNet-B0 | 0.9221 | 0.9313 | 0.9199 |
| EfficientNet-B1 | 0.9254 | 0.9368 | 0.9202 |
| EfficientNet-B2 | 0.9276 | 0.9313 | 0.9283 |
| EfficientNet-B3 | 0.9296 | 0.9355 | 0.9386 |
| EfficientNet-B4 | 0.9342 | 0.9324 | 0.9368 |
| EfficientNet-B5 | 0.9302 | 0.9374 | 0.9382 |
| EfficientNet-B6 | 0.9445 | 0.9483 | 0.9486 |
| EfficientNet-B7 | 0.9336 | 0.9375 | 0.9465 |
Execution time for EfficientNet-B0 – B7 with different image sizes using the ISIC-2019 dataset
| Model Name | Execution Time for 256×256 | Execution Time for 384×384 | Execution Time for 512×512 |
|---|---|---|---|
| EfficientNet-B0 | 2392.2s | 4456.9s | 7276.6s |
| EfficientNet-B1 | 2618.5s | 5762.3s | 7710.3s |
| EfficientNet-B2 | 2825.6s | 6256.4s | 8274.3s |
| EfficientNet-B3 | 3137.8s | 6713.6s | 9101.0s |
| EfficientNet-B4 | 3702.0s | 7136.4s | 10761.8s |
| EfficientNet-B5 | 4646.7s | 7955.2s | 12954.4s |
| EfficientNet-B6 | 5056.1s | 8655.2s | 13213.5s |
| EfficientNet-B7 | 6972.0s | 9621.6s | 14895.2s |
Execution time for EfficientNet-B0 – B7 with different image sizes using the ISIC-2019 & ISIC-2020 datasets
| Model Name | Execution Time for 256×256 | Execution Time for 384×384 | Execution Time for 512×512 |
|---|---|---|---|
| EfficientNet-B0 | 4192.1s | 7463.6s | 11385.1s |
| EfficientNet-B1 | 4348.5s | 9352.2s | 17045.1s |
| EfficientNet-B2 | 4523.0s | 9623.4s | 17389.7s |
| EfficientNet-B3 | 5012.3s | 10856.6s | 19991.5s |
| EfficientNet-B4 | 5864.6s | 12148.5s | 22040.1s |
| EfficientNet-B5 | 6772.0s | 14096.2s | 26146.2s |
| EfficientNet-B6 | 8034.7s | 17644.1s | 30256.5s |
| EfficientNet-B7 | 9795.2s | 21946.7s | 35178.4s |
Results of fine-tuning the EfficientNet-B6 architecture with various optimizers
| Model Name | Image Size | Training AUC | Validation AUC | Testing AUC |
|---|---|---|---|---|
| EfficientNet-B6 + SGD | 384×384×3 | 0.8289 | 0.8865 | 0.8775 |
| EfficientNet-B6 + RMSprop | 384×384×3 | 0.6112 | 0.5451 | 0.5370 |
| EfficientNet-B6 + Adam | 384×384×3 | 0.9445 | 0.9483 | 0.9486 |
| EfficientNet-B6 + RAdam | 384×384×3 | 0.8805 | 0.9202 | 0.9475 |
| EfficientNet-B6 + Ranger | 384×384×3 | 0.90984 | 0.9475 | 0.9681 |
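Ranger, the best-performing optimizer in the table above, combines RAdam (the inner update rule) with Lookahead, which maintains slow and fast weight copies. The Lookahead half can be sketched in a few lines of pure Python over any inner update; here the inner step is plain gradient descent to keep the sketch short, and `lookahead_minimize` is an illustrative name, not the authors' code:

```python
def lookahead_minimize(grad, w, inner_lr=0.1, alpha=0.5, k=5, steps=200):
    """Lookahead: run k fast (inner) steps, then interpolate the slow
    weights toward the fast weights by factor alpha and restart the fast
    weights from there. Ranger uses RAdam as the inner optimizer; plain
    gradient descent stands in for it in this sketch."""
    slow = w
    fast = w
    for t in range(steps):
        fast = fast - inner_lr * grad(fast)      # fast (inner) update
        if (t + 1) % k == 0:                     # every k inner steps...
            slow = slow + alpha * (fast - slow)  # ...sync slow weights
            fast = slow                          # restart fast from slow
    return slow

# Minimizing f(w) = (w - 3)^2, whose gradient is 2 * (w - 3):
w_star = lookahead_minimize(lambda w: 2.0 * (w - 3.0), w=0.0)
```

The slow-weight interpolation damps oscillations of the fast optimizer, which is one reason Ranger needs less hyperparameter tuning than its inner optimizer alone, consistent with the paper's observation.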
AUC obtained by fine-tuning different architectures with the Ranger optimizer
| Model Name | Image Size | Training AUC | Validation AUC | Testing AUC |
|---|---|---|---|---|
| DenseNet121 | 256×256×3 | 0.6470 | 0.7788 | 0.7449 |
| ResNet50 | 224×224×3 | 0.63687 | 0.7656 | 0.7334 |
| Inception ResNet V2 | 299×299×3 | 0.65785 | 0.6611 | 0.7502 |