Puneet Thapar, Manik Rakhra, Gerardo Cazzato, Md Shamim Hossain.
Abstract
Skin cancer is one of the most common diseases; it can initially be detected by visual observation and then confirmed with dermoscopic analysis and other tests. Because visual observation at an early stage offers an opportunity to apply artificial intelligence to interpret skin images, several skin lesion classification methods based on convolutional neural networks (CNNs) trained on annotated skin photos have shown improved results. In this respect, this paper presents a reliable approach for diagnosing skin cancer from dermoscopy images, intended to improve health care professionals' visual perception and diagnostic ability to discriminate benign from malignant lesions. Swarm intelligence (SI) algorithms were used to segment the skin lesion region of interest (RoI) from dermoscopy images, and speeded-up robust features (SURF) were extracted from the RoI of the best segmentation result, which was obtained using the Grasshopper Optimization Algorithm (GOA). The skin lesions are classified into two groups using a CNN on three data sets, namely, the ISIC-2017, ISIC-2018, and PH-2 data sets. The results of the proposed segmentation and classification techniques are assessed in terms of classification accuracy, sensitivity, specificity, F-measure, precision, MCC, Dice coefficient, and Jaccard index, with an average classification accuracy of 98.42 percent, precision of 97.73 percent, and MCC of 0.9704. On every performance measure, the proposed approach exceeds previous work.
Year: 2022 PMID: 35480147 PMCID: PMC9038388 DOI: 10.1155/2022/1709842
Source DB: PubMed Journal: J Healthc Eng ISSN: 2040-2295 Impact factor: 3.822
Skin lesion segmentation approaches.
| Source | Classes | Methods used | Contributions and performance measure | Limitations |
|---|---|---|---|---|
| Yuan et al. | 2, skin lesion and surrounding skin | 19-layer deep convolutional neural network (deep CNN) trained with a Jaccard-distance loss | It reduced the need for data rebalancing when foreground and background pixel counts are unbalanced, which is common in binary medical image segmentation. | A deeper network usually requires more training samples to avoid overfitting. |
| Al-Masni et al. | 3, benign, melanoma, and seborrheic keratosis | Full-resolution convolutional networks (FrCN) | Overall segmentation accuracies of 94.03% and 95.08% were achieved on the ISBI 2017 and PH2 data sets, respectively. | Although this FrCN segmentation method outperforms previous deep learning approaches, it needs improvement, particularly in terms of sensitivity. |
| Mirikharaji et al. | 2, skin lesion and surrounding tissue | Introduced a new loss term that encodes a star-shape prior for training a fully convolutional network (FCN) | It produced highly accurate and plausible skin lesion segmentations while being computationally less expensive than other energy minimization techniques. | Although the star-shape prior has improved results for several target objects, one limiting condition of the Veksler technique and its variants is that the centre of the foreground object must be known. |
| Filali and Belkadi | 2, melanoma and nonmelanoma | Multiscale contrast-based algorithm followed by graph-cut refinement | The overall segmentation accuracy reaches 97.34% with 89.31% sensitivity. | Because the approach depends mostly on regional contrast and background directions, lesion areas with low contrast against their surroundings, or highly similar to the background template, may not be segmented effectively. Skin regions that contrast visually with most of the background can also be falsely returned as lesion sites. |
| Dash et al. | 2, skin lesion and surrounding tissue | Combination of SI with K-means and fuzzy C-means (FCM) clustering | Seeker optimization (SO) demonstrated better lesion detection accuracy, ranging from 89.42% to 90.89%, compared with ant colony optimization (ACO), PSO, and artificial bee colony (ABC). | The approach requires a 2D color space to achieve its accuracy and processing speed, which limits its use of the full 3D RGB color space. |
| Garg and Balkrishan | 2, skin lesion and surrounding tissue | K-means in combination with the firefly algorithm (FFA) | K-means with FFA outperformed traditional K-means and K-means with PSO in terms of lesion segmentation accuracy. | Focused on only two steps, preprocessing and segmentation, for automatic recognition of skin lesions; does not address classification or feature extraction techniques based on texture, color, and shape. |
| Abd et al. | 2, skin lesion and surrounding tissue | Contrast enhancement at the preprocessing stage followed by feature optimization using ABC | The prototype proved very effective for accurate boundary detection of skin lesions. | Does not address classification or feature extraction techniques based on texture, color, and shape. |
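Several of the surveyed approaches start from plain K-means clustering of pixel intensities, which the SI algorithms (PSO, FFA, GOA) then improve by optimizing the cluster centres. Below is a minimal NumPy sketch of the two-cluster K-means baseline on a grayscale image, assuming the lesion is the darker cluster; the deterministic min/max initialization is an illustrative choice, not a detail from the cited papers.

```python
import numpy as np

def kmeans_segment(img, iters=20):
    """Two-cluster K-means on grayscale intensities.

    Returns a binary mask with 1 on the darker cluster, mimicking
    dermoscopy images where the lesion is darker than surrounding skin.
    """
    pixels = img.reshape(-1).astype(float)
    # Deterministic initialization: darkest and brightest pixel values.
    c_lo, c_hi = pixels.min(), pixels.max()
    for _ in range(iters):
        # Assign each pixel to the nearer of the two centres.
        assign_lo = np.abs(pixels - c_lo) <= np.abs(pixels - c_hi)
        if assign_lo.any():
            c_lo = pixels[assign_lo].mean()
        if (~assign_lo).any():
            c_hi = pixels[~assign_lo].mean()
    return assign_lo.reshape(img.shape).astype(np.uint8)
```

An SI-enhanced variant would replace the fixed initialization with centres proposed by the optimizer and keep the best assignment under a fitness measure such as intra-cluster distance.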
Skin lesion classification approaches.
| Reference | Classes | Preprocessing and segmentation | Methods used | Contributions and performance measure | Limitations |
|---|---|---|---|---|---|
| Ozkan and Koklu | 3, normal, abnormal, and melanoma | ABCDE rule | ANN, SVM, KNN, and decision tree | Correct classification rates of 92.50% for ANN, 89.50% for SVM, 82.00% for KNN, and 90.00% for DT were achieved. | Does not consider multiple image texture organizations. |
| Thompson and Jeyakumar | 4, homogeneous, reticular, globular, and multicomponent patterns of skin lesions | Lab color space and SURF | ANN, multiclass SVM, and KNN | Best results were obtained using ANN, with a classification accuracy of 86.37%, sensitivity of 86.52%, and specificity of 96.42%. | Only texture features of the patterns were used; for more efficient outcomes, color and geometric features also need to be considered. |
| Abbas and Celebi | 2, benign and malignant | Multilayered architecture using visual features | Deep neural network (DNN) | DermoDeep demonstrated sensitivity and specificity of 93% and 95%, with an AUC of 0.96. | Focused only on dermoscopic images for automatic lesion classification; not applied to other image domains such as industrial, MRI, satellite, and CT images. |
| Sikkandar et al. | 7, angioma, nevus, lentigo NOS, solar lentigo, melanoma, seborrheic keratosis, and BCC | Top-hat filter and inpainting technique for preprocessing, followed by the GrabCut algorithm for segmentation | Adaptive neuro-fuzzy classifier (ANFC) | The highest accuracy of 97.91% was observed, with 93.4% sensitivity and 98.7% specificity. | This segmentation-based classification model used only Inception v4 to enhance performance; other deep learning models need to be evaluated. |
| Almaraz-Damian et al. | 2, benign and malignant | ABCD rule | Deep learning CNN | The highest accuracy of 92.40% was achieved, with specificity, precision, F-score, and MCC of 90%, 92.08%, 89.16%, and 0.795, respectively. | The fully deep learning approach used is extremely computationally expensive to train. |
| Ali et al. | 2, benign and malignant | Filtering kernels to remove noise and artifacts, plus data augmentation | Deep learning CNN with a sigmoid output activation layer | On the HAM10000 data set, the highest training and testing accuracies of 93.16% and 91.93%, respectively, were obtained. | Compared with other pretrained models, the proposed DCNN takes less training time, about 9-10 s per epoch. |
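Several of the classical baselines in the table (ANN, SVM, KNN, decision tree) all map extracted feature vectors to class labels. As a concrete point of reference, here is a minimal k-nearest-neighbour sketch; the feature vectors and labels in the usage test are synthetic placeholders, not data from the cited papers.

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """Classify each test vector by a majority vote of its k nearest
    training vectors under Euclidean distance."""
    preds = []
    for x in np.atleast_2d(X_test):
        dists = np.linalg.norm(X_train - x, axis=1)   # distance to every training point
        nearest_labels = y_train[np.argsort(dists)[:k]]
        preds.append(int(np.bincount(nearest_labels).argmax()))
    return np.array(preds)
```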
Figure 1. Sample dermoscopic images: (a) ISIC-2018 data set, (b) PH-2 data set, and (c) ISIC-2017 data set.
Figure 2. Process of the automatic skin lesion segmentation and intelligent classification model.
Algorithm 1. HR-IQE
Figure 3. Preprocessing results: (a) original image and (b) preprocessed image.
Entropy of the original and processed images.
| Mask variation percentage | Entropy original | Entropy processed | % difference |
|---|---|---|---|
| 10 | 7.173726 | 6.128945 | 14.564 |
| 12 | 7.329836 | 6.591323 | 10.07543 |
| 15 | 7.040048 | 6.020194 | 14.48645 |
| 20 | 7.867283 | 6.156654 | 21.74358 |
| 25 | 7.853434 | 6.655051 | 15.25936 |
| 27 | 7.014425 | 6.76324 | 3.580981 |
| 30 | 7.064502 | 6.147432 | 12.98137 |
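The entropy values in the table are consistent with the standard Shannon entropy of the 8-bit intensity histogram. Below is a sketch of that measure and the percentage difference between original and processed images; the mask-variation preprocessing itself is not reproduced here.

```python
import numpy as np

def shannon_entropy(img):
    """Shannon entropy (bits) of an 8-bit grayscale image histogram."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]                      # 0 * log(0) is taken as 0
    return float(-(p * np.log2(p)).sum())

def entropy_drop_percent(original, processed):
    """Percentage reduction in entropy after preprocessing."""
    e0 = shannon_entropy(original)
    e1 = shannon_entropy(processed)
    return 100.0 * (e0 - e1) / e0
```

A uniform 8-bit image attains the maximum of 8 bits, which is the ceiling Figure 4 refers to.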
Figure 4. Maximum attainable entropy for the data set used.
Figure 5. Types of kernel functions in SVM [27].
List of kernel functions.
| Kernel name | Description | Equation |
|---|---|---|
| Polynomial kernel | It represents the correlation between the features of the data during SVM training in the form of a polynomial. | K(x, y) = (xᵀy + c)^d |
| Linear kernel | It is applicable to linearly separable data, which can be divided by a single line. Training is faster than with other kernels. | K(x, y) = xᵀy |
| Radial basis function (RBF) | It is used in SVM to separate two classes. | K(x, y) = exp(−γ‖x − y‖²) |
| Sigmoid kernel | It comes from the neural network literature, where it is used as an activation function. In SVM, it works like a two-layer neural network. | K(x, y) = tanh(γxᵀy + c) |
| Gaussian kernel | It is the extended form of the RBF kernel. | K(x, y) = exp(−‖x − y‖² / 2σ²) |
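The kernels in the table can be written directly as functions. A minimal sketch follows; the hyperparameter defaults (γ, c, d, σ) are illustrative assumptions, not values from the paper.

```python
import numpy as np

def linear(x, y):
    """Linear kernel: plain dot product."""
    return float(np.dot(x, y))

def polynomial(x, y, c=1.0, d=3):
    """Polynomial kernel: (x.y + c)^d."""
    return float((np.dot(x, y) + c) ** d)

def rbf(x, y, gamma=0.5):
    """Radial basis function kernel: exp(-gamma * ||x - y||^2)."""
    diff = np.asarray(x, float) - np.asarray(y, float)
    return float(np.exp(-gamma * np.dot(diff, diff)))

def sigmoid(x, y, gamma=0.01, c=0.0):
    """Sigmoid kernel: tanh(gamma * x.y + c)."""
    return float(np.tanh(gamma * np.dot(x, y) + c))

def gaussian(x, y, sigma=1.0):
    """Gaussian kernel: exp(-||x - y||^2 / (2 * sigma^2))."""
    diff = np.asarray(x, float) - np.asarray(y, float)
    return float(np.exp(-np.dot(diff, diff) / (2 * sigma ** 2)))
```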
Algorithm 2. Improved K-means using GOA
Algorithm 3. Feature extraction using SURF descriptor
Algorithm 4. Feature selection using GOA
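Algorithms 2 and 4 both rely on GOA. The following is a simplified sketch of the standard grasshopper update from Saremi et al., with the social-interaction function s(r) = f·e^(−r/l) − e^(−r) and a linearly shrinking comfort-zone coefficient c, applied here to a toy objective; the population size, iteration budget, and bounds are illustrative, and any problem-specific refinements the paper uses are omitted.

```python
import numpy as np

def s_func(r, f=0.5, l=1.5):
    """Social-interaction function: repulsion at short range, attraction at long range."""
    return f * np.exp(-r / l) - np.exp(-r)

def goa_minimize(obj, dim=2, n=20, iters=100, lb=-5.0, ub=5.0, seed=0):
    """Minimize obj over [lb, ub]^dim with a basic Grasshopper Optimization Algorithm."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, size=(n, dim))
    fit = np.apply_along_axis(obj, 1, X)
    best, best_fit = X[fit.argmin()].copy(), float(fit.min())
    c_max, c_min = 1.0, 1e-5
    for t in range(iters):
        c = c_max - t * (c_max - c_min) / iters   # shrinking comfort zone
        X_new = np.empty_like(X)
        for i in range(n):
            social = np.zeros(dim)
            for j in range(n):
                if i == j:
                    continue
                dist = np.linalg.norm(X[j] - X[i])
                unit = (X[j] - X[i]) / (dist + 1e-12)
                social += c * (ub - lb) / 2 * s_func(dist) * unit
            # Move relative to the best grasshopper found so far (the target).
            X_new[i] = np.clip(c * social + best, lb, ub)
        X = X_new
        fit = np.apply_along_axis(obj, 1, X)
        if fit.min() < best_fit:
            best_fit = float(fit.min())
            best = X[fit.argmin()].copy()
    return best, best_fit
```

In the paper's setting the objective would score candidate cluster centres (Algorithm 2) or candidate feature subsets (Algorithm 4) rather than a mathematical test function.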
Figure 6. Architecture of the CNN for the skin cancer classification model.
Algorithm 5. CNN
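The paper's exact CNN layout (Figure 6, Algorithm 5) is not reproduced here. As an illustration of the core building blocks such a network stacks, here is a valid-mode convolution (cross-correlation, as CNN layers actually compute), ReLU activation, and 2×2 max pooling in plain NumPy:

```python
import numpy as np

def conv2d(img, kernel):
    """Valid-mode 2-D cross-correlation of a single-channel image."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Rectified linear unit: clamp negatives to zero."""
    return np.maximum(x, 0)

def maxpool2(x):
    """Non-overlapping 2x2 max pooling (truncates odd trailing rows/cols)."""
    H, W = x.shape
    return x[:H // 2 * 2, :W // 2 * 2].reshape(H // 2, 2, W // 2, 2).max(axis=(1, 3))
```

A conv-ReLU-pool stage like this halves the spatial resolution while keeping the strongest filter responses; a real classifier would repeat such stages and end in fully connected layers with a softmax or sigmoid output.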
Data set description for the automatic skin lesion segmentation and intelligent classification model.
| Class | ISIC-2018 Training | ISIC-2018 Testing | PH-2 Training | PH-2 Testing | ISIC-2017 Training | ISIC-2017 Testing |
|---|---|---|---|---|---|---|
| Melanoma | 600 | 1000 | 400 | 600 | 600 | 1000 |
| Nonmelanoma | 600 | 1000 | 400 | 600 | 600 | 1000 |
Figure 7. Data set descriptions for the skin cancer classification model.
Figure 8. K-means pixel-mixing problems.
Comparative analysis of K-means with PSO, FFA, and GOA based on segmentation accuracy.
| No. of images | K-means | K-means with PSO | K-means with FFA | K-means with GOA |
|---|---|---|---|---|
| 100 | 61.23 | 81.47 | 85.67 | 95.69 |
| 200 | 71.84 | 85.17 | 84.62 | 96.47 |
| 300 | 76.24 | 88.64 | 85.72 | 97.89 |
| 400 | 65.48 | 90.17 | 86.74 | 98.37 |
| 500 | 69.37 | 89.74 | 88.74 | 99.28 |
| 600 | 71.84 | 87.98 | 89.89 | 98.24 |
| 700 | 75.84 | 88.45 | 91.86 | 99.36 |
| 800 | 76.42 | 92.87 | 92.47 | 99.87 |
| 900 | 82.68 | 93.41 | 95.85 | 99.54 |
| 1000 | 87.28 | 95.87 | 98.87 | 99.76 |
Figure 9. Segmentation accuracy comparative analysis.
Figure 10. Segmentation results: (a) K-means, (b) K-means with PSO, (c) K-means with FFA, and (d) K-means with GOA.
Performance evaluation of the model using the ISIC-2018 data set.
| Samples | Accuracy (%) | Sensitivity | F-measure | Precision | MCC | Dice | Jaccard | Specificity |
|---|---|---|---|---|---|---|---|---|
| 100 | 97.845 | 0.9641 | 0.9647 | 0.9654 | 0.9546 | 0.9523 | 0.9222 | 0.9861 |
| 200 | 97.542 | 0.9594 | 0.9633 | 0.9674 | 0.9378 | 0.9377 | 0.9348 | 0.9849 |
| 300 | 97.543 | 0.9751 | 0.9648 | 0.9548 | 0.9459 | 0.9459 | 0.9469 | 0.9927 |
| 400 | 98.003 | 0.9645 | 0.9699 | 0.9754 | 0.9046 | 0.9292 | 0.9419 | 0.9939 |
| 500 | 98.134 | 0.9562 | 0.9680 | 0.9801 | 0.9496 | 0.9507 | 0.9145 | 0.9875 |
| 600 | 98.184 | 0.9846 | 0.9829 | 0.9813 | 0.9539 | 0.9424 | 0.9135 | 0.9918 |
| 700 | 98.576 | 0.9674 | 0.9752 | 0.9832 | 0.9632 | 0.9583 | 0.9407 | 0.9943 |
| 800 | 98.654 | 0.9654 | 0.9775 | 0.9901 | 0.9726 | 0.9672 | 0.9265 | 0.9954 |
| 900 | 99.246 | 0.9723 | 0.9851 | 0.9982 | 0.9861 | 0.9566 | 0.9323 | 0.9958 |
| 1000 | 99.545 | 0.9845 | 0.9911 | 0.9979 | 0.9939 | 0.9647 | 0.9411 | 0.9963 |
| **Average** | **98.327** | **0.9694** | **0.9743** | **0.9794** | **0.9562** | **0.9505** | **0.9314** | **0.9919** |
Bold shows the average for the column.
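All eight reported measures derive from the binary confusion matrix. A minimal sketch follows; note that Dice computed from the same TP/FP/TN/FN counts coincides algebraically with the F-measure, so the distinct Dice values in these tables presumably reflect pixel-level segmentation overlap — an inference, not a claim from the paper.

```python
import numpy as np

def classification_metrics(tp, fp, tn, fn):
    """All eight evaluation measures from binary confusion-matrix counts."""
    tp, fp, tn, fn = map(float, (tp, fp, tn, fn))
    m = {
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),                    # recall / TPR
        "specificity": tn / (tn + fp),                    # TNR
        "precision":   tp / (tp + fp),
        "mcc":         (tp * tn - fp * fn)
                       / np.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)),
        "dice":        2 * tp / (2 * tp + fp + fn),
        "jaccard":     tp / (tp + fp + fn),
    }
    m["f_measure"] = (2 * m["precision"] * m["sensitivity"]
                      / (m["precision"] + m["sensitivity"]))
    return m
```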
Performance evaluation of the model using the PH-2 data set.
| Samples | Accuracy (%) | Sensitivity | F-measure | Precision | MCC | Dice | Jaccard | Specificity |
|---|---|---|---|---|---|---|---|---|
| 100 | 98.425 | 0.9742 | 0.9580 | 0.9425 | 0.9814 | 0.9138 | 0.9108 | 0.9847 |
| 200 | 98.124 | 0.9624 | 0.9640 | 0.9658 | 0.9463 | 0.9407 | 0.9184 | 0.9865 |
| 300 | 98.475 | 0.9364 | 0.9521 | 0.9684 | 0.9255 | 0.9575 | 0.9183 | 0.9874 |
| 400 | 97.612 | 0.9564 | 0.9672 | 0.9784 | 0.9165 | 0.9198 | 0.9083 | 0.9832 |
| 500 | 97.451 | 0.9485 | 0.9662 | 0.9847 | 0.9347 | 0.9215 | 0.9105 | 0.9845 |
| 600 | 97.986 | 0.9684 | 0.9773 | 0.9865 | 0.9436 | 0.9311 | 0.9234 | 0.9845 |
| Average | 98.012 | 0.9577 | 0.9642 | 0.9710 | 0.9413 | 0.9307 | 0.9149 | 0.9851 |
Performance evaluation of the model using the ISIC-2017 data set.
| Samples | Accuracy (%) | Sensitivity | F-measure | Precision | MCC | Dice | Jaccard | Specificity |
|---|---|---|---|---|---|---|---|---|
| 100 | 98.156 | 0.9354 | 0.9438 | 0.9524 | 0.9401 | 0.9478 | 0.9284 | 0.9822 |
| 200 | 98.248 | 0.9345 | 0.9559 | 0.9784 | 0.9433 | 0.9527 | 0.9384 | 0.9873 |
| 300 | 98.102 | 0.9485 | 0.9570 | 0.9658 | 0.9575 | 0.9554 | 0.9185 | 0.9802 |
| 400 | 97.845 | 0.9565 | 0.9624 | 0.9684 | 0.9625 | 0.9598 | 0.9383 | 0.9817 |
| 500 | 98.014 | 0.9745 | 0.9723 | 0.9702 | 0.9787 | 0.9615 | 0.9105 | 0.9853 |
| 600 | 98.341 | 0.9641 | 0.9712 | 0.9785 | 0.9746 | 0.9311 | 0.9261 | 0.9881 |
| 700 | 98.654 | 0.9768 | 0.9821 | 0.9875 | 0.9758 | 0.9648 | 0.9464 | 0.9901 |
| 800 | 98.813 | 0.9561 | 0.9702 | 0.9848 | 0.9874 | 0.9612 | 0.9292 | 0.9924 |
| 900 | 98.874 | 0.9842 | 0.9871 | 0.9901 | 0.9913 | 0.9823 | 0.9484 | 0.9894 |
| 1000 | 99.243 | 0.9825 | 0.9873 | 0.9923 | 0.9928 | 0.9781 | 0.9683 | 0.9954 |
| **Average** | **98.429** | **0.9613** | **0.9689** | **0.9768** | **0.9704** | **0.9595** | **0.9353** | **0.9872** |
Bold shows the average for the column.
Figure 11. Accuracy analysis.
Figure 12. Sensitivity analysis.
Figure 13. Precision analysis.
Figure 14. F-measure analysis.
Figure 15. Specificity analysis.
Figure 16. MCC analysis.
Figure 17. Dice coefficient analysis.
Figure 18. Jaccard index analysis.
Figure 19. Comparative analysis with existing work.