Syeda Shamaila Zareen1, Sun Guangmin1, Yu Li1, Mahwish Kundi2, Salman Qadri3, Syed Furqan Qadri4, Mubashir Ahmad5, Ali Haider Khan6.
Abstract
The main purpose of this study is to demonstrate the value of a machine vision (MV) approach for identifying five types of skin cancer, namely, actinic-keratosis, benign, solar-lentigo, malignant, and nevus. A benchmark dataset of 1000 (200 × 5) skin cancer images was collected from the International Skin Imaging Collaboration (ISIC). The acquired ISIC images were transformed into a texture feature dataset combining first-order histogram and gray level co-occurrence matrix (GLCM) features. In total, 137,400 (229 × 3 × 200) texture features were extracted over three nonoverlapping regions of interest (ROIs) per image. A principal component analysis (PCA) clustering approach was employed to reduce the dimensionality of the feature dataset. For each image, the twenty most discriminant features were selected under each of two statistical criteria: average correlation coefficient plus probability of error (ACC + POE) and Fisher (Fis). A correlation-based feature selection (CFS) approach was then applied for further feature reduction, yielding 12 optimized features. Finally, the classification algorithms Naive Bayes (NB), Bayes Net (BN), LMT tree, and multilayer perceptron (MLP) were applied to the optimized feature dataset using 10-fold cross-validation; the best overall accuracy, 97.1333%, was achieved by MLP.
Year: 2022 PMID: 35898782 PMCID: PMC9313960 DOI: 10.1155/2022/4942637
Source DB: PubMed Journal: Comput Intell Neurosci
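The feature extraction step pairs first-order histogram statistics with GLCM texture measures. Below is a minimal NumPy-only sketch of both; the function name, quantization scheme, and the handful of features shown are illustrative only (the paper extracts 229 features per ROI, typically via a library such as scikit-image or MaZda):

```python
import numpy as np

def glcm_features(roi, levels=8):
    """Illustrative texture features for a 2-D grayscale ROI (uint8,
    at least two columns): a normalized gray-level co-occurrence matrix
    at a horizontal offset of 1 pixel, plus first-order statistics."""
    # Quantize intensities to `levels` gray levels
    q = (roi.astype(np.float64) / (roi.max() + 1e-12) * (levels - 1)).astype(int)
    # Count horizontal co-occurrences, symmetrically
    P = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        P[a, b] += 1
        P[b, a] += 1
    P /= P.sum()
    i, j = np.indices(P.shape)
    return {
        "contrast": float((P * (i - j) ** 2).sum()),    # GLCM contrast
        "energy": float((P ** 2).sum()),                # GLCM angular second moment
        "mean": float(roi.mean()),                      # first-order mean
        "skewness": float(((roi - roi.mean()) ** 3).mean()
                          / (roi.std() ** 3 + 1e-12)),  # first-order skewness
    }
```

A uniform ROI yields zero contrast and maximal energy, while high-frequency texture drives contrast up, which is what makes these measures useful for discriminating lesion types.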
Figure 1. Skin cancer ISIC image dataset. (a) Actinic-keratosis. (b) Benign. (c) Malignant. (d) Nevus. (e) Solar-lentigo.
Figure 2. Image segmentation and ROI creation.
Figure 3. Proposed framework for skin cancer classification.
Features acquired by the ACC + POE and Fisher (Fis) criteria (entries left blank were not legible in the source).

| Sr. no. | Selection criterion | Optimized feature |
|---|---|---|
| 1-8 | ACC + POE | |
| 9 | ACC + POE | Skewness |
| 10 | ACC + POE | Percent 0.01% |
| 11-20 | Fis | |
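The Fisher (Fis) criterion ranks each feature by the ratio of between-class variance of the class means to the pooled within-class variance; higher scores mark more discriminative texture features. A minimal sketch, assuming a feature matrix `X` (one row per ROI) and class labels `y`; the function names are illustrative, not the paper's code:

```python
import numpy as np

def fisher_score(X, y):
    """Fisher discriminant ratio per feature column: between-class
    variance of class means over pooled within-class variance."""
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        nc = len(Xc)
        between += nc * (Xc.mean(axis=0) - overall_mean) ** 2
        within += nc * Xc.var(axis=0)
    return between / (within + 1e-12)  # small epsilon avoids division by zero

def top_k_features(X, y, k=20):
    """Indices of the k highest-scoring features, best first."""
    return np.argsort(fisher_score(X, y))[::-1][:k]
```

`top_k_features(X, y, k=20)` then mirrors the paper's selection of the twenty most discriminant features per criterion.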
Correlation-based feature selection dataset of optimized features (12 features; the feature names in this table were not recoverable from the source).
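CFS keeps features that correlate strongly with the class while correlating weakly with each other, scoring a candidate subset of k features by the merit k·r_cf / sqrt(k + k(k-1)·r_ff), where r_cf is the mean feature-class correlation and r_ff the mean feature-feature correlation. The sketch below pairs that merit with a greedy forward search; it is an illustration of the heuristic, not the implementation the paper used:

```python
import numpy as np

def cfs_merit(X, y, subset):
    """CFS merit of a feature subset, using absolute Pearson correlations."""
    k = len(subset)
    r_cf = np.mean([abs(np.corrcoef(X[:, f], y)[0, 1]) for f in subset])
    if k == 1:
        return r_cf
    r_ff = np.mean([abs(np.corrcoef(X[:, a], X[:, b])[0, 1])
                    for i, a in enumerate(subset) for b in subset[i + 1:]])
    return (k * r_cf) / np.sqrt(k + k * (k - 1) * r_ff)

def cfs_forward(X, y, max_features=12):
    """Greedy forward search: repeatedly add the feature that most
    improves the merit; stop when no addition helps."""
    selected, best = [], -np.inf
    remaining = list(range(X.shape[1]))
    while remaining and len(selected) < max_features:
        score, f = max((cfs_merit(X, y, selected + [f]), f) for f in remaining)
        if score <= best:
            break
        best = score
        selected.append(f)
        remaining.remove(f)
    return selected
```

A redundant copy of an already-selected feature lowers the merit through r_ff, so the search naturally discards it, which is how CFS shrinks the 20-feature sets down to a compact subset such as the paper's 12.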
Parameter values of the MLP classifier.
| MLP parameter | Value |
|---|---|
| Input layer nodes | 12 |
| Hidden layers | 10 |
| Neurons | 11 |
| Learning rate | 0.3 |
| Momentum | 0.2 |
| Validation threshold | 20 |
| Epochs | 500 |
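All classifiers were evaluated with 10-fold cross-validation: the 3000 feature vectors are shuffled once and split into ten disjoint folds, each serving as the test set exactly once. A sketch of the fold construction (the `k_fold_indices` helper is illustrative; in practice a toolkit's built-in cross-validation would be used):

```python
import numpy as np

def k_fold_indices(n_samples, k=10, seed=0):
    """Yield (train_idx, test_idx) pairs for k-fold cross-validation.
    Samples are shuffled once, so each appears in exactly one test fold."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    folds = np.array_split(idx, k)
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, test
```

With 3000 instances and k = 10, each fold trains on 2700 samples and tests on the remaining 300; the overall accuracy is then the fraction of the 3000 held-out predictions that are correct.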
Figure 4. Proposed model for the MLP classifier of skin cancer based on multiple features.
Overall results of the four MV classifiers.
| Metric | NB | BN | LMT tree | MLP |
|---|---|---|---|---|
| False positive rate | 0.035 | 0.020 | 0.010 | 0.007 |
| True positive rate | 0.860 | 0.921 | 0.960 | 0.971 |
| Kappa statistic | 0.8246 | 0.96 | 0.9496 | 0.9642 |
| Mean absolute error (MAE) | 0.0648 | 0.037 | 0.0202 | 0.0162 |
| ROC area | 0.995 | 0.998 | 0.988 | 0.995 |
| Time (s) | 0.05 | 0.85 | 0.47 | 6.6 |
| Total instances | 3000 | 3000 | 3000 | 3000 |
| Misclassification rate (%) | 14.0333 | 7.8660 | 4.0333 | 2.8667 |
| Overall accuracy (%) | 85.9667 | 92.1333 | 95.9667 | 97.1333 |
Figure 5. Overall accuracy results of the four different MV classifiers.
Confusion matrix for the MLP classifier.
| Classes | Actinic-keratosis | Benign | Solar-lentigo | Malignant | Nevus | Total | Accuracy (%) |
|---|---|---|---|---|---|---|---|
| Actinic-keratosis | 573 | 15 | 2 | 2 | 8 | 600 | 95.5 |
| Benign | 16 | 569 | 8 | 7 | 0 | 600 | 94.8333 |
| Solar-lentigo | 2 | 12 | 582 | 4 | 0 | 600 | 97 |
| Malignant | 0 | 7 | 3 | 590 | 0 | 600 | 98.3333 |
| Nevus | 0 | 0 | 0 | 0 | 600 | 600 | 100 |
Figure 6. Confusion-matrix accuracy for the five types of skin cancer.
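The MLP's reported overall accuracy (97.1333%) and kappa statistic (0.9642) follow directly from the confusion matrix above:

```python
import numpy as np

# MLP confusion matrix from the table (rows = true class, cols = predicted):
# actinic-keratosis, benign, solar-lentigo, malignant, nevus
cm = np.array([
    [573,  15,   2,   2,   8],
    [ 16, 569,   8,   7,   0],
    [  2,  12, 582,   4,   0],
    [  0,   7,   3, 590,   0],
    [  0,   0,   0,   0, 600],
])

def overall_accuracy(cm):
    """Correctly classified instances (diagonal) over the grand total."""
    return np.trace(cm) / cm.sum()

def cohens_kappa(cm):
    """Agreement corrected for chance: (p_o - p_e) / (1 - p_e), with
    expected agreement p_e from the row and column marginals."""
    n = cm.sum()
    p_o = np.trace(cm) / n
    p_e = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2
    return (p_o - p_e) / (1 - p_e)
```

Here `overall_accuracy(cm)` gives 2914/3000 ≈ 0.971333 and `cohens_kappa(cm)` ≈ 0.9642, matching the values in the results table.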
Comparison between the proposed and existing approaches.
| Reference | Methodology/technique | Classifier | OCA (%) |
|---|---|---|---|
| Kumar et al. [ ] | R-CNN | Artificial neural network (ANN) | 95 |
| Esteva et al. [ ] | GoogleNet Inception v3 | Convolutional neural network (CNN) | 72.1 ± 0.9 |
| Shena et al. [ ] | ResNet-34 model | Artificial neural network (ANN) | 78.4 |
| Yasir et al. [ ] | Computer vision | Artificial neural network (ANN) | 90 |
| Haenssle et al. [ ] | Google's Inception v4 CNN | Deep CNN | 86.6 |
| Kawahara et al. [ ] | ImageNet model | Deep CNN | 40.8 to 91 |
| Proposed technique | First-order histogram + gray level co-occurrence matrix (GLCM) | Multilayer perceptron (MLP) | 97.1333 |