| Literature DB >> 35566440 |
Lloyd A Courtenay, Diego González-Aguilera, Susana Lagüela, Susana Del Pozo, Camilo Ruiz, Inés Barbero-García, Concepción Román-Curto, Javier Cañueto, Carlos Santos-Durán, María Esther Cardeñoso-Álvarez, Mónica Roncero-Riesco, David Hernández-López, Diego Guerrero-Sevilla, Pablo Rodríguez-Gonzalvez.
Abstract
Non-melanoma skin cancer, and basal cell carcinoma in particular, is one of the most common types of cancer. Although this type of malignancy has lower metastatic rates than other types of skin cancer, its locally destructive nature and the advantages of its timely treatment make early detection vital. The combination of multispectral imaging and artificial intelligence has arisen as a powerful tool for the detection and classification of skin cancer in a non-invasive manner. The present study uses hyperspectral images to discern between healthy and basal cell carcinoma hyperspectral signatures. Using convolutional neural networks with a final support vector machine activation layer, the present study reaches up to 90% accuracy, with an area under the receiver operating characteristic curve of 0.9. While the results are promising, future research should build upon a dataset with a larger number of patients.
Keywords: basal cell carcinoma; computational learning; convolutional neural networks; hyperspectral sensor; support vector machines
Year: 2022 PMID: 35566440 PMCID: PMC9102335 DOI: 10.3390/jcm11092315
Source DB: PubMed Journal: J Clin Med ISSN: 2077-0383 Impact factor: 4.964
A bibliographical summary of the number of scientific publications registered in the arXiv (https://arxiv.org/) and Science Direct (https://www.sciencedirect.com/) databases presenting the terms “machine learning” (ML) and “deep learning” (DL) in relation to different types of skin cancer (consulted 1 July 2021). Searches considered the appearance of these terms in either the abstract, title, or keywords.
| | arXiv (ML) | arXiv (DL) | Science Direct (ML) | Science Direct (DL) |
|---|---|---|---|---|
| Skin Cancer | 47 | 56 | 43 | 71 |
| Non-Melanoma Skin Cancer (NMSC) | 2 | 3 | 5 | 5 |
| Melanoma | 13 | 15 | 75 | 78 |
| Cutaneous Squamous Cell Carcinoma (SCC/cSCC) | 3 | 3 | 5 | 2 |
| Basal Cell Carcinoma (BCC) | 6 | 3 | 5 | 9 |
Figure 1. Examples of the hyperspectral signatures and images of healthy skin and basal cell carcinoma tumors.
Figure 2. Figurative schematic representing the architecture of the 1D Inception modules used in the present study. Convolutional filters are described by [N° filters, receptive field (rows × columns)]. Batch Norm. indicates batch normalization, while activation layers depend on the configuration of the algorithm at the time of training.
Figure 3. Graphical representation of the rectified linear unit (ReLU) and the self-gated rectified (Swish) activation functions (f(x)), alongside their first (f′(x)) and second (f″(x)) derivatives.
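The two activation functions compared in Figure 3 can be sketched directly from their definitions; a minimal pure-Python illustration (the function names below are our own, not from the study's code):

```python
import math

def relu(x):
    """Rectified linear unit: f(x) = max(0, x)."""
    return max(0.0, x)

def sigmoid(x):
    """Logistic sigmoid, the gate used by Swish."""
    return 1.0 / (1.0 + math.exp(-x))

def swish(x):
    """Self-gated (Swish) activation: f(x) = x * sigmoid(x)."""
    return x * sigmoid(x)

def swish_prime(x):
    """First derivative: f'(x) = s(x) * (1 + x * (1 - s(x))), s = sigmoid."""
    s = sigmoid(x)
    return s * (1.0 + x * (1.0 - s))
```

Unlike ReLU, Swish is smooth everywhere (its first and second derivatives exist at x = 0), which is one common motivation for comparing the two during training.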
Description of the final model architecture used for the supervised classification of hyperspectral signatures. The 1D Inception module blocks are constructed following the architecture presented in Figure 2.
| Convolutional Neural Support Vector Machine |
|---|
| Input: 1 × 94 Vector Hyperspectral Signature |
| 1D Inception Module |
| Concatenation |
| 1D Inception Module |
| Concatenation |
| Flattening |
| Dropout |
| Dense |
| Dropout |
| Dense |
| Dropout |
| Dense |
| Dropout |
| Dense |
| Radial Kernel Support Vector Machine Activation |
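The layer stack above can be sketched with the tf.keras functional API. Filter counts, Inception branch kernel sizes, dense widths, and dropout rates are illustrative assumptions (the table does not specify them), and the radial-kernel SVM head is approximated here by a linear output trained with a hinge loss rather than a true kernel SVM:

```python
import tensorflow as tf
from tensorflow.keras import layers

def inception_1d(x, n_filters=16):
    """Parallel 1-D convolutions with different receptive fields,
    concatenated along the channel axis (cf. Figure 2).
    Kernel sizes (1, 3, 5) are assumptions for illustration."""
    branches = [
        layers.Conv1D(n_filters, k, padding="same", activation="relu")(x)
        for k in (1, 3, 5)
    ]
    x = layers.Concatenate()(branches)
    return layers.BatchNormalization()(x)

inputs = tf.keras.Input(shape=(94, 1))   # 1 x 94 hyperspectral signature
x = inception_1d(inputs)
x = inception_1d(x)
x = layers.Flatten()(x)
for units in (128, 64, 32, 16):          # four dense/dropout pairs (widths assumed)
    x = layers.Dropout(0.3)(x)
    x = layers.Dense(units, activation="relu")(x)
# Linear output + hinge loss stands in for the SVM activation layer.
outputs = layers.Dense(1, activation="linear")(x)
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="hinge")
```

Swapping `optimizer="adam"` for `tf.keras.optimizers.SGD()` reproduces the optimizer comparison reported in the results table.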
Figure 4. Receiver operating characteristic curves alongside their calculated area under curve (AUC) statistics for the different support vector machine activations used. CNN = base convolutional neural network without support vector machine activation. CNSVM = convolutional support vector machine.
Algorithm performance on test sets. AUC = area under curve. MSE = mean squared error. ReLU = rectified linear unit. SGD = stochastic gradient descent.
| Metric | Swish and Adam | ReLU and Adam | Swish and SGD | ReLU and SGD |
|---|---|---|---|---|
| Accuracy | 0.90 | 0.82 | 0.90 | 0.91 |
| Sensitivity | 0.85 | 0.71 | 0.89 | 0.89 |
| Specificity | 0.94 | 0.93 | 0.92 | 0.92 |
| AUC | 0.90 | 0.82 | 0.90 | 0.91 |
| Kappa | 0.79 | 0.64 | 0.81 | 0.81 |
| MSE | 0.034 | 0.078 | 0.029 | 0.035 |
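Most metrics in the table follow directly from a binary confusion matrix; a minimal sketch (the counts in the usage note are illustrative, not the study's data):

```python
def binary_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity, specificity, and Cohen's kappa
    from binary confusion-matrix counts."""
    n = tp + fp + tn + fn
    accuracy = (tp + tn) / n
    sensitivity = tp / (tp + fn)      # true-positive rate (recall)
    specificity = tn / (tn + fp)      # true-negative rate
    # Cohen's kappa: observed agreement corrected for chance agreement
    p_e = ((tp + fp) / n) * ((tp + fn) / n) + ((tn + fn) / n) * ((tn + fp) / n)
    kappa = (accuracy - p_e) / (1 - p_e)
    return accuracy, sensitivity, specificity, kappa
```

For example, `binary_metrics(50, 5, 40, 5)` gives 0.90 accuracy with a kappa of roughly 0.80, comparable to the Swish/ReLU + SGD configurations above.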
Figure 5. Radar plots comparing performance metrics of each of the configurations tested. AUC = area under curve. ReLU = rectified linear unit. SGD = stochastic gradient descent. The red line at 0.8 marks a suitable threshold defining an optimal computational learning model.
Model training time (seconds per epoch) on different computer systems, specifying the number of CPUs and GPUs made available to TensorFlow during training.
| Computer | No. CPUs | No. GPUs | Seconds/Epoch |
|---|---|---|---|
| Personal Laptop | 4 | 0 | 5.94 |
| Desktop Computer | 4 | 0 | 4.75 |
| SCAYLE | 4 | 0 | 5.36 |
| SCAYLE | 10 | 0 | 2.68 |
| SCAYLE | 18 | 0 | 1.86 |
| SCAYLE | 4 | 1 | 0.25 |
| SCAYLE | 18 | 1 | 0.20 |
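A quick reading of the table: relative to the slowest run (the personal laptop), the GPU-backed SCAYLE configurations train roughly 30 times faster. A minimal sketch of the derivation:

```python
# Speedup of selected configurations over the slowest run in the table.
baseline = 5.94  # personal laptop, seconds per epoch
runs = {
    "Desktop (4 CPU)": 4.75,
    "SCAYLE (18 CPU)": 1.86,
    "SCAYLE (18 CPU + 1 GPU)": 0.20,
}
speedups = {name: round(baseline / t, 1) for name, t in runs.items()}
```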
Figure 6. Preliminary examples of (A) good and (B) poor image segmentation using CNSVMs for the classification of each pixel. (A) Examples of BCC tumors found on the forehead of a male patient and shoulder of a female patient. (B) Examples of BCC tumors found in the crease between the cheek and nostril of two female patients. Due to patient anonymity, images have been cropped to avoid revealing any distinguishing features.