Mengkun Chen¹, Xu Feng¹, Matthew C Fox², Jason S Reichenberg², Fabiana C P S Lopes², Katherine R Sebastian², Mia K Markey¹,³, James W Tunnell¹.
Abstract
SIGNIFICANCE: Raman spectroscopy (RS) provides an automated approach for assisting Mohs micrographic surgery for skin cancer diagnosis; however, the specificity of RS is limited by the high spectral similarity between tumors and normal tissue structures. Reflectance confocal microscopy (RCM) provides morphological and cytological detail by which many features of the epidermis and hair follicles can be readily identified. Combining RS with deep-learning-aided RCM has the potential to improve the diagnostic accuracy of RS in an automated fashion, without requiring additional input from the clinician.
AIM: The aim of this study is to improve the specificity of RS for detecting basal cell carcinoma (BCC) using an artificial neural network trained on RCM images to identify false-positive normal skin structures (hair follicles and epidermis).
APPROACH: Our approach was to build a two-step classification model. In the first step, a Raman biophysical model used in prior work classified BCC tumors from normal tissue structures with high sensitivity. In the second step, 191 RCM images were collected from the same sites as the Raman data and served as inputs for two ResNet50 networks. The networks selected, with high specificity, the hair-structure and epidermis images, respectively, from all images corresponding to the positive predictions of the Raman biophysical model. The specificity of the BCC biophysical model was improved by moving the Raman spectra corresponding to these selected images from false positive to true negative.
Keywords: Raman spectroscopy; basal cell carcinoma; deep learning; reflectance confocal microscopy
Year: 2022 PMID: 35773774 PMCID: PMC9243521 DOI: 10.1117/1.JBO.27.6.065004
Source DB: PubMed Journal: J Biomed Opt ISSN: 1083-3668 Impact factor: 3.758
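The two-step approach described in the abstract could be prototyped as follows. This is a minimal sketch assuming PyTorch/torchvision; the helper names, the shared ImageNet initialization, and the 0.5-style probability thresholds are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

def make_binary_resnet50():
    """Hypothetical binary classifier: ResNet50 backbone, single-logit head."""
    net = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    net.fc = nn.Linear(net.fc.in_features, 1)  # one logit for present/absent
    return net

# One network per normal structure, as in the paper's second step.
hair_net = make_binary_resnet50().eval()
epidermis_net = make_binary_resnet50().eval()

@torch.no_grad()
def reclassify(rcm_batch, raman_positive_mask, hair_thr=0.5, epi_thr=0.5):
    """For Raman-positive sites, flag RCM images that look like normal
    structures (hair or epidermis); those sites are moved from false
    positive to true negative. Thresholds here are placeholders."""
    hair_prob = torch.sigmoid(hair_net(rcm_batch)).squeeze(1)
    epi_prob = torch.sigmoid(epidermis_net(rcm_batch)).squeeze(1)
    is_normal_structure = (hair_prob > hair_thr) | (epi_prob > epi_thr)
    # Final BCC call: Raman positive AND RCM does not look like normal tissue.
    return raman_positive_mask & ~is_normal_structure
```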
Fig. 1 Illustration of the prior experiment (Feng et al.'s 2019 study) in which Raman spectra and RCM images were acquired from Mohs micrographic sections. The red circle indicates BCC tumor; the black circle indicates normal tissue structures, including hair structure and epidermis; the yellow squares indicate the regions assessed by Raman scanning.
Fig. 2 Description of the RCM image sets used for the train and task groups. Images in box (a) were from Feng et al.'s 2019 study; images in box (b) were from Feng et al.'s 2020 study. Images in box (c) formed the train group used for our models; images in box (d) formed the task group. Images in box (e) were removed because these two images contained both hair structure and BCC; images in box (f) were removed for containing abnormal hair structures. The intersections of the colored circles indicate that more than one structure is present in one image.
Fig. 3 Training process details. (a) Training and validation loss and accuracy for the hair structure model. (b) ROC curve for the hair structure model on the test set. (c) Training and validation loss and accuracy for the epidermis model. (d) ROC curve for the epidermis model on the test set. The red circles in (b) and (d) are the operating points corresponding to the thresholds selected for use in subsequent model validation on the task group data.
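The operating points in panels (b) and (d) correspond to thresholds chosen from the test-set ROC curves. A hedged sketch of one way to pick such a point (using scikit-learn; the 0.95 specificity floor is an assumption for illustration, not the paper's criterion):

```python
import numpy as np
from sklearn.metrics import roc_curve

def pick_operating_point(y_true, y_score, min_specificity=0.95):
    """Pick the threshold with the highest sensitivity among ROC points
    meeting a specificity floor. The 0.95 floor is illustrative only."""
    fpr, tpr, thr = roc_curve(y_true, y_score)
    ok = (1.0 - fpr) >= min_specificity   # roc_curve always includes fpr=0,
    idx = np.flatnonzero(ok)[np.argmax(tpr[ok])]  # so `ok` is never empty
    return thr[idx], tpr[idx], 1.0 - fpr[idx]
```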
Fig. 4 (a) Illustration of the RCM images within the task group that were identified as epidermis by the epidermis model and as hair structures by the hair structure model. (b) Changes in the confusion matrix for the Raman analysis after applying the models to identify epidermis and hair structures. The numbers in the confusion matrix are counts of spectra.
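A small sketch of the confusion-matrix update behind panel (b): Raman false positives whose RCM images were identified as normal structures are reassigned to true negatives, raising specificity while leaving sensitivity untouched. The counts below are placeholders, not the study's results.

```python
def updated_specificity(tp, fp, tn, fn, n_reassigned):
    """Move n_reassigned Raman false positives (RCM-identified normal
    structures) to true negatives and recompute specificity.
    Sensitivity tp / (tp + fn) is unchanged by this step."""
    fp -= n_reassigned
    tn += n_reassigned
    return tn / (tn + fp)

# Placeholder counts for illustration only (not the paper's numbers):
print(updated_specificity(tp=80, fp=40, tn=60, fn=10, n_reassigned=25))
```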