Frank G Preston1, Yanda Meng1, Jamie Burgess2, Maryam Ferdousi3, Shazli Azmi3, Ioannis N Petropoulos4, Stephen Kaye1, Rayaz A Malik4, Yalin Zheng5,6, Uazman Alam7,8.
Abstract
AIMS/HYPOTHESIS: We aimed to develop an artificial intelligence (AI)-based deep learning algorithm (DLA), applying attribution methods without image segmentation to corneal confocal microscopy images, to accurately classify peripheral neuropathy (or its absence).
Keywords: Artificial intelligence; Convolutional neural network; Corneal confocal microscopy; Deep learning algorithm; Diabetic neuropathy; Image segmentation; Ophthalmic imaging; Small nerve fibres
Year: 2021 PMID: 34806115 PMCID: PMC8803718 DOI: 10.1007/s00125-021-05617-x
Source DB: PubMed Journal: Diabetologia ISSN: 0012-186X Impact factor: 10.122
Fig. 1 Flowchart of participant groups and clinical characteristics within HV participants, participants with no peripheral neuropathy and participants with peripheral neuropathy. Data are mean ± SD for age, diabetes duration, CNFD, CNBD, CNFL, VPT, SAmp, SNCV, PAmp and PNCV. Data are median (interquartile range) for NSP and NDS. People with confirmed peripheral neuropathy had greater neuropathic deficits with more signs (NDS) and symptoms (NSP), higher VPT and lower CNFD, CNFL, CNBD, SNCV, PNCV, SAmp and PAmp. People with peripheral neuropathy were older and those with diabetes had a longer duration of disease. CNBD, corneal nerve branch density; CNFD, corneal nerve fibre density; NDS, neuropathy disability score (score out of 10); NSP, neuropathy symptom profile (score out of 38); SAmp, sural nerve amplitude; SNCV, sural nerve conduction velocity; PAmp, peroneal nerve amplitude; PNCV, peroneal nerve conduction velocity; VPT, vibration perception threshold (score out of 50)
Fig. 2 Diagram of the modified ResNet-50 architecture. Each pink rectangle corresponds to a convolutional layer, with the filter size given within. Each purple rectangle corresponds to a pooling layer, either maximum pool or global average pool. Each green rectangle corresponds to a convolution block. Each blue rectangle corresponds to an identity block. Each black rectangle corresponds to a dense layer. Each red rectangle corresponds to a dropout layer (dropout = 0.6). Avg, average; Conv, convolutional; Max, maximum; ReLU, rectified linear unit
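The caption describes the modification to ResNet-50 as dense layers with dropout (rate 0.6) after global average pooling. As an illustrative sketch only (not the authors' code: the hidden width, weight names and the plain NumPy forward pass are assumptions), the three-class classification head could look like:

```python
import numpy as np

def classifier_head(features, w1, b1, w2, b2, drop_rate=0.6, training=False, rng=None):
    """Forward pass of a ResNet-50-style head: global average pool ->
    dense + ReLU -> dropout(0.6) -> dense -> softmax over 3 classes."""
    x = features.mean(axis=(0, 1))            # global average pooling: (H, W, C) -> (C,)
    x = np.maximum(0.0, x @ w1 + b1)          # dense layer with ReLU activation
    if training:                              # inverted dropout, applied only in training
        mask = (rng.random(x.shape) >= drop_rate) / (1.0 - drop_rate)
        x = x * mask
    logits = x @ w2 + b2                      # final dense layer: 3 classes (HV, PN-, PN+)
    e = np.exp(logits - logits.max())         # numerically stable softmax
    return e / e.sum()
```

At inference the dropout mask is skipped, so the same function serves both modes; the backbone feature map (here shaped `(H, W, 2048)` as in standard ResNet-50) is taken as given.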
Confusion matrix report from modified ResNet-50 in HV, PN− and PN+
| True class | Predicted HV | Predicted PN− | Predicted PN+ |
|---|---|---|---|
| HV | 15 | 0 | 0 |
| PN− | 2 | 11 | 0 |
| PN+ | 1 | 1 | 10 |
Classification report from modified ResNet-50 in HV, PN− and PN+
| Class | Recall (Sensitivity) | Precision | F1-score |
|---|---|---|---|
| HV | 1.0 (1.0, 1.0) | 0.83 (0.65, 1.0) | 0.91 (0.79, 1.0) |
| PN− | 0.85 (0.62, 1.0) | 0.92 (0.73, 1.0) | 0.88 (0.71, 1.0) |
| PN+ | 0.83 (0.58, 1.0) | 1.0 (1.0, 1.0) | 0.91 (0.74, 1.0) |
Note: 95% CIs are given in brackets
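The per-class point estimates above follow directly from the confusion matrix; a short check (class order HV, PN−, PN+ assumed throughout) reproduces them:

```python
import numpy as np

# Confusion matrix from the paper: rows = true class, columns = predicted class,
# in the order HV, PN-, PN+.
cm = np.array([[15,  0,  0],
               [ 2, 11,  0],
               [ 1,  1, 10]])

recall = cm.diagonal() / cm.sum(axis=1)      # sensitivity: correct / all true of that class
precision = cm.diagonal() / cm.sum(axis=0)   # correct / all predicted as that class
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two
```

Rounding to two decimals gives recall (1.0, 0.85, 0.83), precision (0.83, 0.92, 1.0) and F1 (0.91, 0.88, 0.91), matching the classification report (the bracketed 95% CIs cannot be recovered from the matrix alone).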
Fig. 3 Attribution map results from ResNet-50. Example images from correctly predicted HV (a, b), PN− (c, d) and PN+ (e, f). First row, original images; second row, Grad-CAM images; third row, Guided Grad-CAM images; and fourth row, occlusion sensitivity images
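Occlusion sensitivity, the attribution method in the fourth row, slides a masking patch over the input and records how much the predicted class score drops when each region is hidden; large drops mark regions the network relies on. A minimal sketch for a single-channel image (patch size, stride, fill value and the `score_fn` interface are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def occlusion_map(image, score_fn, patch=16, stride=8, fill=0.0):
    """Occlusion sensitivity: mask successive patches of `image` and record
    the drop in `score_fn` (the model's score for the predicted class)."""
    h, w = image.shape
    base = score_fn(image)                                   # unoccluded reference score
    rows = (h - patch) // stride + 1
    cols = (w - patch) // stride + 1
    heat = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            occluded = image.copy()
            occluded[i * stride:i * stride + patch,
                     j * stride:j * stride + patch] = fill   # hide one patch
            heat[i, j] = base - score_fn(occluded)           # large drop = important region
    return heat
```

The resulting heat map is coarser than the input by the stride factor and is typically upsampled for overlay, as in the figure's fourth row.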