Issei Shinohara, Atsuyuki Inui, Yutaka Mifune, Hanako Nishimoto, Kohei Yamaura, Shintaro Mukohara, Tomoya Yoshikawa, Tatsuo Kato, Takahiro Furukawa, Yuichi Hoshino, Takehiko Matsushita, Ryosuke Kuroda.
Abstract
Although electromyography is the routine diagnostic method for cubital tunnel syndrome (CuTS), imaging diagnosis by measuring the cross-sectional area (CSA) with ultrasonography (US) has also been attempted in recent years. In this study, deep learning (DL), an artificial intelligence (AI) method, was applied to US images, and its diagnostic performance for detecting CuTS was investigated. Elbow images of 30 healthy volunteers and 30 patients diagnosed with CuTS were used. Three thousand US images were prepared for each group to visualize the short axis of the ulnar nerve. Transfer learning was performed on 5000 randomly selected training images using three pre-trained models, and the remaining images were used for testing. Each model was evaluated by analyzing a confusion matrix and the area under the receiver operating characteristic curve. Occlusion sensitivity and locally interpretable model-agnostic explanations (LIME) were used to visualize the features deemed important by the AI. The highest score had an accuracy of 0.90, a precision of 0.86, a recall of 1.00, and an F-measure of 0.92. Visualization results show that the DL models focused on the epineurium of the ulnar nerve and the surrounding soft tissue. The proposed technique enables the accurate prediction of CuTS without the need to measure CSA.
Keywords: artificial intelligence; cubital tunnel syndrome; deep learning; ulnar nerve; ultrasonography
Year: 2022 PMID: 35328185 PMCID: PMC8947597 DOI: 10.3390/diagnostics12030632
Source DB: PubMed Journal: Diagnostics (Basel) ISSN: 2075-4418
Figure 1. (a) US probe placed on the medial epicondyle to visualize the ulnar nerve; (b) short-axis image of the ulnar nerve (red arrows) at the level of the medial epicondyle.
Figure 2. Flowchart of the proposed framework.
Figure 3. Images were randomly extracted by the AI to be used as training data (light blue for controls, orange for CuTS patients).
Figure 4. Block diagram of ResNet-50.
Figure 5. Block diagram of MobileNet_v2.
Figure 6. Block diagram of EfficientNet.
Best parameters of each training model.
| Parameter | ResNet-50 | MobileNet_v2 | EfficientNet |
|---|---|---|---|
| Optimizer | Adam * | Adam | Adam |
| Mini-batch size | 20 | 10 | 10 |
| Epochs | 1000 | 700 | 1000 |
| Learning rate | 0.0001 | 0.0001 | 0.0001 |
* Adam: adaptive moment estimation.
Figure 7. (a) A confusion matrix is a table of the four combinations of predicted and actual values for the presence or absence of disease; (b) the diagnostic accuracy of each learning model is calculated from the confusion matrix created using the testing data.
Figure 8. The area under the curve (AUC), based on the receiver operating characteristic (ROC) curve, was high for all learning models.
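The AUC reported above can be understood as the probability that a randomly chosen CuTS image receives a higher classifier score than a randomly chosen control image. A minimal sketch of that computation follows; the labels and scores below are hypothetical illustrations, not the study's data.

```python
def roc_auc(labels, scores):
    """AUC via pairwise ranking (equivalent to the Mann-Whitney U statistic)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0      # positive ranked above negative
            elif p == n:
                wins += 0.5      # ties count half
    return wins / (len(pos) * len(neg))

# Hypothetical scores: CuTS images (label 1) vs. controls (label 0).
labels = [1, 1, 1, 1, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.7, 0.4, 0.6, 0.3, 0.2, 0.1]
print(roc_auc(labels, scores))  # 0.9375
```

An AUC near 1.0, as in Figure 8, means almost every diseased image outranks every control image.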
Diagnostic accuracy of each learning model was calculated from the confusion matrix created from the testing data. The best accuracy score was 0.90, in ResNet-50 and MobileNet_v2; the best precision was 0.86, in ResNet-50 and MobileNet_v2; recall was 1.00 in ResNet-50 and EfficientNet and 0.998 in MobileNet_v2; and the best F-measure was 0.92, in ResNet-50 and MobileNet_v2.
| Network | Accuracy | Precision | Recall | Specificity | F-Measure |
|---|---|---|---|---|---|
| ResNet-50 | 0.904 | 0.859 | 1.00 | 0.774 | 0.924 |
| MobileNet_v2 | 0.904 | 0.859 | 0.998 | 0.776 | 0.923 |
| EfficientNet | 0.880 | 0.821 | 1.00 | 0.732 | 0.902 |
95% confidence interval.
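The metrics in the table above all derive from the four confusion-matrix counts. A short sketch of those formulas follows; the counts (TP, FP, FN, TN) are hypothetical, not the study's actual test-set counts.

```python
def metrics(tp, fp, fn, tn):
    """Standard diagnostic metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)              # sensitivity
    specificity = tn / (tn + fp)
    f_measure = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, specificity, f_measure

# Hypothetical counts for a 100-image test set.
acc, prec, rec, spec, f1 = metrics(tp=50, fp=8, fn=0, tn=42)
print(round(acc, 3), round(prec, 3), round(rec, 3), round(spec, 3), round(f1, 3))
# 0.92 0.862 1.0 0.84 0.926
```

Note how a recall of 1.0 (no false negatives) coexists with a lower specificity, the same pattern as in the table: the models miss no CuTS cases but mislabel some controls.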
Figure 9Confusion matrix of each learning model.
Figure 10. Visualization of the region of interest using occlusion sensitivity and LIME. The learning models focused on the neural interior and perineural tissues. The red circle marks the cross section of the ulnar nerve in the original image. The AI focused on hyperechoic changes in the epineurium of the ulnar nerve and hypoechoic changes in the nerve interior and surrounding tissue.
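Occlusion sensitivity, one of the two visualization methods above, works by sliding an occluding patch over the image, re-scoring each occluded copy, and mapping how much the prediction drops. A minimal sketch follows; the toy mean-intensity `score` function is a stand-in for the trained networks, which are not reproduced here.

```python
def occlusion_map(image, score_fn, patch=2):
    """Heatmap of score drops when each patch-sized region is grayed out."""
    h, w = len(image), len(image[0])
    base = score_fn(image)
    heatmap = []
    for y in range(0, h, patch):
        row = []
        for x in range(0, w, patch):
            occluded = [r[:] for r in image]          # copy the image
            for yy in range(y, min(y + patch, h)):
                for xx in range(x, min(x + patch, w)):
                    occluded[yy][xx] = 0.0            # gray out the patch
            row.append(base - score_fn(occluded))     # big drop = important region
        heatmap.append(row)
    return heatmap

# Toy score: mean intensity (a real model would output a CuTS probability).
score = lambda img: sum(map(sum, img)) / (len(img) * len(img[0]))
img = [[1.0] * 4 for _ in range(4)]
hm = occlusion_map(img, score)
print(len(hm), len(hm[0]))  # 2 2
```

Regions whose occlusion causes the largest score drop are the ones the model relies on; in the study these coincided with the epineurium and perineural tissue.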