Chun Hon Lau, Ken Hung-On Yu, Tsz Fung Yip, Luke Yik Fung Luk, Abraham Ka Chung Wai, Tin-Yan Sit, Janet Yuen-Ha Wong, Joshua Wing Kei Ho.
Abstract
The management of chronic wounds in the elderly, such as pressure injury (also known as bedsore or pressure ulcer), is increasingly important in an ageing population. Accurate classification of the stage of a pressure injury is important for wound care planning. Nonetheless, the expertise required for staging is often not available in a residential care home setting. Artificial intelligence (AI)-based computer vision techniques have opened up opportunities to harness the built-in camera in modern smartphones to support pressure injury staging by nursing home carers. In this paper, we summarise the recent development of smartphone- or tablet-based applications for wound assessment. Furthermore, we present a new smartphone application (app) that performs real-time detection and staging classification of pressure injury wounds using a deep learning-based object detection system, YOLOv4. On our validation set of 144 photos, our app obtained an overall prediction accuracy of 63.2%. The per-class prediction specificity is generally high (85.1%-100%), but the sensitivity is variable: 73.3% (stage 1 vs. others), 37% (stage 2 vs. others), 76.7% (stage 3 vs. others), 70% (stage 4 vs. others), and 55.6% (unstageable vs. others). On another independent test set, 8 out of 10 images were predicted correctly by the YOLOv4 model. When deployed in a real-life setting with two different ambient brightness levels and three different Android phone models, the prediction accuracy on the 10 test images ranged from 80% to 90%, which highlights the importance of evaluating mobile health (mHealth) applications in a simulated real-life setting. This study details the development and evaluation process and demonstrates the feasibility of applying such a real-time staging app in wound care management.
Keywords: artificial intelligence; bedsore; deep learning; digital health; mHealth; object detection; pressure injury; wound assessment
Year: 2022 PMID: 36212608 PMCID: PMC9541137 DOI: 10.3389/fmedt.2022.905074
Source DB: PubMed Journal: Front Med Technol ISSN: 2673-3129
Summary of developed smartphone- or tablet-based wound assessment apps.
| Name | Academic publication | Main features | Potential limitations | Platform | Device | Availability |
|---|---|---|---|---|---|---|
| Garcia-Zapirain et al. | Garcia-Zapirain et al., 2018 | Pressure injury (PI) image decomposition and segmentation | Segmentation accuracy and processing time can be improved | Android | Tablet | Not found online or in any app store |
| Zahia et al. | Zahia et al., 2020 | Automatic PI image segmentation and size and depth measurement based on CNN | Requires Structure Sensor attached to iPad | iOS | Tablet | Not found online or in any app store |
| FootSnap | Yap et al., 2018 | Capture standardised images of the plantar surface of diabetic feet (2018); localisation of wounds in diabetic foot ulcer images (2019); cloud-based framework for storage (2022) | Lacks the other key features (e.g., segmentation, tissue classification, staging) | Android or iOS | Smartphone or Tablet | Not found online or in any app store |
| MOWA | Kositzke et al., 2018 | Tissue classification of wound | Method and accuracy unknown; segmentation done manually | Android or iOS | Smartphone or Tablet | Paid |
| Fraiwan et al. | Fraiwan et al., 2018 | Early detection of diabetic foot ulcers using thermal imaging | Accuracy unknown; external thermal camera required | Android | Smartphone | Not found online or in any app store |
| SmartWoundCare | Friesen et al., 2013 | Electronic documentation of chronic wounds | No mathematical or machine learning-based features | Android | Smartphone or Tablet | App is freely available |
| imitoWound | n/a | Wound documentation and measurement | Measurement requires a paper-based calibration marker | Android or iOS | Smartphone or Tablet | App is freely available, but the sensor for measurement is not |
| KroniKare | n/a | Capture 3D image of wound; dashboard for wound documentation; wound complication detection; measure wound size; AI-based classification of seven tissue types | Method and accuracy unknown; external device attached to smartphone required | Android or iOS | Smartphone | Availability upon request |
| CARES4WOUNDS | Chan et al., 2022 | Wound size and depth measurement; AI-based tissue classification; prediction of infection likelihood; output of a wound score; wound documentation and monitoring; wound dressing recommendation based on treatment objectives | Method and accuracy unknown beyond wound size measurement | iOS | Smartphone | Availability upon request |
| Orciuoli et al. | Orciuoli et al., 2020 | Wound size measurement and AI-based staging classification | A limited set of training images (62 total) and evaluation only reported for the training set | Android | Smartphone | Not found online or in any app store |
Figure 1 Development of a smartphone app for wound assessment. (A) Collection and annotation of images of pressure injury wounds. (B) Application of our smartphone app to detect and classify PI. (C) A screenshot showing how a printed wound image is automatically detected and classified using the app.
Figure 2 Evaluation of the classification results for the PI stages: stage 1 (1), stage 2 (2), stage 3 (3), stage 4 (4), and unstageable (U) of the trained YOLOv4 model, by computing the confusion matrix and the corresponding sensitivity (TP/(TP + FN)) and specificity (TN/(TN + FP)) for (A) the validation set of 144 photos, and (B) the testing set of 10 photos. 1vO: Stage 1 vs. Others; 2vO: Stage 2 vs. Others; 3vO: Stage 3 vs. Others; 4vO: Stage 4 vs. Others; UvO: Unstageable vs. Others.
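The per-class metrics in Figure 2 reduce each stage to a binary "one vs. others" split of the multi-class confusion matrix. A minimal sketch of that computation, using an illustrative 5×5 matrix (not the paper's actual counts):

```python
# Per-class ("one vs. others") sensitivity and specificity from a
# multi-class confusion matrix, as described in the Figure 2 caption.

def one_vs_others_metrics(cm, k):
    """cm[i][j] = count of true class i predicted as class j; k = class index."""
    n = len(cm)
    tp = cm[k][k]
    fn = sum(cm[k][j] for j in range(n)) - tp   # true class k, predicted other
    fp = sum(cm[i][k] for i in range(n)) - tp   # other classes predicted as k
    tn = sum(cm[i][j] for i in range(n) for j in range(n)) - tp - fn - fp
    sensitivity = tp / (tp + fn) if tp + fn else float("nan")
    specificity = tn / (tn + fp) if tn + fp else float("nan")
    return sensitivity, specificity

# Illustrative matrix for stages 1-4 and Unstageable (rows = ground truth).
cm = [[11,  2,  1,  0,  1],
      [ 3, 10,  2,  1,  1],
      [ 1,  2, 23,  3,  1],
      [ 0,  1,  3, 21,  5],
      [ 1,  1,  2,  4, 10]]
sens_1vo, spec_1vo = one_vs_others_metrics(cm, 0)  # "1vO": stage 1 vs. others
```

The same function applied with k = 0…4 yields the five 1vO–UvO columns reported throughout the tables below.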
Figure 3 Matthews correlation coefficient (MCC) for (A) the validation set of 144 photos, and (B) the testing set of 10 photos. 1vO: Stage 1 vs. Others; 2vO: Stage 2 vs. Others; 3vO: Stage 3 vs. Others; 4vO: Stage 4 vs. Others; UvO: Unstageable vs. Others.
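For each binary "stage k vs. others" split, the MCC follows the standard definition MCC = (TP·TN − FP·FN)/√((TP+FP)(TP+FN)(TN+FP)(TN+FN)); the counts in this sketch are illustrative only:

```python
# Matthews correlation coefficient for one binary "stage k vs. others" split.
import math

def mcc(tp, tn, fp, fn):
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# e.g. a hypothetical "stage 1 vs. others" split of a 110-photo set
score = mcc(tp=11, tn=90, fp=5, fn=4)
```

MCC ranges from −1 (total disagreement) through 0 (chance level) to +1 (perfect prediction), which is why the label-shuffling control below scatters around zero.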
Accuracy and Matthews correlation coefficient (MCC) of the training set of 1,278 pressure ulcer images by 10-fold cross-validation.
| Fold | Accuracy (%) | MCC (%), 1vO | MCC (%), 2vO | MCC (%), 3vO | MCC (%), 4vO | MCC (%), UvO |
|---|---|---|---|---|---|---|
| 1 | 51.6 | 45.6 | 70 | 83.7 | 24.7 | 45.7 |
| 2 | 69.0 | 56.9 | 23.6 | 83.2 | 91.3 | 95.3 |
| 3 | 76.2 | 51.0 | 37.4 | 83.1 | 100 | 100 |
| 4 | 81.0 | 61.4 | 86.8 | 90.5 | 92.9 | 95.3 |
| 5 | 81.0 | 85.2 | 46.4 | 87.7 | 97.6 | 90.6 |
| 6 | 69.0 | 76.4 | 20.7 | 36.9 | 86.9 | 93.4 |
| 7 | 74.6 | 64.2 | 33.1 | 80.5 | 93.4 | 85.6 |
| 8 | 71.4 | 51.4 | 58.4 | 91.3 | 88.0 | 72.5 |
| 9 | 75.5 | 52.8 | 27.1 | 93.5 | 98.1 | 93.0 |
| 10 | 83.7 | 70.7 | 63.7 | 80.7 | 92.2 | 95.3 |
| Average | 73.3 | 61.56 | 46.72 | 81.11 | 86.51 | 86.67 |
| SD (bootstrap) | 2.734 | 3.682 | 6.744 | 4.845 | 6.754 | 4.895 |
| 95% confidence interval | [68.39,79.41] | [53.71,68.58] | [33.53,59.56] | [74.02,91.84] | [77.55,100] | [78.96,97.60] |
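The "SD (bootstrap)" and 95% confidence-interval rows can be obtained by resampling the ten per-fold scores with replacement; the exact resampling procedure is not specified in this record, so the following is one plausible sketch (percentile bootstrap, seed and repeat count chosen arbitrarily), applied to the accuracy column above:

```python
# Bootstrap SD and percentile 95% CI over the ten per-fold accuracies.
import random
import statistics

fold_acc = [51.6, 69.0, 76.2, 81.0, 81.0, 69.0, 74.6, 71.4, 75.5, 83.7]

rng = random.Random(0)
boot_means = []
for _ in range(10_000):
    sample = [rng.choice(fold_acc) for _ in fold_acc]  # resample with replacement
    boot_means.append(statistics.fmean(sample))

boot_means.sort()
sd = statistics.stdev(boot_means)                      # bootstrap SD of the mean
ci_low = boot_means[int(0.025 * len(boot_means))]      # 2.5th percentile
ci_high = boot_means[int(0.975 * len(boot_means)) - 1] # 97.5th percentile
```

With these (assumed) settings the sketch reproduces values close to the table's accuracy row (SD ≈ 2.7, CI ≈ [68, 79]).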
Accuracy and Matthews correlation coefficient (MCC) on the validation set after random label shuffling of the training set of 1,278 pressure ulcer images, repeated ten times.
| Permutation | Accuracy (%) | MCC (%), 1vO | MCC (%), 2vO | MCC (%), 3vO | MCC (%), 4vO | MCC (%), UvO |
|---|---|---|---|---|---|---|
| 1 | 17.4 | −6.2 | −1.1 | 1.7 | 17.4 | 21.8 |
| 2 | 23.6 | −18.1 | 11.7 | 11.0 | 35.3 | 37.3 |
| 3 | 29.9 | −18.8 | 27.0 | 35.3 | 30.5 | 20.0 |
| 4 | 16.0 | −8.7 | 31.7 | −8.7 | 17.8 | 22.5 |
| 5 | 18.1 | 5.5 | 9.6 | −19.4 | 9.2 | 19.2 |
| 6 | 17.4 | −10.7 | −2.6 | 2.4 | 40.9 | 22.5 |
| 7 | 11.8 | −18.8 | 22.8 | −10.2 | 4.6 | −17.9 |
| 8 | 20.8 | 9.3 | 7.7 | 4.9 | 35.3 | −4.0 |
| 9 | 16.7 | −16.2 | 14.0 | 23.1 | 11.0 | 14.3 |
| 10 | 21.5 | 12.1 | −7.4 | 17.6 | 8.1 | 25.5 |
| Average | 19.32 | −7.06 | 11.34 | 5.77 | 21.01 | 16.12 |
| SD (bootstrap) | 1.470 | 3.636 | 3.719 | 4.987 | 3.874 | 4.619 |
| 95% confidence interval | [16.26,21.94] | [−14.18,0.05] | [4.06,18.71] | [−4.65,14.64] | [13.17,28.52] | [8.37,25.51] |
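The table above is a permutation (label-shuffling) control: training labels are randomly permuted before retraining, so any apparent skill must come from chance, and the resulting accuracies and MCCs scatter near chance level. A minimal sketch of the shuffling step, with tiny synthetic labels standing in for the 1,278-image training set:

```python
# Label-shuffling control: permute stage labels before retraining.
import random

def shuffled_labels(labels, seed):
    """Return a reproducible random permutation of the label list."""
    rng = random.Random(seed)
    permuted = labels[:]
    rng.shuffle(permuted)
    return permuted

stages = ["1", "2", "3", "4", "U"]
labels = [stages[i % 5] for i in range(20)]  # toy stand-in labels
for permutation in range(1, 11):             # ten repeats, as in the table
    train_labels = shuffled_labels(labels, seed=permutation)
    # ...retrain the model on (images, train_labels) and evaluate as usual...
```

Seeding per permutation keeps each of the ten shuffles reproducible while leaving the label multiset, and hence the class balance, unchanged.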
Figure 4 Evaluation of the accuracy of the implemented model in an Android smartphone app: detection and classification of the printed wound images in the 10 test-set photos using the smartphone app on three different Android phones at the normal (125 lux) brightness level: (A) MIX2S, (B) Samsung Note 10+, and (C) Samsung S20. 1vO: Stage 1 vs. Others; 2vO: Stage 2 vs. Others; 3vO: Stage 3 vs. Others; 4vO: Stage 4 vs. Others; UvO: Unstageable vs. Others.
Per-class specificity and sensitivity of the implemented model on three different Android phones at two ambient brightness levels, on the test set images.
| Phone model (ambient brightness level) | Sens. (%), 1vO | Sens. (%), 2vO | Sens. (%), 3vO | Sens. (%), 4vO | Sens. (%), UvO | Spec. (%), 1vO | Spec. (%), 2vO | Spec. (%), 3vO | Spec. (%), 4vO | Spec. (%), UvO |
|---|---|---|---|---|---|---|---|---|---|---|
| MIX2S (normal) | – | – | 33.3 | 100 | 100 | – | 90 | 100 | 100 | 87.5 |
| MIX2S (dim) | – | – | 33.3 | 100 | 100 | – | 90 | 100 | 100 | 87.5 |
| Samsung Note 10+ (normal) | – | – | 66.7 | 100 | 100 | – | – | 100 | 100 | 87.5 |
| Samsung Note 10+ (dim) | – | – | 33.3 | 100 | 100 | – | 90 | 100 | 100 | 87.5 |
| Samsung S20 (normal) | – | – | 33.3 | 100 | 100 | – | 90 | 100 | 100 | 87.5 |
| Samsung S20 (dim) | – | – | 33.3 | 100 | 100 | – | 90 | 100 | 100 | 87.5 |
Normal is a brightness of 125 lux and dim is a brightness of 58 lux.