Łukasz Neumann, Robert Nowak, Jacek Stępień, Ewelina Chmielewska, Patryk Pankiewicz, Radosław Solan, Karina Jahnz-Różyk.
Abstract
In this work we present an automated approach to allergy recognition based on neural networks. Classifying allergic reactions is an important task in modern medicine. Currently it is performed by human experts, which has obvious drawbacks, such as subjectivity in the process. We propose an automated method to classify skin-prick allergic reactions using correlated visible-spectrum and thermal images of a patient's forearm. We test our model on a real-life dataset of 100 patients (1584 separate allergen injections). Our solution yields good results: 0.98 ROC AUC, 0.97 AP, and 93.6% accuracy. Additionally, we present a method to segment separate allergen injection areas from the image of the patient's forearm (multiple injections per forearm). The proposed approach can potentially reduce examination time while taking into account more information than human staff can.
Year: 2022 PMID: 35173225 PMCID: PMC8850609 DOI: 10.1038/s41598-022-06460-9
Source DB: PubMed Journal: Sci Rep ISSN: 2045-2322 Impact factor: 4.379
Figure 1. Photos of a patient's forearm before (a, c, e) and after (b, d, f) allergen application. Colorbars in (e, f) show the temperature scale in degrees Celsius.
Figure 2. Normalized intersection area between manual and predicted segments.
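The metric in Figure 2 compares how well predicted segments overlap the manually drawn ones. A minimal sketch of such a normalized-overlap computation for two binary masks represented as coordinate sets (normalizing by the manual segment's area is an assumption here; the excerpt does not give the exact formula):

```python
def normalized_intersection(manual, predicted):
    """Overlap of two binary masks, normalized by the manual mask area.

    `manual` and `predicted` are sets of (row, col) pixel coordinates;
    the normalization choice is an assumption, not taken from the paper.
    """
    if not manual:
        return 0.0
    return len(manual & predicted) / len(manual)

# Example: a 4x4 manual segment vs. a prediction shifted by one pixel.
manual = {(r, c) for r in range(4) for c in range(4)}
predicted = {(r, c) for r in range(1, 5) for c in range(1, 5)}
print(normalized_intersection(manual, predicted))  # 9/16 = 0.5625
```

A value of 1.0 means the prediction fully covers the manual segment; values near 0 indicate the segmenter missed the injection area.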
Table 1. Output channels for the convolutional part of the network.
| Output channels | Pooling after convolution |
|---|---|
| 32 | Yes |
| 64 | Yes |
| 64 | No |
| 128 | No |
| 128 | Yes |
| 256 | No |
| 256 | Yes |
| 256 | No |
| 256 | Yes |
| 256 | No |
| 256 | Yes |
All layers (including pooling) use kernels of the same size. All convolutional layers are followed by batch normalization and a LeakyReLU activation.
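The pooling column of the table determines the network's total spatial downsampling. A minimal sketch of that schedule (assuming each pooling halves the spatial resolution, i.e. stride-2 pooling, and 'same'-padded convolutions; neither detail is stated in this excerpt):

```python
# (output_channels, pooling_after_convolution) transcribed from Table 1.
layers = [
    (32, True), (64, True), (64, False), (128, False), (128, True),
    (256, False), (256, True), (256, False), (256, True), (256, False),
    (256, True),
]

def output_side(side, layers):
    """Spatial side length after the conv stack, assuming 'same'-padded
    convolutions and stride-2 pooling (both are assumptions)."""
    for _, pool in layers:
        if pool:
            side //= 2
    return side

# Six pooling stages give a total downsampling factor of 2**6 = 64.
print(output_side(256, layers))  # a 256-pixel side shrinks to 4
```

So a hypothetical 256x256 input crop would leave a 4x4x256 feature map for the classification head; the actual input resolution is not given in this excerpt.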
Figure 4. Loss and accuracy of the proposed model based on a single split in the cross-validation procedure.
Figure 3. Sample segmentation results for problematic images. The first row shows the result for a hairy forearm, while the second row shows the result for a different marker color (blue).
Table 2. Validation results for different types of segmentation and input data.
| Visible input | Thermal input | Segmentation | Validation | ROC AUC | PRC AP | Accuracy (%) |
|---|---|---|---|---|---|---|
| ✓ | ✓ | U-Net | Leave-one-out | 0.975 | 0.956 | **93.50** |
| ✓ | ✓ | Manual | 10-fold | 0.970 | 0.952 | 92.79 |
| ✓ | ✓ | U-Net | 10-fold | **0.978** | **0.961** | 93.24 |
| ✓ | ✗ | U-Net | 10-fold | 0.940 | 0.880 | 89.71 |
| ✗ | ✓ | U-Net | 10-fold | | | |
The best values are in bold.
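The ROC AUC values reported above equal the probability that a randomly chosen positive reaction receives a higher classifier score than a randomly chosen negative one. A small illustration of that rank-based (Mann-Whitney) formulation, not the paper's code, with tie handling omitted for brevity:

```python
def roc_auc(labels, scores):
    """ROC AUC via the rank-sum (Mann-Whitney U) formulation.

    `labels` are 0/1 ground-truth classes, `scores` the classifier
    outputs. Ties in `scores` are ignored in this simplified sketch.
    """
    pairs = sorted(zip(scores, labels))
    n_pos = sum(labels)
    n_neg = len(labels) - n_pos
    # Sum of 1-based ranks of the positive examples.
    rank_sum = sum(i + 1 for i, (_, y) in enumerate(pairs) if y == 1)
    return (rank_sum - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

labels = [0, 0, 1, 0, 1, 1]
scores = [0.1, 0.4, 0.35, 0.8, 0.7, 0.9]
print(roc_auc(labels, scores))  # 6 of 9 pos/neg pairs ranked correctly
```

An AUC of 0.978, as in the best row of Table 2, therefore means the model ranks a positive injection above a negative one about 98% of the time.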
Figure 5. Both plots are calculated on the results of the cross-validation procedure for the proposed model, which was built only on thermal images (last row in Table 2).
Table 3. Confusion matrix for the ten-fold cross-validation; classifier built only on thermal images (last row in Table 2).
| True diagnosis | Predicted negative | Predicted positive | Total |
|---|---|---|---|
| Negative | 1042 | 61 | 1103 |
| Positive | 41 | 440 | 481 |
| Total | 1083 | 501 | 1584 |
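The summary metrics follow directly from the confusion matrix counts. A short check (cell values taken from the table; the metric names are standard, not the paper's notation):

```python
# Confusion matrix counts from the table (rows: true, cols: predicted).
tn, fp = 1042, 61   # true negatives, false positives
fn, tp = 41, 440    # false negatives, true positives

total = tn + fp + fn + tp
accuracy = (tn + tp) / total        # fraction of correct diagnoses
sensitivity = tp / (tp + fn)        # recall on positive reactions
specificity = tn / (tn + fp)        # recall on negative reactions
precision = tp / (tp + fp)          # how often a "positive" call is right

print(f"accuracy={accuracy:.3f} sensitivity={sensitivity:.3f} "
      f"specificity={specificity:.3f} precision={precision:.3f}")
# accuracy ≈ 0.936, consistent with the 93.6% reported in the abstract
```

Note the asymmetry: the classifier misses 41 of 481 positive reactions (sensitivity ≈ 0.91) while keeping specificity above 0.94.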