| Literature DB >> 35958812 |
Abstract
Sensed data from infant sports and training programs are useful for analyzing infants' health conditions and forecasting disorders or abnormalities. Coupled with artificial intelligence and sophisticated healthcare technologies, the sensed information is processed to provide error-free predictions of infant diseases and disorders. Non-congruent sensed data impair the forecast because of errors between consecutive training iterations. This problem is addressed using the proposed perceptible error segregation technique based on deep learning (PEST-DL). The training process is halted between two consecutive error-generating iterations until a similarity verification based on infant history is performed. The similarity output identifies errors caused by mismatching data observations, for which data augmentation is performed. The first perceptible error is mitigated by training the learning paradigm with all available infant history data, which prevents prediction lag and data omissions due to discrete availability. The learning is then retrained from the identified error using the precise disorder/abnormality data previously detected. The first and consecutive training data therefore segregate error instances from the actual training iterations, improving prediction accuracy and precision with controlled error and time complexity.
Entities:
Mesh:
Year: 2022 PMID: 35958812 PMCID: PMC9357799 DOI: 10.1155/2022/4438251
Source DB: PubMed Journal: Biomed Res Int Impact factor: 3.246
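The abstract's core loop (halt training when consecutive iterations disagree, verify the offending observation against infant history, then either augment on a mismatch or segregate the error instance) can be illustrated with a minimal sketch. This is a conceptual toy, not the paper's implementation: the `similarity` scoring, the placeholder per-iteration error, and the 0.2 error-jump threshold are all hypothetical stand-ins.

```python
def similarity(history, observation):
    """Toy similarity score against stored history (hypothetical stand-in
    for the paper's infant-history-based similarity verification)."""
    mean = sum(history) / len(history)
    return 1.0 - min(1.0, abs(observation - mean))

def train_with_error_segregation(data, history, sim_threshold=0.6):
    """Sketch of PEST-DL's flow as described in the abstract: when two
    consecutive iterations produce diverging errors, training pauses for a
    similarity verification; mismatching observations are augmented into
    the history, other error instances are segregated from training."""
    accepted, augmented, segregated = [], [], []
    prev_error = None
    for x in data:
        error = abs(x - 0.5)  # placeholder per-iteration training error
        if prev_error is not None and abs(error - prev_error) > 0.2:
            # training "halted" between the two error-generating iterations
            if similarity(history, x) < sim_threshold:
                augmented.append(x)   # mismatching observation -> augment
                history.append(x)
            else:
                segregated.append(x)  # perceptible error -> segregate
        else:
            accepted.append(x)        # congruent with the previous iteration
        prev_error = error
    return accepted, augmented, segregated
```

Segregated instances are excluded from the actual training iterations, which is what the abstract credits for the controlled error and time complexity.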
Figure 1. Proposed PEST-DL.
Figure 2. Training and iteration required data analysis.
Figure 3. Error identification from first and consecutive learning iterations.
Figure 4. Dataset representation.
Figure 5. Upper and lower values for S and s under varying iterations.
Variations 1 and 2 for varying iterations.
| Iteration | Variation 1 |  | Variation 2 |  |
|---|---|---|---|---|
| 100 | 0.353 | 0.399 | 0.0424 | 0.02738 |
| 200 | 0.41 | 0.631 | 0.3598 | 0.2347 |
| 300 | 0.322 | 0.512 | 0.3158 | 0.01985 |
| 400 | 0.312 | 0.451 | 0.02897 | 0.0158 |
| 500 | 0.256 | 0.652 | 0.02658 | 0.00986 |
| 600 | 0.24 | 0.841 | 0.01708 | 0.00639 |
Figure 6. T accuracy over the varying iterations.
Figure 7. Accuracy analysis.
Figure 8. Precision analysis.
Figure 9. Error analysis.
Figure 10. Time complexity analysis.
Comparative analysis results for the varying S interval.
| Metrics | TL-CAR | MLRA+DT | AICF | PEST-DL | Inference |
|---|---|---|---|---|---|
| Accuracy (%) | 78.85 | 85.61 | 89.77 | 93.986 | 9.24% higher |
| Precision | 0.817 | 0.864 | 0.917 | 0.9427 | 7.67% higher |
| Error | 0.21 | 0.152 | 0.129 | 0.0815 | 8.22% lower |
| Time (s) | 0.259 | 0.192 | 0.132 | 0.0814 | 9.7% lower |
Comparative analysis results for the varying mismatching rate.
| Metrics | TL-CAR | MLRA+DT | AICF | PEST-DL | Inference |
|---|---|---|---|---|---|
| Accuracy (%) | 73.16 | 79.73 | 82.64 | 86.06 | 7.55% higher |
| Precision | 0.774 | 0.801 | 0.839 | 0.8913 | 8.66% higher |
| Error | 0.203 | 0.159 | 0.117 | 0.0767 | 8.3% lower |
| Time (s) | 0.249 | 0.192 | 0.126 | 0.0818 | 9.45% lower |