Padraig Davidson¹, Peter Düking², Christoph Zinner³, Billy Sperlich², Andreas Hotho¹.
Abstract
The rating of perceived exertion (RPE) is a subjective load marker and may assist in individualizing training prescription, particularly by adjusting running intensity. Unfortunately, RPE has shortcomings (e.g., underreporting) and cannot be monitored continuously and automatically throughout a training session. In this pilot study, we aimed to predict two classes of RPE (RPE ≤ 15 "Somewhat hard to hard" vs. RPE > 15 "Hard to very hard" on Borg's 6-20 scale) in runners by analyzing data recorded by a commercially available smartwatch with machine learning algorithms. Twelve trained and untrained runners performed long continuous runs at a constant self-selected pace to volitional exhaustion. Untrained runners reported their RPE each kilometer, whereas trained runners reported every five kilometers. The kinetics of heart rate, step cadence, and running velocity were recorded continuously (1 Hz) with a commercially available smartwatch (Polar V800). We trained different machine learning algorithms to estimate the two classes of RPE based on the time-series sensor data derived from the smartwatch. Predictions were analyzed in different settings: accuracy overall and per runner type, i.e., accuracy for trained and untrained runners independently. We achieved top accuracies of 84.8% for the whole dataset, 81.82% for the trained runners, and 86.08% for the untrained runners. We predict two classes of RPE with high accuracy using machine learning and smartwatch data. This approach might aid in individualizing training prescriptions.
Keywords: artificial intelligence; endurance; exercise intensity; precision training; prediction; wearable
Year: 2020 PMID: 32380738 PMCID: PMC7248997 DOI: 10.3390/s20092637
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
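The prediction target described in the abstract is a simple threshold on Borg's 6-20 scale. A minimal sketch of that binarization in Python (the function name is illustrative, not taken from the paper):

```python
# Binarize a Borg 6-20 RPE value into the two classes used in the study:
# RPE <= 15 -> "Somewhat hard to hard", RPE > 15 -> "Hard to very hard".
def rpe_class(rpe: int) -> str:
    if not 6 <= rpe <= 20:
        raise ValueError("Borg RPE must lie on the 6-20 scale")
    return "Somewhat hard to hard" if rpe <= 15 else "Hard to very hard"

print(rpe_class(13))  # Somewhat hard to hard
print(rpe_class(17))  # Hard to very hard
```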
Dataset statistics.
| Statistic | Untrained | Trained | Overall |
|---|---|---|---|
| Number of RPE values | 79 | 33 | 112 |
| Number of “Somewhat hard to hard” | 48 | 18 | 66 |
| Number of “Hard to very hard” | 31 | 15 | 46 |
| Average number of rounds | 6.6 | 5.5 | 6.1 |
| Covered Distance (km) | 79 | 165 | 244 |
| Total Running Time (min) | 75.2 | 110.4 | |
| Time Between Inquiries (min) | 6.1 | 24.3 | |
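The class counts in the dataset table determine the majority-vote baseline used later in the results: always predicting the more frequent class, "Somewhat hard to hard". A quick check of that baseline from the overall column:

```python
# Majority-vote baseline from the overall class counts in the dataset table.
counts = {"Somewhat hard to hard": 66, "Hard to very hard": 46}
total = sum(counts.values())             # 112 RPE values
majority_acc = max(counts.values()) / total
print(f"{100 * majority_acc:.1f}%")      # 58.9%, matching the results table
```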
Figure 1: Map view of the tracks for both study groups. (a) Map view of the 2 km track for the untrained runners. (b) Map view of the 5 km track for the trained runners. Map views were created using © OpenStreetMap contributors.
Models and their parameters, alongside the specific ranges used within the hyperparameter-optimization steps.
| Model | Parameter | Range |
|---|---|---|
| WEASEL+MUSE | MinF | [1, 15] |
| | MaxF | [MinF, 20] |
| | MaxS | [4, 20] |
| DTW+KNN | k | [1, 20] |
| | weights | [uniform, distance] |
| | metric | [dtw, euclidean, sqeuclidean, cityblock] |
| SVM | C | |
| | degree | [1, 20] |
| | decision function | [ovo, ovr] |
| | kernel | [gak, linear, poly, rbf, sigmoid] |
| GRU | units | [1, 128] |
| | recurrent dropout | (0, 1) |
| | linear dropout | (0, 1) |
| CNN | filters | [1, 256] |
| | kernel size | [1, 16] |
| | strides | [1, kernel size] |
| | activation | [relu, selu] |
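The DTW+KNN entry in the table combines a dynamic time warping distance with a k-nearest-neighbour vote (parameters k, weights, metric above). A minimal pure-Python sketch of the idea on toy univariate sequences — an illustration only, not the authors' implementation, which ran on the multivariate smartwatch streams:

```python
# Dynamic time warping distance plus a k-nearest-neighbour majority vote.
def dtw(a, b):
    """Classic O(len(a)*len(b)) DTW with absolute-difference local cost."""
    n, m = len(a), len(b)
    inf = float("inf")
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

def knn_predict(train, query, k=1):
    """train: list of (sequence, label); majority label of the k DTW-nearest sequences."""
    neighbours = sorted(train, key=lambda s: dtw(s[0], query))[:k]
    labels = [lab for _, lab in neighbours]
    return max(set(labels), key=labels.count)

train = [([1, 2, 3, 4], "easy"), ([4, 4, 5, 6], "hard")]
print(knn_predict(train, [1, 2, 2, 4]))  # easy
```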
Figure A1: Visual representation of the training and evaluation procedure.
Summary of the achieved classification results. All values were obtained using the weighted (by support) averaging scheme. ♣ marks significant difference to the majority vote. ♠ marks significant difference to SFA. The best results are printed in bold.
| Model | Accuracy (%) | Precision (%) | Recall (%) | F1 (%) |
|---|---|---|---|---|
| Majority Vote | 58.9 | 34.7 | 58.9 | 43.7 |
| Borg’s classification | 50.9 | 73.1 | 50.9 | 43.4 |
| WEASEL+MUSE (SFA) | 73.2 ♣ | 73.2 | 73.2 | 73.2 |
| SVM | 79.5 ♣ | 79.4 | 79.5 | 79.4 |
| DTW + KNN | 83.0 | 83.1 | 83.0 | 83.1 |
| GRU | 82.1 ♣ | 82.1 | 82.1 | 82.1 |
| CNN | 84.8 | 85.1 | 84.8 | |
Confusion matrix for the CNN classifier. Each row represents the correct label (i.e., ground truth), whereas the columns display the predictions made by the network.
| | “Somewhat hard to hard” | “Hard to very hard” | Recall (%) |
|---|---|---|---|
| “Somewhat hard to hard” | 56 | 10 | 84.8 |
| “Hard to very hard” | 7 | 39 | 84.8 |
| Precision (%) | 88.9 | 79.6 | |
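The overall scores can be recomputed directly from this confusion matrix using the support-weighted averaging named in the results caption:

```python
# Recompute the CNN's overall scores from its confusion matrix
# (rows = ground truth, columns = predictions), with support-weighted averaging.
cm = {"Somewhat hard to hard": {"Somewhat hard to hard": 56, "Hard to very hard": 10},
      "Hard to very hard":     {"Somewhat hard to hard": 7,  "Hard to very hard": 39}}
labels = list(cm)
support = {t: sum(cm[t].values()) for t in labels}              # 66 and 46
total = sum(support.values())                                   # 112
accuracy = sum(cm[t][t] for t in labels) / total
predicted = {p: sum(cm[t][p] for t in labels) for p in labels}  # column sums
precision = {p: cm[p][p] / predicted[p] for p in labels}
w_precision = sum(precision[t] * support[t] for t in labels) / total
print(f"accuracy {100*accuracy:.1f}%, weighted precision {100*w_precision:.1f}%")
```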
Summary of the achieved classification results, grouped by performance level of the runners. The nomenclature remains the same as in Table 2. Labels were taken from the overall evaluation procedure; the runner types were not trained separately. ♣ marks significant difference to the majority vote. ♠ marks significant difference to SFA. The best results are printed in bold.
| | Untrained Runners | | Trained Runners | |
|---|---|---|---|---|
| Model | Accuracy (%) | F1 (%) | Accuracy (%) | F1 (%) |
| Majority Vote | 60.8 | 45.9 | 54.5 | 38.5 |
| Borg’s classification | 41.8 | 27.4 | 72.7 | 72.0 |
| WEASEL+MUSE (SFA) | 72.2 | 72.0 | 75.8 | 75.8 |
| SVM | 82.3 ♣ | 82.2 | 72.7 | 72.8 |
| DTW + KNN | | | 75.8 | 75.8 |
| GRU | 83.5 ♣ | 83.4 | 78.8 | 78.8 |
| CNN | 86.1 | 85.9 | 81.8 | |
Confusion matrix for the CNN classifier in the recreational (untrained) runner setting. Each row represents the correct label (i.e., ground truth), whereas the columns display the predictions made by the network.
| Untrained Runners | | | |
|---|---|---|---|
| | “Somewhat hard to hard” | “Hard to very hard” | Recall (%) |
| “Somewhat hard to hard” | 44 | 4 | 91.7 |
| “Hard to very hard” | 7 | 24 | 77.4 |
| Precision (%) | 86.3 | 85.7 | |
Confusion matrix for the CNN classifier in the trained runner setting. Each row represents the correct label (i.e., ground truth), whereas the columns display the predictions made by the network.
| Trained Runners | | | |
|---|---|---|---|
| | “Somewhat hard to hard” | “Hard to very hard” | Recall (%) |
| “Somewhat hard to hard” | 12 | 6 | 66.7 |
| “Hard to very hard” | 0 | 15 | 100.0 |
| Precision (%) | 100.0 | 71.4 | |
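The per-group top accuracies quoted in the abstract (86.08% untrained, 81.82% trained) follow directly from the two CNN confusion matrices above:

```python
# Sanity-check the per-group CNN accuracies against the confusion matrices.
untrained = [[44, 4], [7, 24]]   # rows: ground truth, cols: prediction
trained   = [[12, 6], [0, 15]]

def accuracy(cm):
    correct = cm[0][0] + cm[1][1]
    total = sum(sum(row) for row in cm)
    return 100 * correct / total

print(f"untrained: {accuracy(untrained):.2f}%")  # 86.08%
print(f"trained:   {accuracy(trained):.2f}%")    # 81.82%
```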