| Literature DB >> 34066265 |
Jay-Shian Tan1, Behrouz Khabbaz Beheshti2, Tara Binnie1, Paul Davey1, J P Caneiro1, Peter Kent1, Anne Smith1, Peter O'Sullivan1, Amity Campbell1.
Abstract
Clinicians lack objective means of monitoring whether their knee osteoarthritis patients are improving outside the clinic (e.g., at home). Previous human activity recognition (HAR) models using wearable sensor data have been trained only on data from healthy people, and such models are typically imprecise for people with medical conditions that affect movement. HAR models designed for people with knee osteoarthritis have classified rehabilitation exercises, but not the clinically relevant activities of transitioning from a chair, negotiating stairs and walking, which are commonly monitored for improvement during therapy for this condition. It is therefore unknown whether a HAR model trained on data from people with knee osteoarthritis can accurately classify these three clinically relevant activities. We therefore collected inertial measurement unit (IMU) data from 18 participants with knee osteoarthritis and trained convolutional neural network (CNN) models to identify chair, stairs and walking activities, and their phases. Model accuracy was 85% at the first level of classification (activity), 89–97% at the second (direction of movement) and 60–67% at the third (phase). This study is the first proof of concept that an accurate HAR system can be developed using IMU data from people with knee osteoarthritis to classify activities and phases of activities.
Keywords: human activity recognition; inertial measurement units; knee osteoarthritis; machine learning; physical activity monitoring
Year: 2021 PMID: 34066265 PMCID: PMC8152007 DOI: 10.3390/s21103381
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
Characteristics of participants.
| All Participants (n = 18) | | | |
|---|---|---|---|
| Characteristics | Mean | SD | Range |
| Age (yr) | 66.2 | 8.7 | 49–82 |
| Female (%) | 53% | | |
| Weight (kg) | 80.5 | 15.9 | 44–113 |
| Height (m) | 1.7 | 0.1 | 1.57–1.865 |
| BMI (kg/m²) | 26.6 | 15.9 | 17.8–33.4 |
Levels of classification.
| Level 1 | Level 2 | Level 3 |
|---|---|---|
| Chair | Sit down | |
| Chair | Stand up | |
| Stairs | Stairs ascending | Stance |
| Stairs | Stairs ascending | Swing |
| Stairs | Stairs descending | Stance |
| Stairs | Stairs descending | Swing |
| Walking | | Stance |
| Walking | | Swing |
Figure 1. Placement of IMUs (purple) used for training the CNN models and Vicon marker (blue) placement for recording the start and end times of each trial.
Figure 2. Architecture of the proposed human activity recognition system.
Figure 3. Visual representation of Level 1 activity ‘image’ patterns.
Figure 4. Decision tree for the three levels of activity classification: Level 1 Activity, Level 2 Direction, Level 3 Phase.
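The decision tree in Figure 4 routes each window through up to three classifiers: activity first, then direction (chair and stairs only), then phase (stairs and walking only). A minimal sketch of that cascade, using hypothetical stand-in classifier functions (`level1`, `level2`, `level3` are placeholders, not the paper's code):

```python
# Sketch of the three-level decision cascade from Figure 4.
# The per-level classifiers are passed in as plain functions here;
# in the paper each level is a trained CNN.

def classify_window(window, level1, level2, level3):
    """Route one IMU window through the hierarchical classifiers.

    Returns a (activity, direction, phase) triple; entries that do not
    apply to an activity (per Figure 4) are None.
    """
    activity = level1(window)
    if activity == 'chair':
        # Chair has direction (sit down / stand up) but no phase level.
        return (activity, level2(window), None)
    if activity == 'stairs':
        # Stairs have both direction (up / down) and phase (stance / swing).
        return (activity, level2(window), level3(window))
    # Walking has no direction level, only phase (stance / swing).
    return (activity, None, level3(window))

# Toy usage with stub classifiers standing in for trained models:
result = classify_window([0.0] * 6,
                         level1=lambda w: 'walking',
                         level2=lambda w: 'up',
                         level3=lambda w: 'stance')
# result == ('walking', None, 'stance')
```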
Figure 5. Illustration of the segmented window sliding in 10 ms increments for each level of classification. Window lengths: Level 1, 200 ms; Level 2, 100 ms; Level 3, 40 ms.
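The segmentation in Figure 5 can be sketched as a simple sliding window over the IMU sample stream. The 100 Hz sample rate below is an assumption for illustration (the paper's rate determines how many samples each millisecond window holds):

```python
# Minimal sliding-window segmentation over an IMU sample stream,
# assuming a 100 Hz sample rate (an assumption, not from the paper).

def sliding_windows(samples, window_ms, step_ms=10, rate_hz=100):
    """Yield fixed-length windows that slide forward in step_ms increments."""
    win = int(window_ms * rate_hz / 1000)    # samples per window
    step = int(step_ms * rate_hz / 1000)     # samples per step (10 ms default)
    for start in range(0, len(samples) - win + 1, step):
        yield samples[start:start + win]

# 200 ms windows (Level 1) over 1 s of data at 100 Hz:
# each window holds 20 samples and advances by 1 sample (10 ms).
windows = list(sliding_windows(list(range(100)), window_ms=200))
```

The same function covers all three levels by changing `window_ms` (200, 100 or 40 ms), matching the per-level window lengths in Figure 5.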
Accuracy of CNN models using leave-one-out cross-validation for each level of classification.
| Classification | Accuracy |
|---|---|
| Level 1: Chair, Stair, Walk | 85% |
| Level 2: Chair (stand/sit) | 97% |
| Level 2: Stair (up/down) | 89% |
| Level 3: Stair up (stance/swing) | 67% |
| Level 3: Stair down (stance/swing) | 60% |
| Level 3: Walk (stance/swing) | 67% |
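The accuracies above come from leave-one-out cross-validation: each of the 18 participants is held out once while the model trains on the rest. A minimal sketch of that evaluation loop, with `train_and_score` as a hypothetical stand-in for model training and per-subject scoring:

```python
# Leave-one-subject-out cross-validation loop (a sketch of the evaluation
# scheme; train_and_score is a hypothetical stand-in, not the paper's code).

def leave_one_out(subject_ids, train_and_score):
    """Hold each subject out once; return the mean per-subject accuracy."""
    scores = []
    for held_out in subject_ids:
        train = [s for s in subject_ids if s != held_out]
        scores.append(train_and_score(train, held_out))
    return sum(scores) / len(scores)

# Toy usage over 18 participants with a stub scorer:
acc = leave_one_out(list(range(18)), lambda train, test: 1.0)
# 18 folds, one per participant; acc == 1.0 with this stub
```

This scheme tests generalisation to unseen people, which matters here because movement patterns differ between individuals with knee osteoarthritis.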
Figure 6. Confusion matrices for the classification of activities/phases at each classification level. Green cells represent correct classifications and arrows represent the classification pathway from activities to phases of activities.
Classification Level 1—Chair, Stair, Walk.
| Stage | Layer | Parameters |
|---|---|---|
| Convolution | Conv2D | 32 filters, 3 × 3 kernel |
| Convolution | MaxPooling2D | 2 × 2 pool |
| Convolution | Conv2D | 64 filters, 3 × 3 kernel |
| Convolution | MaxPooling2D | 2 × 2 pool |
| Transition | Flatten | |
| Fully connected | Dense | 100 units |
| Fully connected | Activation | ReLU |
| Fully connected | Dropout | 0.2 |
| Fully connected | Dense | 3 units |
| Fully connected | Activation | Softmax |

Learning rate: 0.0001.
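The layer stack above can be sanity-checked by propagating spatial dimensions through it: each valid 3 × 3 convolution shrinks each side by 2, and each 2 × 2 pooling halves it. The 28 × 28 input below is an assumed size for illustration only (the paper's sensor 'image' dimensions may differ):

```python
# Shape propagation through the Level 1 stack
# (Conv 3x3 -> Pool 2x2 -> Conv 3x3 -> Pool 2x2 -> Flatten).
# The 28x28 input is an assumption for illustration, not from the paper.

def conv_out(n, kernel):
    """Output size of a valid convolution with stride 1."""
    return n - kernel + 1

def pool_out(n, pool):
    """Output size of non-overlapping max pooling."""
    return n // pool

def level1_flat_size(h, w):
    h, w = conv_out(h, 3), conv_out(w, 3)    # Conv2D, 32 filters
    h, w = pool_out(h, 2), pool_out(w, 2)    # MaxPooling2D
    h, w = conv_out(h, 3), conv_out(w, 3)    # Conv2D, 64 filters
    h, w = pool_out(h, 2), pool_out(w, 2)    # MaxPooling2D
    return h * w * 64                         # flattened feature count

flat = level1_flat_size(28, 28)
# 28 -> 26 -> 13 -> 11 -> 5 per side, so flat == 5 * 5 * 64 == 1600
```

The flattened vector then feeds the Dense(100)/ReLU/Dropout/Dense(3)/Softmax head, whose three outputs correspond to the chair, stairs and walking classes.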
Classification Level 2—Stair Up/Down, Chair Sit/Stand.
| Stage | Layer | Parameters |
|---|---|---|
| Convolution | Conv2D | 32 filters, 3 × 3 kernel |
| Convolution | Activation | ReLU |
| Convolution | MaxPooling2D | 2 × 2 pool |
| Convolution | Conv2D | 64 filters, 3 × 3 kernel |
| Convolution | Activation | ReLU |
| Convolution | MaxPooling2D | 2 × 2 pool |
| Convolution | Conv2D | 64 filters, 3 × 3 kernel |
| Convolution | Activation | ReLU |
| Convolution | MaxPooling2D | 2 × 2 pool |
| Transition | Flatten | |
| Fully connected | Dense | 100 units |
| Fully connected | Activation | ReLU |
| Fully connected | Dropout | |
| Fully connected | Dense | 2 units |
| Fully connected | Activation | Softmax |

Learning rate: Chair 0.0001; Stair 0.001.
Classification Level 3—Walk Swing/Stance and Stair Up/Down Swing/Stance.
| Stage | Layer | Parameters |
|---|---|---|
| Convolution | Conv2D | 32 filters, 3 × 3 kernel |
| Convolution | Activation | ReLU |
| Convolution | MaxPooling2D | 2 × 2 pool |
| Convolution | Conv2D | 64 filters, 3 × 3 kernel |
| Convolution | Activation | ReLU |
| Convolution | Conv2D | 64 filters, 3 × 3 kernel |
| Convolution | Activation | ReLU |
| Convolution | MaxPooling2D | 2 × 2 pool |
| Transition | Flatten | |
| Fully connected | Dense | 100 units |
| Fully connected | Activation | ReLU |
| Fully connected | Dropout | |
| Fully connected | Dense | 2 units |
| Fully connected | Activation | Softmax |

Learning rate: Walk and Stair Down 0.001; Stair Up 0.0001.