| Literature DB >> 32707928 |
Friedrich Niemann, Christopher Reining, Fernando Moya Rueda, Nilah Ravi Nair, Janine Anika Steffens, Gernot A. Fink, Michael ten Hompel.
Abstract
Optimizations in logistics require recognition and analysis of human activities. The potential of sensor-based human activity recognition (HAR) in logistics is not yet well explored. Despite a significant increase in HAR datasets in the past twenty years, no available dataset depicts activities in logistics. This contribution presents the first freely accessible logistics dataset. In the 'Innovationlab Hybrid Services in Logistics' at TU Dortmund University, two picking scenarios and one packing scenario were recreated. Fourteen subjects were recorded individually while performing warehousing activities using Optical marker-based Motion Capture (OMoCap), inertial measurement units (IMUs), and an RGB camera. A total of 758 min of recordings were labeled by 12 annotators in 474 person-h. All the given data have been labeled and categorized into 8 activity classes and 19 binary coarse-semantic descriptions, also called attributes. The dataset is deployed for solving HAR using deep networks.
Keywords: attribute-based representation; dataset; human activity recognition; inertial measurement unit; logistics; motion capturing
Year: 2020 PMID: 32707928 PMCID: PMC7436169 DOI: 10.3390/s20154083
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
Figure 1 Business process model of logistics Scenario 1—simplified order picking.
Figure 2 Physical laboratory set-up of logistics Scenario 1—simplified order picking.
Figure 3 Business process model of logistics Scenario 2 (Part 1)—real warehouse order picking.
Figure 4 Business process model of logistics Scenario 2 (Part 2)—real warehouse order picking.
Figure 5 Physical laboratory set-up of logistics Scenario 2—real warehouse order picking.
Figure 6 Business process model of logistics Scenario 3—real warehouse packaging work station.
Figure 7 Physical laboratory set-up of logistics Scenario 3—real warehouse packaging work station.
Figure 8 Marker position on an Optical marker-based Motion Capture (OMoCap) suit.
Figure 9 Positions of on-body devices (inertial measurement units (IMUs)) from set 1 (Texas Instruments Incorporated), set 2 (MbientLab), and set 3 (MotionMiners GmbH).
Subject specifications and scenario assignment. The scenario columns give the number of two-minute recordings per scenario.
| ID | Sex [F/M] | Age [year] | Weight [kg] | Height [cm] | Handedness [L/R] | OMoCap | IMU set [1] | IMU set [2] | IMU set [3] | Scenario 1 | Scenario 2 | Scenario 3 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| S01 | M | 28 | 78 | 175 | L | x | x | 29 | 0 | 0 | ||
| S02 | F | 24 | 62 | 163 | L | x | x | 30 | 0 | 0 | ||
| S03 | M | 59 | 71 | 171 | R | x | x | 27 | 0 | 0 | ||
| S04 | F | 53 | 64 | 165 | L | x | x | 29 | 0 | 0 | ||
| S05 | M | 28 | 79 | 185 | R | x | x | 26 | 0 | 0 | ||
| S06 | F | 22 | 52 | 163 | R | x | x | 30 | 0 | 0 | ||
| S07 | M | 23 | 65 | 177 | R | x | x | x | 2 | 13 | 14 | |
| S08 | F | 51 | 68 | 168 | R | x | x | x | 2 | 13 | 14 | |
| S09 | M | 35 | 100 | 172 | R | x | x | x | 2 | 14 | 13 | |
| S10 | M | 49 | 97 | 181 | R | x | x | x | 2 | 13 | 12 | |
| S11 | F | 47 | 66 | 175 | R | x | x | x | 2 | 12 | 0 | |
| S12 | F | 23 | 48 | 163 | R | x | x | x | 0 | 6 | 14 | |
| S13 | F | 25 | 54 | 163 | R | x | x | x | 2 | 14 | 14 | |
| S14 | M | 54 | 90 | 177 | R | x | x | x | 2 | 14 | 14 | |
Figure 10 Subjects before the recordings.
Activity classes and their semantic meaning.
| Activity Class | Description |
|---|---|
| Standing | The subject is standing still on the ground or performs smaller steps. The subject can hold something in hands or stand hands-free. |
| Walking | The subject performs a gait cycle [ |
| Cart | The subject is walking (gait cycle) with the cart to a new position. This class does not include the handling of items on the cart, such as putting boxes or retrieving items. Likewise, the handling of the cart itself, e.g., turning it to better reach its handles, is not included. |
| Handling (upwards) | At least one hand reaches shoulder height (80% of a person’s total height [ |
| Handling (centred) | Handling is possible without bending over, kneeling, or lifting the arms to shoulder joint height. |
| Handling (downwards) | The hands are below knee height (lower than 30% of a person’s total height [ |
| Synchronization | A waving motion in which both hands are above the subject’s head at the beginning of each recording. |
| None | Excerpts that shall not be taken into account because the class is not recognisable. Reasons are errors or gaps in the recording or a sudden cut at the end of a recording unit. |
Attributes and their semantic meaning.
| Attribute | Description |
|---|---|
| I Legs | |
| Gait Cycle | The subject performs a gait cycle [ |
| Step | A single step where the feet leave the ground without a foot swing [ |
| Standing Still | Both feet stay on the ground. |
| II Upper Body | |
| Upwards | At least one hand reaches shoulder height (80% of a person’s total height [ |
| Centred | Handling is possible without bending over, kneeling, or lifting the arms to shoulder joint height. |
| Downwards | The hands are below knee height (lower than 30% of a person’s total height [ |
| No Intentional Motion | Default value when no intentional motion is performed, e.g., when standing without doing anything, carrying a box, or walking with a cart. These activities involve no intentional motion, only a steady stance. |
| Torso Rotation | Rotation in the transverse plane [ |
| III Hand. | |
| Right Hand | The subject handles or holds something using the right hand. |
| Left Hand | The subject handles or holds something using the left hand. |
| No Hand | Hands are not used, neither for holding nor for handling something. |
| IV Item Pose | |
| Bulky Unit | Items that the subject cannot put the hands around, e.g., boxes. |
| Handy Unit | Items that can be carried with a single hand or that the subjects can put their hands around, e.g., small articles, plastic bags. |
| Utility Auxiliary | Use of equipment, e.g., scissors, knives, bubble wrap, stamps, labels, scanners, packaging tape dispensers, adhesives, etc. |
| Cart | Either bringing the cart into proper position before taking it to a different location ( |
| Computer | Using mouse and keyboard. |
| No Item | Activities that do not include any item, e.g., when the subject fumbles around while searching for a specific item. |
| None | Equivalent to the |
Figure 11 Semantic attributes.
Exemplary picking process broken down into process steps, activities, classes, and attributes.
| Attribute Representation | ||||||||||||||||||||
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| I Legs | II Upper Body | III Hand. | IV Item Pose | |||||||||||||||||
| Gait Cycle | Step | Standing Still | Upwards | Centered | Downwards | No Intentional Motion | Torso Rotation | Right Hand | Left Hand | No Hand | Bulky Unit | Handy Unit | Utility/Auxiliary | Cart | Computer | No Item | ||||
| Process Step | Act. | Class | A | B | C | A | B | C | D | E | A | B | C | A | B | C | D | E | F | |
| Bring cart to | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | |
| retrieval | 2 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | ||
| location | 3 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | ||
| 4 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | |||
| Scan Barcode | 5 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | |
| 6 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | |||
| 7 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | |||
| 8 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | |||
| 9 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | |||
| Retrieve item | 10 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | |
| and put in | 11 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | ||
| box | 12 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | ||
| 13 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | |||
| 14 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | |||
| Confirm pick | 15 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | |
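Each annotated activity in the table above can thus be read as a class label plus a fixed-length binary attribute vector. The following Python sketch illustrates this encoding for activity 1 of the example ("Bring cart to retrieval location"); the snake_case identifiers and the helper function are illustrative assumptions, not part of the published annotation tooling.

```python
# Attribute order follows the example table above (groups I-IV); identifiers
# and the helper function are illustrative, not part of the published tooling.
ATTRIBUTES = [
    "gait_cycle", "step", "standing_still",                      # I Legs
    "upwards", "centred", "downwards",
    "no_intentional_motion", "torso_rotation",                   # II Upper Body
    "right_hand", "left_hand", "no_hand",                        # III Hand.
    "bulky_unit", "handy_unit", "utility_auxiliary",
    "cart", "computer", "no_item",                               # IV Item Pose
]

def encode_attributes(active):
    """Return a binary vector with ones at the positions of the active attributes."""
    return [1 if name in active else 0 for name in ATTRIBUTES]

# Activity 1 of "Bring cart to retrieval location": the subject stands still,
# performs no intentional motion, uses no hand, and holds no item.
activity_1 = encode_attributes(
    {"standing_still", "no_intentional_motion", "no_hand", "no_item"}
)
print(activity_1)  # [0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1]
```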
Figure A3 Screenshot of the annotation and revision tool during the annotation.
Annotation effort of all annotators.
| ID | Total Time [hh:mm:ss] | No. of Rec. | Time per Rec. [hh:mm:ss] |
|---|---|---|---|
| A01 | 55:12:19 | 52 | 01:14:02 |
| A02 | 73:22:04 | 45 | 01:55:21 |
| A03 | 56:30:39 | 54 | 01:14:13 |
| A04 | 34:39:08 | 26 | 01:28:00 |
| A05 | 84:18:37 | 30 | 02:48:37 |
| A06 | 39:24:16 | 64 | 00:39:46 |
| A07 | 28:40:57 | 25 | 01:10:35 |
| A08 | 32:56:40 | 27 | 01:15:24 |
| A09 | 33:28:45 | 27 | 01:14:24 |
| A10 | 10:14:21 | 12 | 00:51:12 |
| A11 | 23:03:16 | 14 | 01:38:48 |
| A12 | 02:16:00 | 3 | 01:45:03 |
Revision effort of all revisers.
| ID | Total Time [hh:mm:ss] | No. of Rec. | Time per Rec. [hh:mm:ss] |
|---|---|---|---|
| Re01 | 13:44:00 | 88 | 00:09:22 |
| Re02 | 39:18:00 | 97 | 00:24:19 |
| Re03 | 28:37:00 | 91 | 00:18:52 |
| Re04 | 61:19:00 | 103 | 00:35:43 |
Annotation results divided by activity classes.
| | Stand. | Walk. | Cart | Handling (upwards) | Handling (centred) | Handling (downwards) | Synchron. | None |
|---|---|---|---|---|---|---|---|---|
| | 974,611 | 994,880 | 1,185,788 | 754,807 | 3,901,899 | 673,655 | 158,655 | 403,737 |
| | 1.71 | 3.72 | 6.46 | 2.72 | 4.39 | 2.74 | 2.16 | 7.10 |
| | 10.77 | 11.00 | 13.11 | 8.34 | 43.12 | 7.45 | 1.75 | 4.46 |
| | 28 | 7 | 3 | 45 | 72 | 47 | 1 | 1 |
Folder overview of the LARa dataset.
| Folder | Folder Size [MiB] | File Format | Recording Rate |
|---|---|---|---|
| OMoCap data | 33,774 | csv | 200 fps |
| IMU data - MbientLab | 1355.77 | csv | 100 Hz |
| RGB videos | 17,974.82 | mp4 | 30 fps |
| recording protocol | 2.58 | - | - |
| annotation and revision tool | 2899.99 | py | - |
| class_network | 1449.55 | pt | - |
| attrib_network | 1449.55 | pt | - |
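The folder overview suggests that each modality is distributed as plain files at its own rate (OMoCap at 200 fps, MbientLab IMU at 100 Hz, RGB video at 30 fps). Below is a minimal loading-and-alignment sketch in Python; the file names, the csv column layout, and the simple decimation used to bring the OMoCap stream down to the IMU rate are assumptions for illustration only.

```python
import numpy as np
import pandas as pd

# Assumed file names and csv column layout, for illustration only.
omocap = pd.read_csv("OMoCap data/S07_R01.csv")        # marker trajectories, 200 fps
imu = pd.read_csv("IMU data - MbientLab/S07_R01.csv")  # acceleration/gyroscope, 100 Hz

# Naive alignment: keep every second OMoCap frame so both streams run at 100 Hz.
omocap_100hz = omocap.iloc[::2].reset_index(drop=True)

# Truncate to the common length before stacking the modalities for a HAR model.
n = min(len(omocap_100hz), len(imu))
fused = np.hstack([omocap_100hz.to_numpy()[:n], imu.to_numpy()[:n]])
print(fused.shape)
```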
Figure 12 The Temporal Convolutional Neural Network (tCNN) architecture contains four convolutional layers. Depending on the classification task, there are two types of final fully-connected layer: a softmax and a sigmoid.
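A minimal PyTorch sketch of such a tCNN is given below: four temporal convolutions followed by fully-connected layers, with either raw class logits (trained against a softmax cross-entropy loss) or a sigmoid output over the binary attributes. Channel counts, kernel sizes, and the window length are assumptions, since the exact layer sizes are not reproduced in this record.

```python
import torch
import torch.nn as nn

class TCNN(nn.Module):
    """Sketch of a temporal CNN with interchangeable class/attribute heads.

    Channel counts, kernel sizes, and window length are illustrative assumptions,
    not the exact configuration from the paper.
    """

    def __init__(self, n_channels, window_len, n_outputs, attribute_head=False):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=5), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5), nn.ReLU(),
        )
        conv_len = window_len - 4 * (5 - 1)  # temporal length after four valid convolutions
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * conv_len, 128), nn.ReLU(),
            nn.Linear(128, n_outputs),
        )
        self.attribute_head = attribute_head  # sigmoid for multi-label attributes

    def forward(self, x):  # x: [batch, channels, time]
        logits = self.classifier(self.features(x))
        return torch.sigmoid(logits) if self.attribute_head else logits

# Class network (softmax applied in the loss) vs. attribute network (sigmoid output).
class_net = TCNN(n_channels=126, window_len=200, n_outputs=7)
attr_net = TCNN(n_channels=126, window_len=200, n_outputs=19, attribute_head=True)
print(class_net(torch.randn(4, 126, 200)).shape)  # torch.Size([4, 7])
```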
Recall and precision of human activity recognition (HAR) on the LARa OMoCap dataset.
| Output | Metric | Stand. | Walk. | Cart | Hand. (up.) | Hand. (cent.) | Hand. (down.) | Sync. |
|---|---|---|---|---|---|---|---|---|
| Softmax | Recall [%] | 3.11 | 71.96 | 71.34 | 61.39 | 87.40 | 65.30 | 0.0 |
| Softmax | Precision [%] | 73.00 | 45.29 | 81.35 | 57.10 | 70.85 | 80.72 | 0.0 |
| Attributes | Recall [%] | 55.86 | 54.31 | 76.12 | 69.16 | 80.99 | 74.36 | 69.84 |
| Attributes | Precision [%] | 24.22 | 60.59 | 92.13 | 79.08 | 82.94 | 74.63 | 89.31 |
The overall accuracy and weighted F1 of HAR on the LARa OMoCap dataset.
| Metric | Softmax | Attributes |
|---|---|---|
| Acc [%] | 68.88 | 75.15 |
| wF1 [%] | 64.43 | 73.62 |
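For reference, the reported accuracy and weighted F1 (wF1) can be computed from window-level class predictions in the standard way, e.g., with scikit-learn; the arrays below are illustrative placeholders, not dataset results.

```python
from sklearn.metrics import accuracy_score, f1_score

# Illustrative ground-truth and predicted class indices for a few sliding windows.
y_true = [0, 1, 2, 4, 4, 5, 3, 4]
y_pred = [0, 1, 4, 3, 4, 5, 2, 4]

acc = accuracy_score(y_true, y_pred)                # overall accuracy
wf1 = f1_score(y_true, y_pred, average="weighted")  # F1 weighted by class support
print(f"Acc {acc:.2%}, wF1 {wf1:.2%}")
```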
Confusion matrix from the class predictions using tCNN with the softmax layer.
| Activities | Stand. | Walk. | Cart | Hand. (up.) | Hand. (cent.) | Hand. (down.) | Sync. |
|---|---|---|---|---|---|---|---|
| Stand. | | 1807 | 211 | 134 | 7446 | 86 | 0 |
| Walk. | 42 | | 461 | 35 | 918 | 15 | 0 |
| Cart | 1 | 928 | | 4 | 2684 | 1 | 0 |
| Hand. (up.) | 0 | 92 | 152 | | 2389 | 1 | 0 |
| Hand. (cent.) | 72 | 1681 | 1233 | 1717 | | 572 | 0 |
| Hand. (down.) | 0 | 51 | 2 | 12 | 1437 | | 0 |
| Sync. | 0 | 2 | 6 | 1245 | 178 | 0 | |
Confusion matrix from the class predictions using the attribute predictions with tCNN and the nearest neighbor (NN) approach.
| Activities | Stand. | Walk. | Cart | Hand. (up.) | Hand. (cent.) | Hand. (down.) | Sync. |
|---|---|---|---|---|---|---|---|
| Stand. | | 1492 | 506 | 268 | 4668 | 219 | 421 |
| Walk. | 633 | | 691 | 47 | 649 | 41 | 7 |
| Cart | 44 | 298 | | 20 | 629 | 2 | 0 |
| Hand. (up.) | 71 | 44 | 32 | | 1218 | 20 | 42 |
| Hand. (cent.) | 1085 | 825 | 2403 | 1919 | | 832 | 79 |
| Hand. (down.) | 73 | 15 | 16 | 9 | 982 | | 3 |
| Sync. | 7 | 0 | 0 | 143 | 3 | 0 | |
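In the attribute-based set-up, the sigmoid network outputs a vector of attribute probabilities, and the activity class is then retrieved as the nearest neighbour among the known class-to-attribute representations. The sketch below illustrates such a look-up; the prototype vectors cover only the 17 attributes of the example table above (the full dataset uses 19) and are illustrative placeholders rather than the dataset's actual attribute representations.

```python
import numpy as np

# Illustrative class prototypes in attribute space (rows: classes, columns: attributes).
# The real mapping comes from the annotated attribute representations of each class.
class_names = ["Standing", "Walking", "Cart"]
prototypes = np.array([
    [0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1],  # Standing
    [1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1],  # Walking
    [1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0],  # Cart
])

def predict_class(attribute_probs):
    """Map a predicted attribute-probability vector to the closest class prototype."""
    dists = np.linalg.norm(prototypes - attribute_probs, axis=1)
    return class_names[int(np.argmin(dists))]

# A sigmoid output close to the Walking prototype is resolved to "Walking".
probs = np.array([0.9, 0.1, 0.0, 0.0, 0.1, 0.0, 0.8, 0.1, 0.0,
                  0.1, 0.9, 0.0, 0.1, 0.0, 0.0, 0.0, 0.9])
print(predict_class(probs))
```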
The accuracy, precision, and recall for each of the 19 attributes on the test dataset.
| Metric | Attributes | | | | | | | | | | | | | | | | | | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Accuracy | 89.3 | 76.9 | 84.5 | 93.9 | 81.7 | 96.4 | 82.5 | 96.9 | 92.0 | 79.1 | 90.3 | 76.2 | 71.7 | 85.2 | 91.3 | 98.3 | 90.2 | 100 | 100 |
| Precision | 79.0 | 82.8 | 83.4 | 80.4 | 85.6 | 76.7 | 86.3 | 0.0 | 92.8 | 81.6 | 91.9 | 48.7 | 60.4 | 74.3 | 88.8 | 98.8 | 95.4 | 0.0 | 0.0 |
| Recall | 82.1 | 70.3 | 92.0 | 73.1 | 83.9 | 68.7 | 72.0 | 0.0 | 98.5 | 92.4 | 36.2 | 37.2 | 63.0 | 26.8 | 74.2 | 49.5 | 41.0 | 0.0 | 0.0 |
Content criteria for the filtering process.
| Stage | Content Criteria | Description |
|---|---|---|
| 1 | Human | Data must relate to human movements. |
| 2 | Sensor | Dataset must contain IMU or OMoCap data, or both. |
| 3 | Access | Dataset must be accessible online, downloadable, and free of charge. |
| 4 | Physical Activity | Caspersen et al. [ |
Examined datasets per stage.
| Stage | Content Criteria | No. of Datasets |
|---|---|---|
| 1 | Human | 173 |
| 2 | Sensor | 95 |
| 3 | Access | 70 |
| 4 | Physical Activity | 61 |
Categorization scheme.
| Root Category | Subcategory | Description |
|---|---|---|
| General Information | Year | Year of publication. Updates are not taken into account. |
| | Dataset name | Name of the dataset and its acronym |
| | Ref. [Dataset] | Link and, if available, the identifier (DOI) of the dataset |
| | Ref. [Paper] | Identifier or, if not available, link of the paper that describes the dataset, uses it, or is generally given as a reference |
| Domain of the Act. class | Work | Office work, general work, and physical work in production and logistics |
| | Exercises | Sport activity classes, e.g., basketball, yoga, boxing, golf, ice hockey, soccer |
| | Locomotion | e.g., walking, running, elevating, sitting down, going upstairs and downstairs |
| | ADL | Activity classes of daily living, e.g., watching TV, shopping, cooking, eating, cleaning, dressing, driving a car, personal grooming, interacting, talking, lying |
| | Fall Detection | Falling in different directions and from different heights |
| | Hand Gestures | Focus on the movement of hands, e.g., arm swiping, hand waving, and clapping |
| | Dance | e.g., jazz dance, hip-hop dance, Salsa, Tango |
| Data Specification | Recording Time [min] | Total time of the recordings in minutes |
| | Data Size [MiB] | Size of the entire unzipped dataset in mebibytes, including, e.g., RGB videos and pictures |
| | Format | Formats of data published in the repository |
| | No Subjects | Number of unique subjects |
| | No Act. classes | Number of individual activity classes |
| | List Act. classes | List of all individual activity classes |
| | Laboratory | The recordings were made in a laboratory environment |
| | Real Life | The recordings were made in a real environment, e.g., outdoors, on a sports field, or in a production facility |
| Sensor | OMoCap [Hz] | Optical marker-based Motion Capture with frames per second or hertz as a unit |
| | IMU [Hz] | Inertial measurement unit with hertz as a unit |
| | Other Sensors | Sensors except IMU and OMoCap |
| | Phone, Watch, Glasses | Use of sensors built into a smartphone, smartwatch, or smart glasses |
Overview of related publicly available human activity recognition datasets. The entries are sorted chronologically in ascending order according to the year of publication and alphabetically according to the name of the dataset. Missing information is marked with “-”.
| General Information | Domain of the Act. class | Data Specification | Sensor | Attachment (Sensor/Marker) | ||||||||||||||||||||||||||||||
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Year | Dataset Name | Ref. [Dataset] | Ref. [Paper] | Work | Exercises | Locomotion | ADL | Fall Detection | Hand Gestures | Dance | Recording Time [min] | Data Size [MiB] | Format | No Subjects | No Act. Classes | Laboratory | Real Life | OMoCap [fps/Hz] | IMU [Hz] | Other Sensors | Phone, Watch, Glasses | Hand/Wrist | Lower Arm | Upper Arm | Foot/Ankle | Lower Leg | Upper Leg | Hip | Shoulder | Belly/Waist | Thorax/Chest | Lower Back | Upper Back | Head |
| 2003 | Carnegie Mellon University Motion Capture Database (CMU Mocap) | [ | - | x | x | x | x | - | 18,673 | amc | 112 | 23 | x | 120 | x | x | x | x | x | x | x | x | x | x | x | x | x | |||||||
| 2004 | Leuven Action Database | [ | [ | x | x | x | - | 14 | text, avi, xls, pdf | 1 | 22 | x | 30 | RGB | x | x | x | x | x | x | x | x | x | x | ||||||||||
| 2007 | HDM05 | [ | [ | x | x | x | x | - | 3000.32 | c3d, amc, avi | 5 | 70 | x | 120 | RGB | x | x | x | x | x | x | x | x | x | x | x | x | |||||||
| 2008 | Wearable Action Recognition Database (WARD) | [ | [ | x | x | - | 41.66 | mat | 20 | 13 | x | 20 | x | x | x | x | x | |||||||||||||||||
| 2009 | BodyAttack Fitness | [ | [ | x | 15 | 5.64 | mat | 1 | 6 | x | 64 | x | x | |||||||||||||||||||||
| 2009 | Carnegie Mellon University Multimodal Activity (CMU-MMAC) Database | [ | [ | x | - | 60,897.03 | amc, txt, asf, wav, xls, avi | 43 | 29 | x | 120 | 125 | RGB, microphone, RFID, BodyMedia | x | x | x | x | x | x | x | x | x | x | x | x | x | ||||||||
| 2009 | HCI gestures | [ | [ | x | - | 12.9 | mat | 1 | 5 | x | 96 | x | x | |||||||||||||||||||||
| 2009 | HumanEva I | [ | [ | x | x | - | 13,824 | - | 4 | 6 | x | 120 | RGB, depth | x | x | x | x | x | x | x | x | x | x | x | ||||||||||
| 2009 | HumanEva II | [ | [ | x | - | 4,649 | - | 2 | 4 | x | 120 | RGB | x | x | x | x | x | x | x | x | x | x | x | |||||||||||
| 2010 | KIT Whole-Body Human Motion Database | [ | [ | x | x | x | x | x | - | 2,097,152 | xml, c3d, avi | 224 | 43 | x | 100 | RGB | x | x | x | x | x | x | x | x | x | x | x | x | ||||||
| 2010 | Localization Data for Person Activity Data Set | [ | [ | x | x | x | - | 20.5 | txt | 5 | 11 | x | 10 | x | x | x | ||||||||||||||||||
| 2011 | 3DLife/Huawei ACM MM Grand Challenge 2011 | [ | [ | x | - | - | svl, cvs | 15 | 5 | x | 160 | RGB, microphone, depth | x | x | x | |||||||||||||||||||
| 2011 | UCF-iPhone Data Set | [ | [ | x | x | - | 13.1 | csv | 9 | 9 | x | 60 | x | x | ||||||||||||||||||||
| 2011 | Vicon Physical Action Data Set | [ | [ | x | x | 33.33 | 144 | txt | 10 | 20 | x | 200 | x | x | x | x | x | |||||||||||||||||
| 2012 | Activity Prediction (WISDM) | [ | [ | x | - | 49.1 | txt | 29 | 6 | x | 20 | x | x | |||||||||||||||||||||
| 2012 | Human Activity Recognition Using Smartphones Data Set (UCI HAR) | [ | [ | x | x | 192 | 269 | txt | 30 | 6 | x | 50 | x | x | ||||||||||||||||||||
| 2012 | OPPORTUNITY Activity Recognition Data Set | [ | [ | x | x | x | 1500 | 859 | txt | 12 | 24 | x | 32 | x | x | x | x | x | x | x | x | x | ||||||||||||
| 2012 | PAMAP2 Physical Activity Monitoring Data Set | [ | [ | x | x | x | 600 | 1652.47 | txt | 9 | 18 | x | x | 100 | heart rate monitor | x | x | x | ||||||||||||||||
| 2012 | USC-SIPI Human Activity Dataset | [ | [ | x | - | 42.7 | mat | 14 | 12 | x | 100 | x | ||||||||||||||||||||||
| 2013 | Actitracker (WISDM) | [ | [ | x | x | - | 2588.92 | txt | 29 | 6 | x | 20 | x | x | ||||||||||||||||||||
| 2013 | Daily and Sports Activities Data Set | [ | [ | x | x | x | 760 | 402 | csv | 8 | 19 | x | 25 | x | x | x | ||||||||||||||||||
| 2013 | Daphnet Freezing of Gait Data Set | [ | [ | x | 500 | 86.2 | txt | 10 | 3 | x | 64 | RGB | x | x | x | |||||||||||||||||||
| 2013 | Hand Gesture | [ | [ | x | x | 70 | 47.6 | mat | 2 | 11 | x | 32 | x | x | x | |||||||||||||||||||
| 2013 | Physical Activity Recognition Dataset Using Smartphone Sensors | [ | [ | x | - | 63.1 | xlsx | 4 | 6 | x | 50 | x | x | x | x | x | ||||||||||||||||||
| 2013 | Teruel-Fall (tFall) | [ | [ | x | - | 65.5 | dat | 10 | 8 | x | 50 | x | x | |||||||||||||||||||||
| 2013 | Wearable Computing: Accelerometers’ Data Classification of Body Postures and Movements (PUC-Rio) | [ | [ | x | 480 | 13.6 | dat | 4 | 5 | x | 10 | x | x | x | x | |||||||||||||||||||
| 2014 | Activity Recognition from Single Chest-Mounted Accelerometer Data Set | [ | [ | x | x | 431 | 44.2 | csv | 15 | 7 | x | 52 | x | |||||||||||||||||||||
| 2014 | Realistic sensor displacement benchmark dataset (REALDISP) | [ | [ | x | x | 566.02 | 6717.43 | txt | 17 | 33 | x | 50 | x | x | x | x | x | |||||||||||||||||
| 2014 | Sensors activity dataset | [ | [ | x | 2800 | 308 | csv | 10 | 8 | x | 50 | x | x | x | x | x | ||||||||||||||||||
| 2014 | User Identification From Walking Activity Data Set | [ | [ | x | x | 431 | 4.18 | csv | 22 | 5 | x | 52 | RGB, microphone | x | x | |||||||||||||||||||
| 2015 | Complex Human Activities Dataset | [ | [ | x | x | 390 | 240 | csv | 10 | 13 | x | 50 | x | x | x | |||||||||||||||||||
| 2015 | Heterogeneity Activity Recognition Data Set (HHAR) | [ | [ | x | 270 | 3333.73 | csv | 9 | 6 | x | 200 | x | x | x | ||||||||||||||||||||
| 2015 | Human Activity Recognition with Inertial Sensors | [ | [ | x | x | x | 496 | 324 | mat | 19 | 13 | x | 10 | x | x | x | ||||||||||||||||||
| 2015 | HuMoD Database | [ | [ | x | x | 49.4 | 6044.27 | mat | 2 | 8 | x | 500 | EMG | x | x | x | x | x | x | x | x | x | ||||||||||||
| 2015 | Project Gravity | [ | [ | x | x | x | - | 27.6 | json | 3 | 19 | x | 25 | RGB | x | x | x | |||||||||||||||||
| 2015 | Skoda Mini Checkpoint | [ | [ | x | x | 180 | 80.3 | mat | 1 | 10 | x | 98 | x | x | x | |||||||||||||||||||
| 2015 | Smartphone-Based Recognition of Human Activities and Postural Transitions Data Set (SBHAR) | [ | [ | x | x | 300 | 240 | txt | 30 | 12 | x | 50 | RGB | x | x | |||||||||||||||||||
| 2015 | UTD Multimodal Human Action Dataset (UTD-MHAD) | [ | [ | x | x | x | - | 1316.15 | mat, avi | 8 | 27 | x | 50 | depth | x | x | ||||||||||||||||||
| 2016 | Activity Recognition system based on Multisensor data fusion (AReM) Data Set | [ | [ | x | x | 176 | 1.69 | csv | 1 | 6 | x | 70 | x | x | x | x | ||||||||||||||||||
| 2016 | Daily Log | [ | [ | x | x | x | 106,560 | 4815.97 | csv | 7 | 33 | x | x | GPS | x | x | x | |||||||||||||||||
| 2016 | ExtraSensory Dataset | [ | [ | x | x | x | x | 308,320 | 144,423.88 | dat, csv, mfcc | 60 | 51 | x | 40 | microphone | x | x | x | ||||||||||||||||
| 2016 | HDM12 Dance | [ | [ | x | 97 | 2,175.48 | asf, c3d | 22 | 20 | x | 128 | x | x | x | x | x | x | x | x | x | x | x | ||||||||||||
| 2016 | RealWorld | [ | [ | x | x | 1065 | 3891.92 | csv | 15 | 8 | x | 50 | GPS, magnetic field, microphone, RGB, light | x | x | x | x | x | x | x | x | |||||||||||||
| 2016 | Smartphone Dataset for Human Activity Recognition in Ambient Assisted Living | [ | [ | x | x | 94.79 | 46.5 | txt | 30 | 6 | x | 50 | x | x | ||||||||||||||||||||
| 2016 | UMAFall: Fall Detection Dataset | [ | [ | x | x | x | - | 359 | csv | 19 | 14 | x | 200 | x | x | x | x | x | x | |||||||||||||||
| 2017 | An Open Dataset for Human Activity Analysis using Smart Devices | [ | [ | x | x | x | - | 433 | csv | 1 | 16 | x | x | x | x | x | x | |||||||||||||||||
| 2017 | IMU Dataset for Motion and Device Mode Classification | [ | [ | x | - | 2835.21 | mat | 8 | 3 | x | 100 | x | x | x | x | x | x | x | x | x | x | x | ||||||||||||
| 2017 | Martial Arts, Dancing and Sports (MADS) Dataset | [ | [ | x | x | - | 24,234.96 | mov, zip | 5 | 5 | x | 60 | RGB | x | x | x | x | x | x | x | x | x | x | x | x | |||||||||
| 2017 | Physical Rehabilitation Movements Data Set (UI-PRMD) | [ | [ | x | x | - | 4700.17 | txt | 10 | 10 | x | 100 | depth | x | x | x | x | x | x | x | x | x | x | x | ||||||||||
| 2017 | SisFall | [ | [ | x | x | x | 1849.33 | 1627.67 | txt | 38 | 34 | x | 200 | x | ||||||||||||||||||||
| 2017 | TotalCapture Dataset | [ | [ | x | x | - | - | - | 5 | 5 | x | x | 60 | RGB | x | x | x | x | x | x | x | x | x | x | x | x | ||||||||
| 2017 | UniMiB SHAR | [ | [ | x | x | x | - | 255 | mat | 30 | 17 | x | 50 | microphone | x | x | ||||||||||||||||||
| 2018 | Fall-UP Dataset (Human Activity Recognition) | [ | [ | x | x | x | 165.00 | 78 | csv | 17 | 11 | x | 100 | infrared, RGB | x | x | x | x | x | x | x | |||||||||||||
| 2018 | First-Person View | [ | - | x | x | - | 1046.86 | mp4, csv | 2 | 7 | x | x | RGB | x | x | x | x | |||||||||||||||||
| 2018 | HAD-AW | [ | [ | x | x | x | x | x | - | 325 | xlsx | 16 | 31 | x | 50 | x | x | |||||||||||||||||
| 2018 | HuGaDB | [ | [ | x | 600 | 401 | txt | 18 | 12 | x | x | EMG | x | x | x | |||||||||||||||||||
| 2018 | Oxford Inertial Odometry Dataset (OxIOD) | [ | [ | x | 883.2 | 2751.73 | csv | 4 | 2 | x | x | 250 | 100 | x | x | x | ||||||||||||||||||
| 2018 | Simulated Falls and Daily Living Activities Data Set | [ | [ | x | x | x | 630 | 3972.06 | txt | 17 | 36 | x | 25 | x | x | x | x | x | x | |||||||||||||||
| 2018 | UMONS-TAICHI | [ | [ | x | - | 28,242.47 | txt, c3d, tsv | 12 | 13 | x | 179 | RGB, depth | x | x | x | x | x | x | x | x | x | x | x | |||||||||||
| 2019 | AndyData-lab-onePerson | [ | [ | x | x | x | 300 | 99,803.46 | mvn, mvnx, c3d, bvh, csv, qtm, mp4 | 13 | 6 | x | 120 | 240 | RGB, pressure sensor handglove | x | x | x | x | x | x | x | x | x | x | x | x | |||||||
| 2019 | PPG-DaLiA | [ | [ | x | x | x | 2,190 | 23,016.74 | pkl, csv | 15 | 8 | x | 700 | PPG, ECG | x | x | ||||||||||||||||||