Ruizhi Chen, Tianxing Chu, Keqiang Liu, Jingbin Liu, Yuwei Chen.
Abstract
This paper introduces a framework for inferring human activities on mobile devices by computing spatial contexts, temporal contexts, spatiotemporal contexts, and user contexts. A spatial context is a significant location defined as a geofence, which can be a node associated with a circle, or a polygon; a temporal context contains time-related information such as a local time tag, a time difference between geographical locations, or a timespan; a spatiotemporal context is defined as the dwelling length at a particular spatial context; and a user context includes user-related information such as the user's mobility contexts, environmental contexts, psychological contexts, or social contexts. Using the measurements of the built-in sensors and radio signals in mobile devices, the framework snapshots a contextual tuple containing the aforementioned contexts every second. Given a contextual tuple, the framework evaluates the posterior probability of each candidate activity in real time using a Naïve Bayes classifier. A large dataset containing 710,436 contextual tuples was recorded over one week in an experiment carried out at Texas A&M University Corpus Christi with three participants. The test results demonstrate that the multi-context solution significantly outperforms the spatial-context-only solution: a classification accuracy of 61.7% is achieved for the spatial-context-only solution, versus 88.8% for the multi-context solution.
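The Naïve Bayes inference the abstract describes can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: the activity names, context names, and likelihood values below are hypothetical stand-ins (the paper's probability models are empirical or trained from labeled activities).

```python
import math

# Hypothetical toy tables: prior P(activity) and likelihoods
# P(observable | activity) for two contexts of a contextual tuple.
PRIORS = {"working": 0.3333, "having_lunch": 0.0417}
LIKELIHOODS = {
    "working":      {"location": {"office": 0.9, "kitchen": 0.1},
                     "mobility": {"static": 0.8, "walking": 0.2}},
    "having_lunch": {"location": {"office": 0.1, "kitchen": 0.9},
                     "mobility": {"static": 0.8, "walking": 0.2}},
}

def posterior(contextual_tuple):
    """Score each activity by log prior plus summed log likelihoods
    (the Naïve Bayes conditional-independence assumption), then normalize."""
    scores = {}
    for activity, prior in PRIORS.items():
        s = math.log(prior)
        for context, value in contextual_tuple.items():
            s += math.log(LIKELIHOODS[activity][context][value])
        scores[activity] = s
    # Normalize in a numerically stable way (subtract the max log score).
    m = max(scores.values())
    unnorm = {a: math.exp(s - m) for a, s in scores.items()}
    z = sum(unnorm.values())
    return {a: u / z for a, u in unnorm.items()}

probs = posterior({"location": "kitchen", "mobility": "static"})
best = max(probs, key=probs.get)
```

With these toy numbers, being static in the kitchen outweighs the much larger prior on working, so the classifier picks the lunch activity.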
Keywords: human activity recognition; location awareness; mobile context computation; smartphone positioning
Year: 2015 PMID: 26343665 PMCID: PMC4610464 DOI: 10.3390/s150921219
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
Definitions of the spatiotemporal context.
| Observable | Description |
|---|---|
| 1 | |
| 2 | |
| 3 | |
Initial list of user contexts.
| Category | User Context | Observable Set of Each User context |
|---|---|---|
| Mobility | Motion pattern | |
| Environment | Light intensity | |
| | Noise level | |
| | Temperature | |
| | Weather | |
| Psychology | Level of fatigue | |
| | Level of excitement | |
| | Level of nervousness | |
| | Level of depression | |
| Social | Social contexts | |
Figure 1Structure of the Naïve Bayes classifier used for classifying significant activities.
Figure 2An empirical probability distribution of the observable local time given the activity of “having a lunch”.
Figure 3Diagram of the procedure for inferring significant activities. The probability model can come either from user input (empirical model) or from a training process using labeled activities.
Figure 4The trajectories of the indoor positioning solution with a Samsung Galaxy Note 3 smartphone and the ground truth generated by the NovAtel SPAN-IGM-S1 system.
Figure 5Horizontal positioning error statistics. (a) illustrates the horizontal positioning error at two different zoom levels; (b) shows the histogram, cumulative distribution curve, and other related statistics.
The list of significant locations.
| Location-ID | Description |
|---|---|
| 1 | Office |
| 2 | Meeting room |
| 3 | Kitchen |
| 4 | Coffee Break Area |
| 5 | Library |
| 6 | Classroom |
| 7 | Bus Stop |
| 8 | Undefined Location |
Figure 6The significant locations used in the experiment. The number tags of these locations are defined in Table 3.
Figure 7Poses of the mobile devices during data logging.
Prior probability of each significant activity.
| Activity-ID | Description | Probability |
|---|---|---|
| 1 | Working | 0.3333 |
| 2 | Having a meeting | 0.0208 |
| 3 | Having a lunch | 0.0417 |
| 4 | Taking a coffee break | 0.0208 |
| 5 | Visiting library | 0.0208 |
| 6 | Taking a class | 0.0573 |
| 7 | Waiting for bus | 0.0053 |
| 8 | Other activities (undefined activities) | 0.5000 |
Definition of the user mobility contexts.
| ID | Description |
|---|---|
| 1 | static, speed <= 0.1 m/s |
| 2 | slow walking, 0.1 m/s < speed <= 0.7 m/s (less than one step per second) |
| 3 | walking, 0.7 m/s < speed <= 1.4 m/s (1–2 steps per second) |
| 4 | fast moving, speed > 1.4 m/s (more than 2 steps per second, or driving) |
Definitions of three different processing solutions.
| Solution# | Supervised/Unsupervised | Multi-context/Spatial-Context-Only | Probability Model |
|---|---|---|---|
| 1 | Unsupervised a | Multi-context | Empirical |
| 2 | Supervised a | Multi-context b | Trained with multiple contexts |
| 3 | Supervised | Spatial-context-only b | Trained with spatial context only |
a Comparison group 1; b Comparison group 2.
Classification accuracy of the supervised and unsupervised solutions for three participants with one-week datasets.
| Participant | Unsupervised (Solution 1) | Supervised (Solution 2) | Number of Labeled Activities |
|---|---|---|---|
| 1 | 66.5% | 88.9% | 237,085 |
| 2 | 50.3% | 87.9% | 211,834 |
| 3 | 57.2% | 89.6% | 261,517 |
| Mean | 58.0% | 88.8% | 236,812 |
Classification accuracy of the multi-context and spatial-context-only solutions for three participants with one-week datasets.
| Participant | Spatial-Context-Only (Solution 3) | Multi-Context (Solution 2) | Number of Labeled Activities |
|---|---|---|---|
| 1 | 64.0% | 88.9% | 237,085 |
| 2 | 65.7% | 87.9% | 211,834 |
| 3 | 55.4% | 89.6% | 261,517 |
| Mean | 61.7% | 88.8% | 236,812 |