| Literature DB >> 26378540 |
Lei Yang, Yanyun Ren, Huosheng Hu, Bo Tian.
Abstract
In order to deal with the problem of projection occurring in fall detection with two-dimensional (2D) grey or color images, this paper proposed a robust fall detection method based on spatio-temporal context tracking over three-dimensional (3D) depth images that are captured by the Kinect sensor. In the pre-processing procedure, the parameters of the Single-Gauss-Model (SGM) are estimated and the coefficients of the floor plane equation are extracted from the background images. Once human subject appears in the scene, the silhouette is extracted by SGM and the foreground coefficient of ellipses is used to determine the head position. The dense spatio-temporal context (STC) algorithm is then applied to track the head position and the distance from the head to floor plane is calculated in every following frame of the depth image. When the distance is lower than an adaptive threshold, the centroid height of the human will be used as the second judgment criteria to decide whether a fall incident happened. Lastly, four groups of experiments with different falling directions are performed. Experimental results show that the proposed method can detect fall incidents that occurred in different orientations, and they only need a low computation complexity.Entities:
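The core decision described in the abstract — trigger on the head-to-floor distance, then confirm with the centroid height — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the plane coefficients, fixed thresholds (the paper uses an adaptive threshold), and the z-up floor plane are assumptions for the example.

```python
import math

def point_to_plane_distance(point, plane):
    """Perpendicular distance from 3D point (x, y, z) to plane ax + by + cz + d = 0."""
    a, b, c, d = plane
    x, y, z = point
    return abs(a * x + b * y + c * z + d) / math.sqrt(a * a + b * b + c * c)

def is_fall(head, centroid, plane, head_thresh, centroid_thresh):
    """Two-stage fall judgment: a low head height triggers the check,
    and a low centroid height confirms the fall incident."""
    if point_to_plane_distance(head, plane) < head_thresh:
        return point_to_plane_distance(centroid, plane) < centroid_thresh
    return False

# Example: floor plane z = 0 (coefficients a=b=d=0, c=1), heights in metres.
floor = (0.0, 0.0, 1.0, 0.0)
fallen = is_fall(head=(1.0, 2.0, 0.30), centroid=(1.0, 2.0, 0.25),
                 plane=floor, head_thresh=0.40, centroid_thresh=0.35)
standing = is_fall(head=(1.0, 2.0, 1.60), centroid=(1.0, 2.0, 0.90),
                   plane=floor, head_thresh=0.40, centroid_thresh=0.35)
print(fallen, standing)  # True False
```

In the actual method the head position fed into this check comes from the STC tracker on each depth frame, and the threshold adapts to the subject rather than being fixed.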
Keywords: Dense spatio-temporal-context; Single-Gauss-Model; depth images; fall detection
Year: 2015 PMID: 26378540 PMCID: PMC4610487 DOI: 10.3390/s150923004
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
Figure 1. Flow chart of the proposed fall detection method.
Figure 2. Extraction of human silhouette. (a) Background depth frame; (b) foreground depth frame; (c) extracted human silhouette.
Figure 3. Centroid position of extracted human silhouette.
Figure 4. Head extraction result from the human silhouette in Figure 2.
Figure 5. Extracted results of floor plane from depth images. (a) Definite domain of floor plane; (b) estimated results of floor plane.
Figure 6. Histogram of head and human in depth images. (a) Head depth image; (b) histogram of head in depth image; (c) human subject depth image; (d) histogram of human subject depth image.
Figure 7. Falling down in different orientations. (a) Frames of color images of falling down in the anterior direction; (b) frames of depth images of falling down in the anterior direction; (c) frames of color images of falling down in the posterior direction; (d) frames of depth images of falling down in the posterior direction; (e) frames of color images of falling down in the left direction; (f) frames of depth images of falling down in the left direction; (g) frames of color images of falling down in the right direction; (h) frames of depth images of falling down in the right direction.
Figure 8. Trajectories of the distance from the head and centroid to the floor in different orientations. (a) Falling down from the anterior orientation; (b) falling down from the posterior orientation; (c) falling down from the left orientation; (d) falling down from the right orientation.
Table 1. Time consumption of the proposed method.
| Fall Direction | Time for Total Frames (s) | Time per Frame (ms) | Frame Count |
|---|---|---|---|
| anterior | 3.7241 | 22.8472 | 163 |
| posterior | 5.4960 | 22.3415 | 246 |
| left | 4.1075 | 23.0758 | 178 |
| right | 3.5418 | 23.9311 | 148 |