José Camilo Eraso Guerrero, Elena Muñoz España, Mariela Muñoz Añasco, Jesús Emilio Pinto Lopera.
Abstract
This article presents CAUCAFall, a dataset of ten subjects simulating five types of falls and five types of activities of daily living (ADLs). The falls comprise forward falls, backward falls, left lateral falls, right lateral falls, and falls from a seated position. The participants performed the following ADLs: walking, hopping, picking up an object, sitting, and kneeling. The dataset includes individuals of different ages, weights, heights, and dominant legs. The data were acquired with an RGB camera in a home environment that was intentionally realistic and uncontrolled, featuring occlusions, lighting changes (natural, artificial, and night), varied participant clothing, movement in the background, different textures on the floor and in the room, and a variety of fall angles and camera-to-fall distances. The dataset consists of ten folders, one per subject, each containing ten subfolders for the performed activities; each subfolder holds the video of the action and all the images of that action. CAUCAFall is the only fall dataset that documents the illumination (in lux) of the scenarios, the distances from each human fall to the camera, and the angles of the different falls relative to the camera. It is also the only one that provides a label for every image: frames showing a human fall are labeled "fall", and ADL frames are labeled "nofall". The dataset is useful for developing and evaluating modern fall recognition algorithms, such as those that apply feature extraction, convolutional neural networks with YOLOv3-v4 detectors, and pose-based methods such as OPENPOSE, whose performance depends on camera location and resolution. Because existing datasets were recorded in strictly controlled environments, CAUCAFall also enables a realistic assessment of the state of research in this area.
The authors intend to contribute a dataset with the characteristics of real-world home environments.
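The folder layout described above (one folder per subject, ten activity subfolders each holding the action video, its frames, and per-frame "fall"/"nofall" labels) can be traversed with a short script. Below is a minimal sketch that tallies labels across the dataset; the folder and file names (`Subject.1/Fall forward/frame001.txt`, etc.) are assumptions for illustration and should be checked against the actual download:

```python
from pathlib import Path
from collections import Counter

def count_labels(root: str) -> Counter:
    """Walk subject/activity subfolders and tally 'fall' vs 'nofall' frame labels.

    Assumes a layout of root/<subject>/<activity>/<frame>.txt, where each
    .txt file contains the label for one frame.
    """
    counts = Counter()
    for label_file in Path(root).glob("*/*/*.txt"):
        text = label_file.read_text().lower()
        if "nofall" in text:          # check "nofall" first: "fall" is its substring
            counts["nofall"] += 1
        elif "fall" in text:
            counts["fall"] += 1
    return counts
```

Such a tally is a quick sanity check that all ten subjects and ten activities were extracted correctly before training a detector.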
Keywords: Activities of daily living; Fall detection; Feature extraction; Openpose; Uncontrolled environment; YOLO
Year: 2022 PMID: 36164302 PMCID: PMC9508401 DOI: 10.1016/j.dib.2022.108610
Source DB: PubMed Journal: Data Brief ISSN: 2352-3409
Fig. 1 Fall recognition based on feature extraction.
Fig. 2 Fall recognition based on OPENPOSE.
Fig. 3 Fall recognition based on YOLO detectors.
Comparison of datasets for human fall recognition.
| Dataset | Year | Camera | Light condition | Occlusion | Variety in fall angles | Different distances | File formats | Labels for YOLO | OpenPose performance | Lux | Angle details | Distance details | Availability (June 2022) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Multiple cameras fall dataset | 2010 | RGB | artificial | – | – | | .avi | – | – | – | – | | |
| Le2i | 2012 | RGB | natural, artificial | – | | | .avi | – | – | – | – | – | – |
| SDUFall | 2014 | Kinect | natural, artificial | – | – | – | Depth videos, .avi | – | – | – | – | – | – |
| EDF&OCCU | 2014 | Kinect | artificial | – | | | .txt | – | – | – | – | – | – |
| UR Fall Detection | 2014 | Kinect | artificial | – | – | | .avi, .csv | – | – | – | – | | |
| FUKinect-Fall | 2016 | Kinect | – | – | | | Depth videos, .csv | – | – | – | – | | |
| Fall Detection Dataset | 2017 | RGB, Kinect | natural, artificial | – | – | | .png, .csv | – | – | – | – | | |
| UPFall | 2019 | RGB | natural, artificial | – | – | | .png, .csv | – | – | – | – | | |
| CAUCAFall | 2022 | RGB | natural, artificial, no light | ✓ | ✓ | ✓ | .jpeg, .txt, .avi | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
Characteristics of the participants.
| Subject | Gender | Age | Weight (kg) | Height (m) | Health Conditions | Dominant Leg | Outfit |
|---|---|---|---|---|---|---|---|
| 1 | Female | 27 | 56 | 1.65 | Healthy | Right | Gray jacket, blue pants, black shoes, hair tied. |
| 2 | Male | 34 | 70 | 1.73 | Healthy | Left | Red jersey, blue pants, white shoes. |
| 3 | Female | 31 | 58 | 1.60 | Healthy | Left | Brown jacket, gray pants, blue shoes, loose hair. |
| 4 | Male | 38 | 75 | 1.68 | Healthy | Right | Black jacket, blue pants, gray shoes, cap. |
| 5 | Male | 40 | 67 | 1.70 | Healthy | Right | Black jacket, brown pants, black shoes. |
| 6 | Male | 33 | 77 | 1.65 | Healthy | Right | Black jacket, white pants, brown shoes. |
| 7 | Female | 23 | 54 | 1.59 | Healthy | Right | Gray jersey, black pants, blue shoes, hair tied. |
| 8 | Female | 25 | 59 | 1.63 | Healthy | Right | Blue jersey, gray pants, brown shoes, hair tied. |
| 9 | Male | 37 | 79 | 1.74 | Healthy | Left | Yellow jersey, brown pants, brown shoes. |
| 10 | Female | 28 | 61 | 1.62 | Healthy | Right | Green shirt, purple pants, black shoes, loose hair. |
Fig. 4 Scenario dimensions (in meters).
Fig. 5 Folders for each subject and different activities of the dataset.
Fig. 6 Content of the different .txt files.
Fig. 7 Camera-fall distance.
Fig. 8 Angle of fall.
| Subject | Computer Science |
|---|---|
| Specific subject area | Human fall recognition by computer vision in uncontrolled environments, with a focus on YOLOv3-v4 detectors |
| Type of data | Video |
| How the data were acquired | The data were obtained with a single camera mounted in an upper corner of the room, covering a wide field of view to monitor the user's activity. The camera captured videos under changing lighting or without light. The data were stored on a DVR programmed to detect and record motion. The frame labels, which describe the activities and classify each image as "fall" or "nofall", were created manually with a text editor. |
| Data format | Raw and analyzed |
| Description of data collection | The dataset was designed for recognizing human falls in an uncontrolled home environment, with occlusions, lighting changes (natural, artificial, and night), varied participant clothing, movement in the background, and different textures on the floor and in the room. It is the only dataset that provides the illumination (in lux) of the scenarios, the distance from each human fall to the camera, and the angles of the different falls relative to the camera, and it includes participants of different ages, weights, heights, and even dominant legs. This dataset thereby contributes to real progress in fall recognition research. In addition, it is the only dataset that contains segmentation labels for each of its images; these labels enable human fall recognition methods based on YOLO detectors. |
| Data source location | • Institution: Universidad del Cauca |
| Data accessibility | The dataset is publicly and freely available in the Mendeley Data repository with DOI: |
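The per-image labels that make this dataset usable with YOLO detectors follow, presumably, the standard YOLO annotation convention: one line per object giving a class index and a bounding box normalized to [0, 1]. A sketch of a parser under that assumption; the class-index mapping (`0` for "fall", `1` for "nofall") is hypothetical and must be verified against the dataset's own .txt files (see Fig. 6):

```python
from typing import Tuple

# Hypothetical class mapping; verify against the dataset's label files.
CLASS_NAMES = {0: "fall", 1: "nofall"}

def parse_yolo_label(line: str) -> Tuple[str, float, float, float, float]:
    """Parse one YOLO annotation line:
    '<class> <x_center> <y_center> <width> <height>' (coordinates in [0, 1])."""
    cls, x, y, w, h = line.split()
    return CLASS_NAMES[int(cls)], float(x), float(y), float(w), float(h)

def to_pixel_box(x: float, y: float, w: float, h: float,
                 img_w: int, img_h: int) -> Tuple[int, int, int, int]:
    """Convert a normalized YOLO box to pixel (left, top, right, bottom)."""
    left = int((x - w / 2) * img_w)
    top = int((y - h / 2) * img_h)
    right = int((x + w / 2) * img_w)
    bottom = int((y + h / 2) * img_h)
    return left, top, right, bottom
```

For example, the line `0 0.5 0.5 0.2 0.4` on a 100×100 frame would describe a "fall" box spanning pixels (40, 30) to (60, 70).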