Wei Liu, Xu Liu, Yuan Hu, Jie Shi, Xinqiang Chen, Jiansen Zhao, Shengzheng Wang, Qingsong Hu.
Abstract
To avoid personal injury caused by delayed medical assistance after a seafarer falls while working on board a ship, detecting seafarers' falls and promptly warning safety officers can reduce the losses and severe consequences of such falls. To improve detection accuracy and real-time performance, a seafarer fall detection algorithm based on BlazePose-LSTM is proposed. The algorithm automatically extracts human-body key point information from video images captured by a vision sensor, analyzes the internal correlations in these data, and covers the full process from RGB camera image processing to seafarer fall detection. Key point information is extracted with an optimized BlazePose key point extraction network, for which a new method of obtaining the human bounding box is proposed: a head detector based on Vitruvian theory replaces the pre-trained SSD body detector in the BlazePose preheating module, and an offset vector is introduced to update the resulting bounding box, reducing how often the head detection module must be re-run. The algorithm then uses a long short-term memory (LSTM) neural network to detect seafarer falls. After fall and related behavior data from the URFall and FDD public data sets were added to enrich the self-made data set, experiments show that the algorithm achieves 100% accuracy and 98.5% specificity on seafarers' falling behavior, indicating reasonable practicability and strong generalization ability. The detection frame rate reaches 29 fps on a CPU, which is sufficient for real-time detection, and the proposed method can be deployed on common vision sensors.
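The bounding-box acquisition described in the abstract can be sketched as follows. The `Box` layout, the 8:1 head-to-body height ratio, and the 2:1 width ratio are illustrative assumptions drawn from common readings of Vitruvian proportions, not values taken from the paper:

```python
from dataclasses import dataclass

@dataclass
class Box:
    x: float  # center x (pixels)
    y: float  # center y (pixels)
    w: float  # width
    h: float  # height

# Illustrative assumption: Vitruvian proportions put total body height
# at roughly eight head heights and body width at roughly two head widths.
HEAD_TO_BODY_H = 8.0
HEAD_TO_BODY_W = 2.0

def body_box_from_head(head: Box) -> Box:
    """Estimate a full-body bounding box from a detected head box,
    anchoring the head at the top of the body box."""
    body_h = head.h * HEAD_TO_BODY_H
    body_y = head.y - head.h / 2 + body_h / 2
    return Box(head.x, body_y, head.w * HEAD_TO_BODY_W, body_h)

def update_box(prev: Box, dx: float, dy: float) -> Box:
    """Shift the previous frame's box by an inter-frame offset vector
    instead of re-running the head detector on every frame."""
    return Box(prev.x + dx, prev.y + dy, prev.w, prev.h)
```

Updating the box by an offset vector is what lets the head detector run only occasionally rather than on every frame, which is the source of the reported real-time CPU performance.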
Keywords: BlazePose; deep learning; fall detection; long short-term memory neural network
Year: 2022 PMID: 35891143 PMCID: PMC9317772 DOI: 10.3390/s22145449
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.847
Figure 1. Vitruvian theory of the human body.
Figure 2. The network structure of BlazePose.
Figure 3. Human-body key point topology map of BlazePose.
Figure 4. The detection process of the BlazePose human key point information extraction network.
Figure 5. The detection process of the optimized human-body key point information extraction network based on BlazePose.
Figure 6. The change curves of human-body key point information. (a) Falling. (b) Walking.
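Figure 6 contrasts how key point coordinates evolve during falling versus walking. A minimal illustration of why these curves are separable is the vertical velocity of a single key point; this hand-crafted feature and the hip-height traces below are hypothetical, for intuition only, as the paper instead feeds the key point sequences to an LSTM:

```python
def vertical_velocity(ys, fps=29.0):
    """Frame-to-frame vertical velocity of a key point's normalized
    y-coordinate; image y grows downward, so a fall produces large
    positive values."""
    return [(b - a) * fps for a, b in zip(ys, ys[1:])]

# Hypothetical normalized hip-height traces over five frames:
walking = [0.50, 0.51, 0.50, 0.52, 0.51]
falling = [0.50, 0.58, 0.70, 0.85, 0.95]
```

On these traces the peak downward velocity during the fall is an order of magnitude larger than during walking, which is the kind of temporal regularity an LSTM can learn directly from the raw key point sequence.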
Comparison of LSTM and RNN.

| Network | Accuracy | Verification Images | Image Resolution |
|---|---|---|---|
| LSTM | 89% | 500 | 1920 × 1080 |
| RNN | 36% | 500 | 1920 × 1080 |
| LSTM | 97% | 100 | 720 × 480 |
| RNN | 91% | 100 | 720 × 480 |
Figure 7. Unit structure diagram of LSTM [34].
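The gating in Figure 7 is that of a standard LSTM cell; a single forward step can be sketched in NumPy as below. The weight layout (four gates stacked row-wise) is a common convention assumed here, not necessarily the paper's exact configuration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, b):
    """One step of a standard LSTM cell.
    W has shape (4*hidden, hidden + inputs); b has shape (4*hidden,);
    gate rows are stacked as forget, input, candidate, output."""
    hidden = h_prev.shape[0]
    z = W @ np.concatenate([h_prev, x]) + b
    f = sigmoid(z[:hidden])                # forget gate
    i = sigmoid(z[hidden:2 * hidden])      # input gate
    g = np.tanh(z[2 * hidden:3 * hidden])  # candidate cell state
    o = sigmoid(z[3 * hidden:])            # output gate
    c = f * c_prev + i * g                 # updated cell state
    h = o * np.tanh(c)                     # updated hidden state
    return h, c
```

The multiplicative forget and input gates are what let the cell retain context across many frames, which explains the accuracy gap over a plain RNN in the table above.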
Self-developed dataset construction.
| Sample | Age | Height | Weight | Sex | Environment |
|---|---|---|---|---|---|
| Sample 1 | 24 | 168 cm | 62 kg | Male | Field |
| Sample 2 | 25 | 162 cm | 47 kg | Female | Field |
| Sample 3 | 39 | 176 cm | 74 kg | Male | Cabin |
Dataset source statistics.
| Dataset Source | Data Quantity | Data Proportion | Data Acquisition |
|---|---|---|---|
| Self-made dataset | 3770 | 33.38% | RGB Camera |
| URFall public dataset | 2995 | 26.52% | Kinect Camera |
| FDD public dataset | 4527 | 40.09% | RGB Camera |
Experimental hardware configuration.
| Experimental Conditions | Parameters |
|---|---|
| CPU | Intel(R) Xeon(R) E5-1650 v2 @ 3.50 GHz |
| GPU | GeForce GTX 970 |
| Memory | 8 GB |
| Hard disk | 1 TB |
| System | Windows 10 Professional Edition |
| Language | Python 3.8 |
| Framework | TensorFlow 1.15.5 |
| Software | Jupyter Notebook |
Figure 8. Iterative process of model loss.
Figure 9. Iterative process of model accuracy.
Figure 10. Confusion matrix of training results.
Figure 11. Detection results. (Note: ST, sitting; SD, standing; WK, walking; FL, falling.)
Performance comparison between the BlazePose-LSTM seafarer fall detection model and other models.
| Models | Accuracy | Specificity |
|---|---|---|
| OpenPose-YOLO | 95.43% | 96.8% |
| CNN | 96.97% | 95.44% |
| Stacked LSTM | 96.94% | 97.15% |
| BlazePose-LSTM | 100% | 98.5% |
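The accuracy and specificity figures in the table follow from confusion-matrix counts like those in Figure 10, treating "fall" as the positive class. The counts in the usage example below are hypothetical, not the paper's:

```python
def accuracy(tp, tn, fp, fn):
    """Fraction of all samples classified correctly."""
    return (tp + tn) / (tp + tn + fp + fn)

def specificity(tn, fp):
    """True-negative rate: non-fall samples correctly rejected."""
    return tn / (tn + fp)

# Hypothetical counts: 90 falls caught, 10 missed, 95 non-falls
# rejected, 5 false alarms.
acc = accuracy(tp=90, tn=95, fp=5, fn=10)   # 0.925
spec = specificity(tn=95, fp=5)             # 0.95
```

Specificity matters here because a shipboard alarm with frequent false positives would quickly be ignored by safety officers, so the 98.5% figure is as operationally relevant as the headline accuracy.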
Figure 12. Detection results.