Xia Hua, Lei Han, Yang Jiang.
Abstract
With the rapid development of the Internet, electronic products based on computer vision play an increasingly important role in daily life. Human action recognition, one of the central topics of computer vision, has become a major research hotspot in recent years. Action recognition algorithms based on convolutional neural networks (CNNs) can automatically extract and learn human motion features and achieve good classification performance. However, deep CNNs usually have many layers, a large number of parameters, and a large memory footprint, while embedded wearable devices have limited memory. Under the conventional training scheme driven by a global cross-entropy error, the parameters and activations of all hidden layers must be kept in memory and cannot be released until both the forward pass and the backward error propagation have finished. The memory holding the hidden-layer state therefore cannot be released and reused early, memory utilization is low, and the network suffers from the backward-locking problem, which limits the deployment and execution of deep CNNs on wearable sensor devices.

To address this, this work designs a local-error convolutional neural network model for human action recognition tasks. In contrast to the traditional global error, the local error constructed in this paper allows the CNN to be trained layer by layer: the parameters of each layer are updated independently from that layer's local error and do not depend on gradients propagated from the adjacent upper and lower layers. As a result, the memory used to store each hidden layer's parameters and activations can be released early, without waiting for the end of the full forward and backward passes, avoiding the backward-locking problem and improving the memory utilization of CNNs deployed on embedded wearable devices.
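The local-error idea described in the abstract can be illustrated with a minimal NumPy sketch (this is an illustration of the general layer-wise training principle, not the authors' actual model): each hidden layer carries its own auxiliary classifier, its local cross-entropy gradient is computed only within the layer, and the incoming activation is treated as a constant, so no gradient crosses layer boundaries and each layer's buffers could in principle be freed as soon as its local update is done.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

class LocalLayer:
    """One hidden layer trained by its own auxiliary classifier (local error).

    The input activation is treated as a constant, so no gradient is
    propagated to earlier layers (no backward locking across layers)."""
    def __init__(self, n_in, n_hidden, n_classes, lr=0.05):
        self.W = rng.normal(0, 0.1, (n_in, n_hidden))
        self.b = np.zeros(n_hidden)
        self.A = rng.normal(0, 0.1, (n_hidden, n_classes))  # auxiliary head
        self.c = np.zeros(n_classes)
        self.lr = lr

    def train_step(self, x, y_onehot):
        h_pre = x @ self.W + self.b
        h = np.maximum(h_pre, 0.0)                 # ReLU activation
        p = softmax(h @ self.A + self.c)           # local class prediction
        n = x.shape[0]
        labels = y_onehot.argmax(1)
        loss = -np.log(p[np.arange(n), labels] + 1e-12).mean()
        # Local gradients only; nothing flows back to earlier layers.
        dlogits = (p - y_onehot) / n
        dA = h.T @ dlogits
        dh = (dlogits @ self.A.T) * (h_pre > 0)    # ReLU mask
        dW = x.T @ dh
        self.A -= self.lr * dA; self.c -= self.lr * dlogits.sum(0)
        self.W -= self.lr * dW; self.b -= self.lr * dh.sum(0)
        return h, loss  # h is the (detached) input to the next layer

# Toy two-class problem standing in for windowed sensor data.
X = rng.normal(size=(200, 16))
y = (X[:, :8].sum(1) > 0).astype(int)
Y = np.eye(2)[y]

layers = [LocalLayer(16, 32, 2), LocalLayer(32, 32, 2)]
for epoch in range(200):
    h = X
    for layer in layers:
        h, loss = layer.train_step(h, Y)  # each layer minimizes its own error
```

The layer and head names here are hypothetical; the point is structural: because each `train_step` closes its own gradient computation, training proceeds layer by layer and no layer waits on a global backward pass.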
Year: 2022 PMID: 35800705 PMCID: PMC9256384 DOI: 10.1155/2022/6988525
Source DB: PubMed Journal: Comput Intell Neurosci
Figure 1. Partially trained model.
Figure 2. The effect of joint weight parameters.
UCI HAR model parameter settings.
| Model parameters | Parameter setting |
| --- | --- |
| Number of sliding convolution layers | 3 |
| Number of convolution kernels | 128, 256, 384 |
| Training epochs | 500 |
| Batch size | 200 |
| Dynamic learning rate | (4, 1, 0.9, 0.7, 0.5) × 10⁻³ |
Figure 3. Error comparison between the UCI HAR local error model and the baseline model.
OPPORTUNITY model parameter settings.
| Model parameters | Parameter setting |
| --- | --- |
| Number of sliding convolution layers | 3 |
| Number of convolution kernels | 128, 256, 384 |
| Training epochs | 500 |
| Batch size | 200 |
| Learning rate | 0.001 |
Figure 4. Error comparison between the OPPORTUNITY local error model and the baseline model.
UniMib-SHAR model parameter settings.
| Model parameters | Parameter setting |
| --- | --- |
| Number of sliding convolution layers | 3 |
| Number of convolution kernels | 128, 256, 384 |
| Training epochs | 500 |
| Batch size | 200 |
| Dynamic learning rate | (3, 1.5, 0.9) × 10⁻³ |
Figure 5. Comparison of errors between the UniMib-SHAR local error model and the baseline model.
PAMAP2 model parameter settings.
| Model parameters | Parameter setting |
| --- | --- |
| Number of sliding convolution layers | 3 |
| Number of convolution kernels | 128, 256, 384 |
| Training epochs | 500 |
| Batch size | 200 |
| Learning rate | 0.0005 |
Figure 6. Error comparison between the PAMAP2 local error model and the baseline model.