| Literature DB >> 33525084 |
Weibin Jiang1, Xuelin Ye2, Ruiqi Chen1,3, Feng Su3,4, Mengru Lin1, Yuhanxiao Ma5, Yanxiang Zhu3, Shizhen Huang1.
Abstract
Gesture recognition is critical in the field of Human-Computer Interaction, especially in healthcare, rehabilitation, and sign language translation. Conventionally, gesture data collected by inertial measurement unit (IMU) sensors are relayed to the cloud or to a remote device with higher computing power for model training. However, this approach is inconvenient for remote follow-up in movement rehabilitation training. In this paper, based on a field-programmable gate array (FPGA) accelerator and the Cortex-M0 IP core, we propose a wearable deep learning system capable of processing data locally on the end device. With a pre-stage processing module and a serial-parallel hybrid method, the device achieves low power and low latency at the micro control unit (MCU) level while matching or exceeding the performance of single-board computers (SBCs); for example, its performance is more than twice that of the Cortex-A53 (commonly used in the Raspberry Pi). Moreover, a convolutional neural network (CNN) and a multilayer perceptron neural network (NN) are used in the recognition model to extract features and classify gestures, achieving a high recognition accuracy of 97%. Finally, this paper offers a software-hardware co-design method that is worth referencing for the design of edge devices in other scenarios.
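The abstract's recognition pipeline (a CNN to extract features from IMU windows, followed by a small fully connected NN to classify gestures) can be sketched in plain Python. This is a minimal illustration only: the window shape, kernel sizes, and weights below are hypothetical, and the paper's actual trained model runs on the FPGA accelerator, not in software.

```python
# Hypothetical sketch of a CNN-feature + MLP-classifier pipeline for IMU data.
# Assumes a 6-channel window (3-axis accelerometer + 3-axis gyroscope);
# all shapes and weights here are illustrative, not from the paper.
import random

def conv1d_relu(window, kernels, bias):
    """1-D convolution over channels with ReLU; window: [C][T], kernels: [F][C][K]."""
    T, K = len(window[0]), len(kernels[0][0])
    out = []
    for f, kern in enumerate(kernels):
        row = []
        for t in range(T - K + 1):
            s = bias[f]
            for c in range(len(window)):
                for k in range(K):
                    s += kern[c][k] * window[c][t + k]
            row.append(max(s, 0.0))  # ReLU activation
        out.append(row)
    return out

def global_avg_pool(feature_maps):
    """Average each feature map down to one scalar."""
    return [sum(row) / len(row) for row in feature_maps]

def mlp(x, weights, bias):
    """Single fully connected layer producing one score per gesture class."""
    return [b + sum(w_i * x_i for w_i, x_i in zip(w, x))
            for w, b in zip(weights, bias)]

def classify(window, kernels, kbias, weights, mbias):
    feats = global_avg_pool(conv1d_relu(window, kernels, kbias))
    scores = mlp(feats, weights, mbias)
    return scores.index(max(scores))  # argmax = predicted gesture class

# Demo with random weights (real weights would come from training).
random.seed(0)
C, T, F, K, CLASSES = 6, 32, 4, 5, 3
window  = [[random.uniform(-1, 1) for _ in range(T)] for _ in range(C)]
kernels = [[[random.uniform(-1, 1) for _ in range(K)] for _ in range(C)]
           for _ in range(F)]
kbias   = [0.0] * F
weights = [[random.uniform(-1, 1) for _ in range(F)] for _ in range(CLASSES)]
mbias   = [0.0] * CLASSES
gesture = classify(window, kernels, kbias, weights, mbias)
```

On the actual device, the convolution and dense layers would be executed by the FPGA accelerator rather than in software; the structure of the computation, however, is the same.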
Keywords: accelerator ; convolutional neural network (CNN) ; field-programmable gate array (FPGA) ; gesture recognition ; inertial measurement unit (IMU) ; micro-control unit (MCU)
Year: 2020 PMID: 33525084 DOI: 10.3934/mbe.2021007
Source DB: PubMed Journal: Math Biosci Eng ISSN: 1547-1063 Impact factor: 2.080