Diu K Luu1, Anh T Nguyen1,2, Ming Jiang3, Jian Xu1, Markus W Drealan1, Jonathan Cheng4,5, Edward W Keefer5, Qi Zhao3, Zhi Yang1,2.
Abstract
Previous literature shows that deep learning is an effective tool for decoding motor intent from neural signals obtained from different parts of the nervous system. However, deep neural networks are often computationally complex and not feasible to run in real time. Here we investigate the advantages and disadvantages of different approaches for enhancing the efficiency of the deep learning-based motor decoding paradigm and for informing its future real-time implementation. Our data are recorded from an amputee's residual peripheral nerves. While the primary analysis is offline, the nerve data are segmented with a sliding window to create a "pseudo-online" dataset that resembles the conditions of a real-time paradigm. First, a comprehensive collection of feature extraction techniques is applied to reduce the dimensionality of the input data, which substantially lowers the motor decoder's complexity and makes it feasible to translate to a real-time paradigm. Next, we investigate two strategies for deploying deep learning models: a one-step (1S) approach when large input data are available and a two-step (2S) approach when input data are limited. This research predicts five individual finger movements and four combinations of the fingers. The 1S approach, which uses a recurrent neural network (RNN) to concurrently predict all fingers' trajectories, generally gives better prediction results than all the machine learning algorithms performing the same task. This result reaffirms that deep learning is more advantageous than classic machine learning methods for handling large datasets. However, when training on a smaller input dataset in the 2S approach, which includes a classification stage to identify active fingers before predicting their trajectories, machine learning techniques offer a simpler implementation while ensuring decoding outcomes comparably good to the deep learning ones.
In the classification step, both the machine learning and deep learning models achieve an accuracy and F1 score of 0.99. Thanks to the classification step, in the regression step both types of models achieve mean squared error (MSE) and variance accounted for (VAF) scores comparable to those of the 1S approach. Our study outlines these trade-offs to inform the future implementation of real-time, low-latency, and high-accuracy deep learning-based motor decoders for clinical applications.
Keywords: convolutional neural network; deep learning; feature extraction; motor decoding; neural decoder; neuroprosthesis; peripheral nerve interface; recurrent neural network
Year: 2021 PMID: 34248481 PMCID: PMC8260935 DOI: 10.3389/fnins.2021.667907
Source DB: PubMed Journal: Front Neurosci ISSN: 1662-453X Impact factor: 4.677
Figure 1. Photo of the (A) amputee and (B) the data collection software during a training session. The patient performs various hand movements repeatedly during the training session. Nerve data and ground-truth movements are collected by a computer and displayed in real-time on the monitor for comparison.
Figure 2. Overview of the human experiment setup and data acquisition using the mirrored bilateral training. The patient has four FAST-LIFE microelectrode arrays implanted in the residual ulnar and median nerve (Overstreet, 2019). Peripheral nerve signals are acquired by two Scorpius neural interface devices (Nguyen and Xu, 2020). The ground-truth movements are obtained with a data glove.
Figure 3. (A) Illustration of the sliding windows used to segment neural data into a pseudo-online dataset that resembles conditions in online decoding. (B) Illustration of the process to compute the feature data.
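The sliding-window segmentation in Figure 3A can be sketched in a few lines of pure Python. The window length and stride below are illustrative placeholders, not the values used in the study:

```python
def sliding_windows(signal, win_len, step):
    """Segment a 1-D signal into overlapping windows, emulating the
    buffers a real-time decoder would receive one at a time."""
    windows = []
    start = 0
    while start + win_len <= len(signal):
        windows.append(signal[start:start + win_len])
        start += step
    return windows

# Toy example: 10 samples, 4-sample window, 2-sample stride
data = list(range(10))
print(sliding_windows(data, 4, 2))
# → [[0, 1, 2, 3], [2, 3, 4, 5], [4, 5, 6, 7], [6, 7, 8, 9]]
```

Each window then passes through feature extraction before reaching the decoder, so overlap between consecutive windows trades latency against temporal resolution.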
List of features and their descriptions.

| ID | Feature | Description |
| --- | --- | --- |
| F1 | Zero crossing (ZC) | The number of times the demeaned data change sign. |
| F2 | Slope sign changes (SSC) | The number of times the differential data change sign. |
| F3 | Waveform length (WL) | The summation of the absolute values of the differential data. |
| F4 | Wilson amplitude (WA) | The number of times the change in the signal amplitudes of two consecutive samples exceeds the standard deviation. |
| F5 | Mean absolute (MAB) | The average of the absolute values of the data. |
| F6 | Mean square (MSQ) | The average of the square values of the data. |
| F7 | Root mean square (RMS) | The root of MSQ, or v-order 2. |
| F8 | V-order 3 (V3) | The cubic root of the average of the cube of the data. |
| F9 | Log detector (LD) | The exponential of the average of the log data. |
| F10 | Difference absolute standard deviation (DABS) | Standard deviation of the absolute of the differential data. |
| F11 | Maximum fractal length (MFL) | Equivalent to the log of DABS minus an offset equal to 1/2·log(1/(N − 1)), where N is the number of samples in the window. |
| F12 | Myopulse percentage rate (MPR) | The number of times the absolute of the data exceeds the standard deviation. |
| F13 | Mean absolute value slope (MAVS) | A modified version of MAB that is the difference between the MAB of the first half of a signal window and that of the second half. |
| F14 | Weighted mean absolute (WMA) | A modified version of MAB where the first and last 25% of a signal window are given less weight than the middle 50%. |
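A few of the tabulated features can be sketched directly from their descriptions. This is a minimal pure-Python illustration of F1, F3, and F7 on a single channel window; the study's exact normalization and window parameters are not restated here:

```python
import math

def zero_crossings(x):
    # F1: count sign changes of the demeaned signal
    m = sum(x) / len(x)
    d = [v - m for v in x]
    return sum(1 for a, b in zip(d, d[1:]) if a * b < 0)

def waveform_length(x):
    # F3: sum of absolute first differences
    return sum(abs(b - a) for a, b in zip(x, x[1:]))

def root_mean_square(x):
    # F7: square root of the mean of the squared samples
    return math.sqrt(sum(v * v for v in x) / len(x))

window = [0.0, 1.0, -1.0, 2.0, -2.0, 1.0]
print(zero_crossings(window))    # → 5
print(waveform_length(window))   # → 13.0
```

In the paper's pipeline, each of the 16 recording channels yields one scalar per feature per window, so a feature vector is far smaller than the raw window it summarizes, which is what lowers the decoder's input dimensionality.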
Figure 4. An example of feature data in one trial, showing a clear correlation with the finger's movement. A trial includes the finger's movement from resting to fully flexing and back to resting. Each color represents one of the 16 recording channels. The amplitude of each feature is normalized by a fixed value.
Figure 5. Illustration of the (A) two-step (2S) and (B) one-step (1S) strategies for deploying deep learning models.
Figure 6. Architecture of the deep learning models: (A) CNN for classification, (B) RNN for classification, (C) CNN for regression, (D) RNN for regression.
Comparison between this work and Nguyen and Xu (2020).
| Nguyen and Xu (2020) | 21 | 2 | 3 | 25,927,050 |
| This work (classification) | 1 | 0 | 3 | 1,465,749 |
| This work (regression) | 3 | 2 | 0 | 767,200 |
Classification performance.
| SVM | 0.999 | 0.767 | 0.916 | 0.895 | 0.932 | 0.999 | 0.808 | 0.911 | 0.771 | 0.677 |
| RF | 0.999 | 0.975 | 0.996 | 0.992 | 0.988 | 0.999 | 0.976 | 0.996 | 0.980 | 0.981 |
| MLP | 0.999 | 0.965 | 0.973 | 0.966 | 0.970 | 0.999 | 0.965 | 0.972 | 0.954 | 0.945 |
| CNN | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 |
| RNN | 0.999 | 0.948 | 0.988 | 0.970 | 0.994 | 0.999 | 0.957 | 0.987 | 0.950 | 0.994 |
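The accuracy and F1 scores reported in the classification table can be computed from matched label sequences. A minimal sketch for the binary (one-vs-rest) case, using standard definitions rather than the paper's evaluation code:

```python
def accuracy_f1(y_true, y_pred, positive=1):
    """Binary accuracy and F1 score from paired label lists."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(t == positive and p == positive for t, p in pairs)
    fp = sum(t != positive and p == positive for t, p in pairs)
    fn = sum(t == positive and p != positive for t, p in pairs)
    acc = sum(t == p for t, p in pairs) / len(pairs)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return acc, f1

print(accuracy_f1([1, 1, 0, 0], [1, 0, 0, 0]))
# → (0.75, 0.6666666666666666)
```

For the nine-class problem (five fingers plus four combinations), a per-class F1 averaged over classes is the usual extension of this binary form.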
Figure 7. Regression performance in terms of MSE (A,B) and VAF (C,D).
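The two regression metrics in Figure 7 have standard definitions: MSE is the mean squared residual, and VAF is commonly defined as one minus the ratio of residual variance to target variance. A pure-Python sketch under that common VAF definition (the paper may normalize slightly differently):

```python
def mse(y, yhat):
    # Mean squared error between target and predicted trajectories
    return sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y)

def vaf(y, yhat):
    # Variance accounted for: 1 - Var(residual) / Var(target)
    def var(v):
        m = sum(v) / len(v)
        return sum((x - m) ** 2 for x in v) / len(v)
    resid = [a - b for a, b in zip(y, yhat)]
    return 1.0 - var(resid) / var(y)

y = [0.0, 1.0, 2.0, 3.0]
print(mse(y, [0.5, 1.5, 2.5, 3.5]))  # → 0.25
print(vaf(y, y))                     # → 1.0
```

Note that VAF ignores a constant offset (a uniformly shifted prediction still scores 1.0), which is why the paper reports MSE alongside it.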