Deep Learning Movement Intent Decoders Trained With Dataset Aggregation for Prosthetic Limb Control.

Henrique Dantas, David J Warren, Suzanne M Wendelken, Tyler S Davis, Gregory A Clark, V John Mathews.   

Abstract

SIGNIFICANCE: The performance of traditional approaches to decoding movement intent from electromyograms (EMGs) and other biological signals commonly degrades over time. Furthermore, conventional algorithms for training neural network based decoders may not perform well outside the domain of the state transitions observed during training. The work presented in this paper mitigates both problems, resulting in an approach that has the potential to substantially improve the quality of life of people with limb loss.
OBJECTIVE: This paper presents and evaluates the performance of four methods for decoding volitional movement intent from intramuscular EMG signals.
METHODS: The decoders are trained using the dataset aggregation (DAgger) algorithm, in which the training dataset is augmented during each training iteration based on the decoded estimates from previous iterations. Four competing decoding methods, namely polynomial Kalman filters (KFs), multilayer perceptron (MLP) networks, convolutional neural networks (CNNs), and long short-term memory (LSTM) networks, were developed. The performance of the four decoding methods was evaluated using EMG datasets recorded from two human volunteers with transradial amputation. Short-term analyses, in which the training and cross-validation data came from the same dataset, and long-term analyses, in which training and testing were done on different datasets, were performed.
RESULTS: Short-term analyses of the decoders demonstrated that CNN and MLP decoders performed significantly better than KF and LSTM decoders, showing an improvement of up to 60% in the normalized mean-square decoding error in cross-validation tests. Long-term analyses indicated that the CNN, MLP, and LSTM decoders performed significantly better than a KF-based decoder for most of the analyzed temporal separations (0-150 days) between the acquisition of the training and testing datasets.
CONCLUSION: The short-term and long-term performances of MLP- and CNN-based decoders trained with DAgger demonstrated their potential to provide more accurate and naturalistic control of prosthetic hands than alternate approaches.
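The METHODS section describes DAgger training: on each iteration, the dataset is augmented with the states actually visited by the current decoder, labeled with the known (expert) movement intent, and the decoder is retrained on the aggregate. The sketch below illustrates that loop under loudly stated assumptions: the synthetic EMG and kinematics arrays are hypothetical stand-ins for the recorded data, and a linear least-squares decoder stands in for the paper's MLP/CNN/LSTM decoders. This is not the authors' implementation, only a minimal illustration of the aggregation idea.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for recorded data: EMG features and the
# "expert" labels (the true movement intent, e.g. recorded kinematics).
T, n_feat = 500, 8
emg = rng.normal(size=(T, n_feat))
w_true = rng.normal(size=n_feat)
kin = emg @ w_true  # expert labels

def fit(X, y):
    # Least-squares linear decoder: a placeholder for the paper's
    # MLP/CNN/LSTM decoders.
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def rollout(w, emg):
    # Closed-loop decoding: each input includes the previous *decoded*
    # estimate, so errors compound over time -- exactly the
    # state-distribution mismatch that DAgger is designed to correct.
    prev = 0.0
    X = np.empty((len(emg), emg.shape[1] + 1))
    for t, e in enumerate(emg):
        X[t] = np.concatenate([e, [prev]])
        prev = X[t] @ w
    return X

# Iteration 0: train on expert-visited states (previous output = truth).
X_agg = np.column_stack([emg, np.concatenate([[0.0], kin[:-1]])])
y_agg = kin.copy()
w = fit(X_agg, y_agg)

for it in range(5):
    X_visited = rollout(w, emg)           # states the decoder visits
    X_agg = np.vstack([X_agg, X_visited])  # aggregate the dataset
    y_agg = np.concatenate([y_agg, kin])   # expert labels for those states
    w = fit(X_agg, y_agg)                  # retrain on the aggregate

mse = np.mean((rollout(w, emg) @ w - kin) ** 2)
```

Because the aggregated dataset covers the states the decoder itself produces, the retrained decoder remains accurate when its own (possibly imperfect) past outputs are fed back, which is the failure mode of decoders trained only on expert-generated state transitions.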

Year:  2019        PMID: 30835207     DOI: 10.1109/TBME.2019.2901882

Source DB:  PubMed          Journal:  IEEE Trans Biomed Eng        ISSN: 0018-9294            Impact factor:   4.538


Related articles: 2 in total

1.  Activities of daily living with bionic arm improved by combination training and latching filter in prosthesis control comparison.

Authors:  Michael D Paskett; Mark R Brinton; Taylor C Hansen; Jacob A George; Tyler S Davis; Christopher C Duncan; Gregory A Clark
Journal:  J Neuroeng Rehabil       Date:  2021-02-25       Impact factor: 4.262

2.  Motor Imagery Analysis from Extensive EEG Data Representations Using Convolutional Neural Networks.

Authors:  Vicente A Lomelin-Ibarra; Andres E Gutierrez-Rodriguez; Jose A Cantoral-Ceballos
Journal:  Sensors (Basel)       Date:  2022-08-15       Impact factor: 3.847
