Literature DB >> 36004856

A Decoding Prediction Model of Flexion and Extension of Left and Right Feet from Electroencephalogram.

Abeer Abdulaziz AlArfaj1, Hanan A Hosni Mahmoud1, Alaaeldin M Hafez2.   

Abstract

Detection of limb motor functions from brain signals is a significant technique in the brain signal gain model (BSM) that can be effectively employed in various biomedical applications. Our research presents a novel technique for predicting foot motor functions by applying a deep learning model with a cascading transfer learning technique that uses the electroencephalogram (EEG) in the training stage. Our research draws on EEG data from stroke patients to propose functioning high-tech interfaces for predicting left and right foot motor functions. This paper presents transfer learning with several source input domains to serve a target domain with a small input size; transfer learning can effectively shorten the learning curve. The presented model is evaluated on its ability to detect motor functions of the left and right feet. Extensive experiments proved that the introduced BSM-EEG neural network with transfer learning reached higher accuracy: the model achieved a prediction accuracy of 97.5% with less CPU time. These results confirm that the BSM-EEG neural model can predict motor functions for brain-injured stroke therapy.

Keywords:  machine learning; motor function therapy; transfer learning

Year:  2022        PMID: 36004856      PMCID: PMC9404826          DOI: 10.3390/bs12080285

Source DB:  PubMed          Journal:  Behav Sci (Basel)        ISSN: 2076-328X


1. Introduction

Most patients with stroke have motor function deficiency in both the left and right feet [1], causing a substantial loss of motor occupation and daily activities [1,2,3]. Stroke therapy aims to stimulate motor recovery and restore the motor function of both feet. A main rehabilitation process is the understanding of EEG signals to supply a non-invasive solution for the brain signal gain model (BSM), employed in all EEG signal models. BSM systems include the following steps: EEG reading, signal processing, and a controller [1,2,3,4]. Motor functions of the left and right feet target objects in the surroundings [4,5]. The brain signal gain model (BSM) is a learning model that can capture EEG signals and convert them into motion functions. BSMs are extensively found in brain-injured therapy cases, where the brain signals provide a non-intrusive solution for the BSM. The BSM models the steady state of the EEG signals for motor function feedback systems [5,6]; the recorded signals contain disparities of the muscles driven by the brain signals [7,8,9,10,11]. In our research, EEGs are captured from stroke patients with motor function disabilities. BSM systems can stimulate the motor-function-lacking body part to regain the nerves of the injured parts (the left and right feet in our case). Deep learning models are usually applied in BSM schemes for spatial feature selection, classification, and recognition [10,11,12,13,14]. The researchers in [15] presented a support vector machine to classify motor signals from images. The researchers in [16] introduced a score prediction technique and achieved accurate classification [17]. EEG signals have also been investigated in a deep learning prediction model that outperformed previous models, especially on large datasets. Deep learning models can label properties without geometrical engineering, which positions neural network structures as feature selectors for EEG brain signals in BSM systems.
Current models employ deep learning systems to capture deep features. The researchers in [15] introduced a neural network with an auto-encoder that attained higher classification precision than prior models on the BSM-2b sets. Researchers in [16] presented a deep belief prediction model using the Boltzmann model. Researchers in [17] computed the envelope map of EEG signals with the Hilbert technique and constructed a motor imagery-based BSM prediction deep model; applied to the BSM EEG-2 dataset, it achieved the most advanced prediction accuracy reported. Researchers in [16] utilized a deep learning depiction of multi-channel EEG signals to enhance accuracy. The researchers in [17] built 3D feature vectors of the EEG data with a parallel CNN model. The model in [18] attained high accuracy. Deep learning techniques use EEG feature mining and achieve higher precision [18,19,20,21]. However, feature mining becomes difficult because of the medical condition of stroke patients: EEG capture is hard, which limits the construction of large databases. The use of these systems for motor function spatial studies in stroke cases is therefore limited. Our model incorporates transfer learning methodologies to efficiently reduce the size of the required training set [22,23,24,25]. Transfer learning exploits incident similarities and sub-parameter inheritance [25,26,27,28]; these parameters can be reused on a reduced dataset and can increase the effectiveness of EEG feature learning models [28,29,30,31,32,33]. Our research contributions are summarized as follows:

- Designing a deep learning neural system with a number of additional modules and cascading transfer learning stages.
- Improving the precision of the BSM system for predicting the motor functions of stroke patients from their EEG signals.
- Proposing an extension to DenseNet using parameter tuning and transfer learning (BSM-EEG).
- Confirming the accuracy of the proposed model through comparison with similar published models.

The remainder of this paper is organized as follows. The dataset description is presented in Section 2. The proposed model with transfer learning is presented in Section 3. The experiments and performance comparison are depicted in Section 4. Section 5 concludes the work.

2. Materials and Methods

2.1. Data Description

The dataset contains the EEG data of 100 cases (an average of 20 different motor functions per case). The EEG signal per case lasts 3.5 s, as depicted in Figure 1. The public dataset holds records of the EEG signals of each patient while performing different motor functions of the left and right feet, followed by a two-second rest period. The public dataset can be accessed by registration from https://www.bbci.de/competition/iv/#dataset2a (accessed on 12 May 2022) and https://www.bbci.de/competition/iii/#data_set_iiia (accessed on 15 May 2022).
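Given this protocol (3.5 s of movement followed by 2 s of rest), a continuous recording can be cut into fixed-length epochs. The sketch below is illustrative only: the 250 Hz sampling rate is an assumption (the text does not state it) and the data are synthetic.

```python
import numpy as np

FS = 250          # sampling rate in Hz (assumed, not stated in the text)
TRIAL_S = 3.5     # each motor-function trial lasts 3.5 s
REST_S = 2.0      # two-second rest period after each trial

def segment_trials(recording, n_trials):
    """Slice a continuous single-channel recording into movement epochs,
    skipping the rest gap that follows each trial."""
    trial_len = int(TRIAL_S * FS)
    rest_len = int(REST_S * FS)
    epochs = [recording[i * (trial_len + rest_len):
                        i * (trial_len + rest_len) + trial_len]
              for i in range(n_trials)]
    return np.stack(epochs)

# 20 trials of synthetic data, matching the average trials-per-case count
rec = np.random.randn(20 * int((TRIAL_S + REST_S) * FS))
epochs = segment_trials(rec, 20)
print(epochs.shape)  # -> (20, 875): 20 epochs of 3.5 s at 250 Hz
```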
Figure 1

EEG signal recording versus time in seconds.

These data items are recorded and labeled in a public dataset that we utilized for our experiments [15]. The motor functions of the foot are depicted in Figure 2. Figure 2a displays the flexion and extension of the foot in the ranges of 0–30° and 0–50°, respectively. Figure 2b displays the flexion and extension of the foot in the vertical position. Figure 2c displays the pronation and supination of the foot in the ranges of 0–30° and 0–60°, respectively. The statistics of the left and right foot motor function data, extracted from the public dataset in [15], are shown in Table 1. The recorded data cover the foot with all its reflexes. The dataset statistics are depicted in Table 1 and Table 2.
Figure 2

The motor functions of the foot (a–c).

Table 1

The statistics of the motor function of left and right feet data.

Motor Function | Mean | Standard Deviation | Minimum | Maximum
Right foot flexion | 18.9° | 3.4° | 0° | 30°
Left foot flexion | 20.5° | 2.68° | 0° | 30°
Right foot extension | 40.7° | 5.67° | 0° | 50°
Left foot extension | 42.7° | 6.3° | 0° | 50°
Right foot pronation | 25.96° | 2.87° | 0° | 30°
Left foot pronation | 26.71° | 3.63° | 0° | 30°
Right foot supination | 51.71° | 5.73° | 0° | 60°
Left foot supination | 48.96° | 4.87° | 0° | 60°
Table 2

Dataset statistics (total samples of EEG signals: 2000 from 271 cases).

Foot Movement Associated with the EEG | Count
Right foot flexion | 222
Left foot flexion | 200
Right foot extension | 208
Left foot extension | 300
Right foot pronation | 250
Left foot pronation | 200
Right foot supination | 300
Left foot supination | 320

2.2. Preprocessing Task

EEG data were processed in Matlab with the toolboxes BraSig 2.3.0 and EEGProc 13.1.0 (Matlab Inc., Asheboro, NC, USA). The four preprocessing steps were as follows:
1. Removal of noisy channels: we removed channel AFz because it is impacted by eye blinks.
2. Removal of static outliers with ICA: using the EEG signal at 0.5–60 Hz to capture the outliers, we removed static outliers by applying a zero-phase band-pass filter with independent component analysis. We then concentrated the EEG channels with principal component analysis, keeping only the components that capture 98% of the variance of the data.
3. Detection of trials with transitory artefacts (EEG signal from 0.5–60 Hz): we identified transitory artefacts with EEGProc and marked trials for rejection when values fell below −90 μV or rose above 90 μV.
4. Removal of the static and transitory artefacts (computed in steps 2 and 3) from the EEG signal in the range of 0.5 Hz to 5 Hz [34,35,36,37].
A total of 120 trials were recorded for each patient. The data were divided into 70%, 15%, and 15% for training, validation, and testing, respectively.
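Steps 2–4 above can be sketched in Python with SciPy and scikit-learn (the original pipeline used Matlab toolboxes; the 250 Hz sampling rate and the synthetic data below are assumptions for illustration, with artifact rejection applied per epoch against the ±90 μV bound):

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.decomposition import PCA

FS = 250  # sampling rate in Hz (assumed, not stated in the text)

def reject_artifacts(epochs, limit_uv=90.0):
    """Keep only epochs whose amplitude stays inside the +/-90 uV band."""
    keep = np.all(np.abs(epochs) <= limit_uv, axis=(1, 2))
    return epochs[keep], keep

def bandpass(data, lo=0.5, hi=60.0, order=4):
    """Zero-phase Butterworth band-pass over the paper's 0.5-60 Hz range."""
    b, a = butter(order, [lo / (FS / 2), hi / (FS / 2)], btype="band")
    return filtfilt(b, a, data, axis=-1)

# Toy run: 10 epochs x 4 channels x 875 samples of synthetic "EEG"
rng = np.random.default_rng(0)
epochs = rng.normal(0.0, 10.0, (10, 4, 875))
epochs[3, 0, 100] = 200.0              # inject a transitory artifact

kept, mask = reject_artifacts(epochs)  # epoch 3 is dropped
filtered = bandpass(kept)
# Concentrate channels with PCA, keeping 98% of the variance
feats = PCA(n_components=0.98).fit_transform(
    filtered.transpose(0, 2, 1).reshape(-1, filtered.shape[1]))
print(kept.shape, feats.shape[0])
```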

3. Deep Learning Phase: The Proposed BSM-EEG Model

BSM-EEG is a deep learning model with a cascading transfer learning stage for handling EEG signals. It is trained on the EEG signals of healthy cases and the motor functions associated with them; in the prediction phase, the motor function is predicted from the EEG of brain-injured cases.

3.1. Methodology

Our methodology aims to achieve a transfer learning model from other deep learning models that are trained on other motor functions for brain-injured cases, namely upper limb movements (source domain 1) [7] and knee movements (source domain 2) [5]. Each source domain contains an average of 30,000 different labeled motor function EEGs. To do so, we employed several input domains to obtain suitable transfer learning models. Figure 3 displays the phases used to accomplish this objective. We can have several input domains; for each domain, an optimal deep neural network was attained via a Bayesian procedure. The optimization module output the parameters that were utilized to train the final deep learning model. The training data of the transfer learning model were chosen based on their prediction accuracy over the labeled target domain. The flow diagram of the proposed model is depicted in Figure 3.
Figure 3

Methodology to obtain deep learning models for transfer learning (the flow diagram of the BSM-EEG model).

The presented model comprises the following stages:
1. Transfer training in the source input domain using labeled upper-limb motor-function EEG signals. A deep neural network was trained to learn the EEG signals of upper-limb motor functions, and the structure of this network was optimized to realize higher accuracy.
2. Unsupervised training on the same dataset using non-labeled data items, including items not used in the supervised phase. We adjusted the pre-trained deep learning model from the first phase, reusing the same neural weights.
3. Fine-tuning in the target input domain using 271 labeled EEGs with their desired lower-limb motor functions.
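A minimal sketch of stages 1 and 3 (the unsupervised stage 2 is omitted), using scikit-learn's MLPClassifier as a stand-in for the paper's network; the feature dimension, layer sizes, and all data below are invented placeholders:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
# Stand-ins: a large labeled source domain (e.g., upper-limb EEG features)
# and the small labeled target domain (271 foot-movement EEGs, 8 classes).
X_src = rng.normal(size=(1000, 32)); y_src = np.arange(1000) % 8
X_tgt = rng.normal(size=(271, 32));  y_tgt = np.arange(271) % 8

# Stage 1: supervised pre-training on the source domain.
net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=30,
                    warm_start=True, random_state=0)
net.fit(X_src, y_src)

# Stage 3: fine-tuning on the small target domain; warm_start keeps the
# pre-trained weights so training continues rather than restarting.
net.set_params(max_iter=20, learning_rate_init=1e-4)
net.fit(X_tgt, y_tgt)
print(net.predict(X_tgt[:5]).shape)  # -> (5,)
```

Note that warm_start fine-tunes all weights; the paper instead fine-tunes only the fully connected layers, which in a CNN framework would correspond to freezing the convolutional blocks.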

3.2. Architecture

The suitable deep learning model and its weights are chosen by a Bayesian selection process [21]. Bayesian sampling of the convolutional weight space leads to higher accuracy. In a Bayesian optimization model, the parameters of the deep learning model are computed by optimizing an objective function, whose goal is to minimize the loss function of the deep learning model over the selection space. In this paper, we present several Bayesian procedures to obtain the deep learning model that achieves the best performance on the source input domains.

The training process proceeds as follows. The first phase trains an initial model with arbitrary preliminary parameters and optimizes a loss function for each source input domain. For each source domain, an optimized deep learning (DL) neural network is obtained via the Bayesian optimizer; during this process, the source input domain is divided into a training subset and a validation subset. Each DL model is then verified by its transfer learning performance on the target input domain. The Bayesian optimizer is applied to all source domain data, and the model with the highest performance is chosen by computing the performance metrics of the transfer learning functions. The last step validates the usefulness of the prior models for transfer learning by applying them to all target datasets while optimizing the loss function of the Bayesian optimizer. Here, a transfer learning operator takes a deep learning model trained with only a single source input domain and transfers its weights by fine-tuning the fully connected layers. The loss function is calculated as a weighted (w) average of the accuracy (acc) and the average loss over both the learning and validation processes. The output of this stage is a set of DL models equal in number to the source domains.
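One way to read this selection rule is as minimizing a blended score over candidate models. The weighting w is not specified in the text, so the equal-weight form below is an assumption, as are the candidate scores:

```python
def composite_loss(acc_train, acc_val, loss_train, loss_val, w=0.5):
    """Blend of (1 - mean accuracy) and mean loss across the learning and
    validation processes; w = 0.5 is an assumed weighting."""
    acc_term = 1.0 - (acc_train + acc_val) / 2.0
    loss_term = (loss_train + loss_val) / 2.0
    return w * acc_term + (1.0 - w) * loss_term

def select_model(candidates):
    """Pick the candidate with the smallest composite loss, standing in
    for the Bayesian optimizer's choice across source domains."""
    return min(candidates, key=lambda c: composite_loss(*c["scores"]))

# Hypothetical (acc_train, acc_val, loss_train, loss_val) per source model
cands = [
    {"name": "src1-model", "scores": (0.96, 0.91, 0.20, 0.31)},
    {"name": "src2-model", "scores": (0.93, 0.94, 0.18, 0.22)},
]
print(select_model(cands)["name"])  # -> src2-model (lower blended score)
```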
The architecture of transfer learning training and prediction from actual labeled clinical data is depicted in Figure 4.
Figure 4

(a) The architecture of the transfer learning training; (b) the architecture of prediction from actual labeled clinical data.

4. Results and the Prediction Performance

4.1. Training

The proposed model was trained on a Sun station CPU X6-3320 V2 @ 3.60 GHz × 16 with a 64-bit Linux operating system, as depicted in Table 3. The deep learning model was implemented in Python 3.6.0. The training method was to modify the filter weights so that the classified result is close to the labeled class. The utilized dataset was partitioned into three partitions: the training subset (70% of the dataset), the validation subset (15%), and the test subset (15%), the last used for testing the efficiency of the model. The Adam optimizer was employed for fine-tuning the neural weights to minimize the loss. Table 4 depicts the hyperparameters utilized for training.
Table 3

Environment.

Hardware
Processor | RAM
Sun station CPU X6-3320 V2 @ 3.60 GHz × 16 | 64 GB
Software
Operating system | Simulation environment
Linux | Python 3.4 and Matlab
Table 4

Hyperparameters utilized for training.

Stage | Hyperparameter | Value
First Convolution | Filters | 128
First Convolution | Kernel size | 5
First Convolution | Strides | 3
First Convolution | Average pooling | 8
Second Convolution | Filters | 256
Second Convolution | Kernel size | 4
Second Convolution | Average pooling | 4
Third Convolution | Filters | 512
Third Convolution | Kernel size | 2
Third Convolution | Max pooling | 2
Training Parameters | Learning rate | 0.2
Training Parameters | Epochs | 80
Training Parameters | Batch size | 26
Training Parameters | Optimizer | Adam
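As a consistency check on Table 4, the temporal dimension can be traced through the three convolution stages. The 875-sample input (3.5 s at an assumed 250 Hz) and the unit strides of stages 2 and 3 are assumptions; pooling is taken as non-overlapping:

```python
def conv_out(n, k, s=1):
    """Output length of a valid 1-D convolution."""
    return (n - k) // s + 1

def pool_out(n, p):
    """Output length of non-overlapping pooling with window p."""
    return n // p

n = 875                                   # assumed input epoch length
n = pool_out(conv_out(n, k=5, s=3), 8)    # stage 1: 128 filters -> 36 steps
n = pool_out(conv_out(n, k=4), 4)         # stage 2: 256 filters -> 8 steps
n = pool_out(conv_out(n, k=2), 2)         # stage 3: 512 filters -> 3 steps
print(n)  # -> 3 time steps x 512 filters feed the dense layers
```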

4.2. Experiment Setting

The experimental setting included determining the number of hidden layers of the DL model, the number of neurons in each layer, the number of epochs, and the learning rate. To define the construction of the neural structure, the hidden layers and the neurons in the different layers had to be defined. The results for various hidden layer numbers and neuron counts are depicted in Table 5 and displayed in Figure 5. The number of iterations was 1900.
Table 5

Prediction accuracy of various counts of neurons in convolutional layers.

Neuron Counts | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15
Layer 1 | 0.9256 | 0.9359 | 0.9363 | 0.9282 | 0.9461 | 0.9709 | 0.9726 | 0.9655
Layer 2 | 0.9665 | 0.9704 | 0.9729 | 0.9389 | 0.9509 | 0.9449 | 0.9366 | 0.9336
Layer 3 | 0.9449 | 0.9727 | 0.9652 | 0.9466 | 0.9529 | 0.9506 | 0.9437 | 0.9363
Layer 4 | 0.9406 | 0.9383 | 0.9372 | 0.9449 | 0.9277 | 0.9364 | 0.9333 | 0.9309
Layer 6 | 0.9304 | 0.9429 | 0.9361 | 0.9309 | 0.9309 | 0.9329 | 0.9309 | 0.9271
Figure 5

Prediction accuracy of various counts of neurons in convolutional layers.

The learning rate also impacts the accuracy of the neural network. We tested learning rates between 0.05 and 0.15 in steps of 0.02. The results for the different learning rates are depicted in Table 6 and displayed in Figure 6. The results show that the proposed model had the highest performance with a learning rate equal to 0.07.
Table 6

The impact of learning rate on performance.

Learning Rate | 0.05 | 0.07 | 0.09 | 0.11 | 0.13 | 0.15
Accuracy | 0.954 | 0.972 | 0.958 | 0.931 | 0.932 | 0.930
Figure 6

The impact of learning rate on performance.
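The sweep can be reproduced in outline as a validation-scored grid search over the learning-rate grid. Everything below (toy data, network size) is a placeholder; only the grid 0.05–0.15 in steps of 0.02 comes from the text:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)
X = rng.normal(size=(400, 16))
y = (X[:, 0] > 0).astype(int)            # synthetic separable labels
Xtr, Xva, ytr, yva = train_test_split(X, y, test_size=0.25, random_state=0)

best = None
for lr in np.arange(0.05, 0.1501, 0.02): # the grid reported in the paper
    clf = MLPClassifier(hidden_layer_sizes=(32,), learning_rate_init=lr,
                        max_iter=200, random_state=0).fit(Xtr, ytr)
    acc = clf.score(Xva, yva)            # validation accuracy for this rate
    if best is None or acc > best[1]:
        best = (round(float(lr), 2), acc)
print("best learning rate:", best[0])
```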

5. The Proposed Models with Transfer Learning from Different Domain Sources

5.1. Performance Metrics

To analyze the performance of the proposed model, several performance metrics were utilized, which proved the efficiency of the model in predicting foot movement from the EEG. The evaluation metrics were recall, F1-score, precision, and accuracy, defined as follows:

Precision = TP / (TP + FP)
Recall = TP / (TP + FN)
F1-score = 2 × Precision × Recall / (Precision + Recall)
Accuracy = (TP + TN) / (TP + TN + FP + FN)

where TP is the number of true positive predictions, TN the number of true negative predictions, FP the number of false positive predictions, and FN the number of false negative predictions. The classification accuracy, recall, and F1-score of our model are depicted in Table 7, which compares the performance metrics of our model with transfer learning from one and from two source domains.
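These definitions translate directly to code. The counts in the example are hypothetical, not taken from the paper's tables:

```python
def metrics(tp, tn, fp, fn):
    """Precision, recall, F1-score, and accuracy from the four counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return precision, recall, f1, accuracy

# Hypothetical counts for one movement class
p, r, f1, acc = metrics(tp=90, tn=880, fp=10, fn=20)
print(round(p, 2), round(r, 2), round(f1, 2), round(acc, 2))
# -> 0.9 0.82 0.86 0.97
```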
Table 7

Classification report of our model with transfer learning with one and two source domain model.

Columns 2–4: our model with transfer learning with one source domain; columns 5–7: our model with transfer learning with two source domains.

Predicted Movement | Precision | Recall | F1-score | Precision | Recall | F1-score
Right foot flexion | 0.9 | 0.95 | 0.9 | 0.97 | 0.99 | 0.96
Left foot flexion | 0.8 | 0.85 | 0.8 | 0.96 | 0.96 | 0.96
Right foot extension | 0.94 | 0.85 | 0.91 | 0.92 | 0.96 | 0.97
Left foot extension | 0.94 | 0.85 | 0.9 | 0.97 | 0.92 | 0.96
Right foot pronation | 0.89 | 0.93 | 0.91 | 0.96 | 0.94 | 0.97
Left foot pronation | 0.9 | 0.9 | 0.91 | 0.96 | 0.9 | 0.96
Right foot supination | 0.84 | 0.9 | 0.8 | 0.94 | 0.9 | 0.96
Left foot supination | 0.94 | 0.9 | 0.9 | 0.97 | 0.9 | 0.99

5.2. Confusion Matrix

The confusion matrices for predicting foot movement from the EEG are depicted in Table 8, Table 9 and Table 10, which display the true label (ground truth) on the y-axis and the predicted foot movement on the x-axis. The confusion matrices are for the proposed model without transfer learning (Table 8), the proposed model with transfer learning from one source domain (Table 9), and the proposed model with transfer learning from two source domains (Table 10).
Table 8

Confusion matrix for the proposed DL model without transfer learning.

Motor FunctionRight Foot Flexion Left Foot FlexionRight Foot ExtensionLeft Foot Extension Right Foot PronationLeft Foot PronationRight Foot SupinationLeft Foot SupinationTotal Cases
Right foot flexion 942503521200222
Left foot flexion3100433222135200
Right foot extension20510753021821208
Left foot extension1040 0150 10401139300
Right foot pronation2283010130103010250
Left foot pronation61911314110930200
Right foot supination21029106051705300
Left foot supination45104911305170320
Table 9

Confusion matrix for the proposed DL model with transfer learning with one source domain.

Motor FunctionRight Foot Flexion Left Foot FlexionRight Foot ExtensionLeft Foot Extension Right Foot PronationLeft Foot PronationRight Foot SupinationLeft Foot SupinationTotal Cases
Right foot flexion 1842103121100222
Left foot flexion11702102915200
Right foot extension8218071280208
Left foot extension280270 1937300
Right foot pronation101710220192250
Left foot pronation1725117527200
Right foot supination80921132652300
Left foot supination310111394280320
Table 10

Confusion matrix for the proposed DL model with transfer learning with two source domains.

Motor Function | Right Foot Flexion | Left Foot Flexion | Right Foot Extension | Left Foot Extension | Right Foot Pronation | Left Foot Pronation | Right Foot Supination | Left Foot Supination | Total Cases
Right foot flexion | 211 | 0 | 4 | 0 | 5 | 0 | 2 | 0 | 222
Left foot flexion | 0 | 195 | 0 | 1 | 1 | 2 | 0 | 1 | 200
Right foot extension | 2 | 0 | 200 | 0 | 3 | 1 | 2 | 0 | 208
Left foot extension | 0 | 1 | 0 | 295 | 0 | 2 | 0 | 2 | 300
Right foot pronation | 1 | 0 | 2 | 0 | 244 | 1 | 2 | 0 | 250
Left foot pronation | 0 | 1 | 0 | 2 | 0 | 196 | 0 | 1 | 200
Right foot supination | 2 | 0 | 1 | 1 | 2 | 0 | 292 | 2 | 300
Left foot supination | 0 | 1 | 1 | 2 | 0 | 1 | 0 | 315 | 320
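Per-class metrics can be read off any such confusion matrix by normalizing rows (recall) and columns (precision). A small sketch with an invented 3-class matrix, not taken from the tables above:

```python
import numpy as np

# Invented 3-class confusion matrix; rows = ground truth, cols = predicted
cm = np.array([[50, 2, 3],
               [4, 45, 1],
               [2, 3, 40]])

recall = np.diag(cm) / cm.sum(axis=1)     # per-class sensitivity (row-wise)
precision = np.diag(cm) / cm.sum(axis=0)  # per-class precision (column-wise)
overall = np.trace(cm) / cm.sum()         # overall accuracy (diagonal mass)
print(np.round(recall, 3), round(float(overall), 3))
```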

5.3. Time Complexity Versus Accuracy

In this research, it was essential to compute the time complexity of the deep learning model and how transfer learning affects the training time. Moreover, it was important to examine the tradeoff between the deep learning model alone and the model with transfer learning from one or more source domains. The results are presented in Table 11 and Table 12.
Table 11

Time complexity of the proposed model with and without transfer learning.

 | Our Model with Transfer Learning with One Source Domain | Our Model with Transfer Learning with Two Source Domains
Training CPU time (h) | 12:32 | 18:57
Classification time (s) | 119.9 | 90.3
Table 12

Performance comparison of the proposed model with and without transfer learning.

Model | Average Accuracy for All Motor Functions (%) | Average Training Time (h) | Average Classification Time (s)
Our model without transfer learning | 57.10 | 8.1 | 113.1
Our model with transfer learning with one source domain | 90.90 | 12.9 | 119.9
Our model with transfer learning with two source domains | 97.30 | 17.3 | 90.3

5.4. Performance Comparison of Different Models

The experiments played a major role in determining the hidden layers and the optimized neuron counts together with the corresponding learning rate. The selected parameters were applied to our proposed deep learning model. We comparatively evaluated our models against other DL models with transfer learning under the same parameter settings. The compared models were BP neural [13], TransferN [19], DLN [21], CNN [27], and STL [31]. The parameter settings were the same for all the compared models. Since transfer learning models need relatively lengthy training times, the training time and prediction time of the different models are shown in Table 13.
Table 13

Performance comparison.

Model | BP Neural | TransferN | DLN | STL | Our Model without Transfer Learning | Our Model with Transfer Learning with One Source Domain | Our Model with Transfer Learning with Two Source Domains
Acc | 0.6136 | 0.6443 | 0.6666 | 0.6611 | 0.5668 | 0.91 | 0.97
Time (s) | 64 | 106 | 113 | 132 | 120 | 119 | 90.3

6. Conclusions

The goal of this research was to decode left and right foot motor functions from EEG signals. The proposed deep learning model realized high prediction precision, which can lead to a better brain signal gain model (BSM) that can be employed in several limb assistive devices. The proposed research attained high accuracy by applying transfer learning from other source domains, such as the elbow and knee source input domains. Our method realized a higher accuracy of 97.4% by training on EEG signals of healthy cases performing foot motor functions. The presented classifier can be deployed in several classes of BSM as control signals for operative foot neuroprostheses. The research also concluded that the proposed BSM-EEG model, with cascading transfer learning and deep learning, can be competently employed on small input sizes. This research indicates that the presented model can transfer learning within the same pattern. The experimental results show that transfer learning should be incorporated into the paradigm of EEG processing. The BSM-EEG outperformed other state-of-the-art deep learning models in motor imagery detection. The experiments showed that a small-sized dataset can be used for training by incorporating feature extraction from other source domains. The mechanism of this study can be generalized by using n source domains instead of only two.
References (20 in total)

1.  Robot assisted gait training with active leg exoskeleton (ALEX).

Authors:  Sai K Banala; Seok Hun Kim; Sunil K Agrawal; John P Scholz
Journal:  IEEE Trans Neural Syst Rehabil Eng       Date:  2009-02       Impact factor: 3.802

2.  Automatic synchronization of functional electrical stimulation and robotic assisted treadmill training.

Authors:  Mark E Dohring; Janis J Daly
Journal:  IEEE Trans Neural Syst Rehabil Eng       Date:  2008-06       Impact factor: 3.802

3.  A novel deep learning approach for classification of EEG motor imagery signals.

Authors:  Yousef Rezaei Tabar; Ugur Halici
Journal:  J Neural Eng       Date:  2016-11-30       Impact factor: 5.379

Review 4.  BCI for stroke rehabilitation: motor and beyond.

Authors:  Ravikiran Mane; Tushar Chouhan; Cuntai Guan
Journal:  J Neural Eng       Date:  2020-08-17       Impact factor: 5.379

5.  Control of an electrical prosthesis with an SSVEP-based BCI.

Authors:  Gernot R Müller-Putz; Gert Pfurtscheller
Journal:  IEEE Trans Biomed Eng       Date:  2008-01       Impact factor: 4.538

6.  Hierarchical approaches to estimate energy expenditure using phone-based accelerometers.

Authors:  Harshvardhan Vathsangam; E Todd Schroeder; Gaurav S Sukhatme
Journal:  IEEE J Biomed Health Inform       Date:  2014-07       Impact factor: 5.772

Review 7.  Treadmill training is effective for ambulatory adults with stroke: a systematic review.

Authors:  Janaine C Polese; Louise Ada; Catherine M Dean; Lucas R Nascimento; Luci F Teixeira-Salmela
Journal:  J Physiother       Date:  2013-06       Impact factor: 7.000

8.  Walking after stroke: what does treadmill training with body weight support add to overground gait training in patients early after stroke?: a single-blind, randomized, controlled trial.

Authors:  Marco Franceschini; Stefano Carda; Maurizio Agosti; Roberto Antenucci; Daniele Malgrati; Carlo Cisari
Journal:  Stroke       Date:  2009-06-25       Impact factor: 7.914

9.  The Capacity of Generic Musculoskeletal Simulations to Predict Knee Joint Loading Using the CAMS-Knee Datasets.

Authors:  Zohreh Imani Nejad; Khalil Khalili; Seyyed Hamed Hosseini Nasab; Pascal Schütz; Philipp Damm; Adam Trepczynski; William R Taylor; Colin R Smith
Journal:  Ann Biomed Eng       Date:  2020-01-30       Impact factor: 3.934

10.  U-Limb: A multi-modal, multi-center database on arm motion control in healthy and post-stroke conditions.

Authors:  Giuseppe Averta; Federica Barontini; Vincenzo Catrambone; Sami Haddadin; Giacomo Handjaras; Jeremia P O Held; Tingli Hu; Eike Jakubowitz; Christoph M Kanzler; Johannes Kühn; Olivier Lambercy; Andrea Leo; Alina Obermeier; Emiliano Ricciardi; Anne Schwarz; Gaetano Valenza; Antonio Bicchi; Matteo Bianchi
Journal:  Gigascience       Date:  2021-06-18       Impact factor: 6.524

