
Frequency Band Analysis of Electrocardiogram (ECG) Signals for Human Emotional State Classification Using Discrete Wavelet Transform (DWT).

Murugappan Murugappan, Subbulakshmi Murugappan, Bong Siao Zheng.

Abstract

[Purpose] Intelligent emotion assessment systems have been highly successful in a variety of applications, such as e-learning, psychology, and psycho-physiology. This study aimed to assess five different human emotions (happiness, disgust, fear, sadness, and neutral) using heart rate variability (HRV) signals derived from an electrocardiogram (ECG). [Subjects] Twenty healthy university students (10 males and 10 females) with a mean age of 23 years participated in this experiment. [Methods] All five emotions were induced by audio-visual stimuli (video clips). ECG signals were acquired using 3 electrodes and were preprocessed using a Butterworth 3rd order filter to remove noise and baseline wander. The Pan-Tompkins algorithm was used to derive the HRV signals from ECG. Discrete wavelet transform (DWT) was used to extract statistical features from the HRV signals using four wavelet functions: Daubechies6 (db6), Daubechies7 (db7), Symmlet8 (sym8), and Coiflet5 (coif5). The k-nearest neighbor (KNN) and linear discriminant analysis (LDA) were used to map the statistical features into corresponding emotions.
[Results] KNN provided the maximum average emotion classification rate compared to LDA for five emotions (sadness - 50.28%; happiness - 79.03%; fear - 77.78%; disgust - 88.69%; and neutral - 78.34%).
[Conclusion] The results of this study indicate that HRV may be a reliable indicator of changes in the emotional state of subjects and provides an approach to the development of a real-time emotion assessment system with a higher reliability than other systems.


Keywords:  Discrete wavelet transform; Heart rate variability; Human emotions

Year:  2013        PMID: 24259846      PMCID: PMC3820413          DOI: 10.1589/jpts.25.753

Source DB:  PubMed          Journal:  J Phys Ther Sci        ISSN: 0915-5287


INTRODUCTION

Emotion is essential for our daily interaction with people and even with computers. Several studies have addressed the subject of emotions and their role in the development of Human-Computer Interaction (HCI) and Brain-Computer Interfaces (BCI)1, 2). Emotion directly affects our decision making, perception, cognition, creativity, attention, reasoning, and memory3). Several studies have reported using facial expression, speech, and gestures for assessing emotions. However, in these studies, the emotions were easily mimicked by the subjects and did not reflect their inherent emotional states4). In recent years, physiological signals measured by electrocardiography (ECG), electromyography (EMG), galvanic skin response (GSR), electroencephalography (EEG), respiration rate (RR), and other methods have been used to assess changes in subjects' emotional states in a more reliable and non-invasive manner2, 4, 9). Physiological signals reflect the inherent changes in physiological activity under different emotional states, and a subject cannot consciously control these activities. Studies often use three different methods to model emotions: valence (unpleasant or negative to pleasant or positive), arousal (drowsy or peaceful to excited or alert), and discrete mode (happiness, sadness, fear, anger, surprise, and disgust)7, 10, 11). Most studies have used valence-arousal-based emotional assessment with physiological signals because of its easier protocol design and simpler signal processing methods. Mapping discrete-mode emotions onto the valence-arousal model is very challenging because different emotions may overlap and the same stimulus may evoke multiple emotional experiences.
Emotion research to date has mainly focused on the field of psychology, yet its mechanisms span numerous other disciplines, such as physiology and psycho-physiology2). The challenge of an interdisciplinary research area is to standardize a common vocabulary and to develop the research framework that a mature discipline requires. However, this kind of research work faces several major limitations: (i) how to induce and measure the emotions; (ii) how to remove the effects of noise and artefacts from the physiological signals; and (iii) how to distinguish different emotions based on heart rate variability (HRV) signal characteristics. A physiological reaction (activation or arousal; e.g., an increase in heart rate) is a change in autonomic nervous system (ANS) activity that accompanies emotions. During positive (happiness, surprise) and negative emotions (sadness, fear, anger, and disgust), significant changes occur in the characteristics of the low frequency (LF; 0.03–0.12 Hz) and high frequency (HF; 0.12–0.488 Hz) bands of HRV signals12, 13). Emotions show a range of physiological manifestations that can be measured with a diverse array of techniques. Herbelin et al. used five physiological signals to assess emotions: skin conductivity level, EMG, skin temperature (ST), breathing frequency, and pulse rate (PR)10). Takahashi et al. used EEG, electrooculography (EOG), EMG, pulse oximetry, and skin conductance to measure different emotions, including joy, anger, sadness, happiness, and relaxation4). In addition, they collected EEG signals from 64 electrodes over the entire scalp and used them to assess five different emotions, achieving a maximum mean classification rate of 88.9% with the k-nearest neighbour (KNN) classifier4). Using ECG, Jing et al. applied discrete wavelet transform (DWT) and KNN to classify emotions and achieved a maximum average classification rate of 85.78%8).
Emotion assessment using multiple physiological signals usually increases the computational complexity (computation time and processor memory requirements) and limits the subject's freedom of movement during the experiment. Moreover, no study to date has performed a frequency analysis of HRV signals for discrete emotion classification. Therefore, this study aimed to analyze the different frequency ranges of HRV signals in order to classify emotions efficiently. We used audio-visual stimuli (video clips) to evoke five different emotions (happiness, disgust, fear, sadness, and neutral). A set of statistical features was derived using DWT over two frequency bands (LF: 0.03–0.12 Hz; HF: 0.12–0.488 Hz) extracted from the HRV signal. The statistical features were extracted using four wavelet functions: db6, db7, sym8, and coif5. These features were classified using two simple classifiers, namely KNN and linear discriminant analysis (LDA). Finally, we compared the classification rates of the two classifiers over the different wavelet functions.

SUBJECTS AND METHODS

This work began with data acquisition, followed by preprocessing, feature extraction, and emotion classification. In emotion assessment research, different types of stimuli for inducing emotions must be considered, such as audio (music clips/songs), visual (pictures/images), audio-visual (film clips/video clips), and emotional recall5, 6). Many studies have used audio-visual stimuli of shorter or longer durations to induce discrete emotions10, 11, 14, 15). In a study by Li et al., a "Tom and Jerry" cartoon video was used to induce the emotion of joy in subjects5). In this work, we used audio-visual stimuli (video clips) to induce five different emotions. A total of 50 video clips were collected for the five emotions, and each video clip had a different duration. We conducted 10 trials to induce each emotion, with each trial presenting five different emotion video clips. Before the video clips were played, instructions were given to the subjects to relax them. Between consecutive videos (stimuli), two images of nature scenes, such as hills, skies, ocean, or mountains, were displayed for 8 s each. This scenery presentation was used to dissipate any carry-over from the previous emotional stimulus before continuing to the next stimulus5). Figure 1 shows the protocol design for the first trial of this experiment. The duration of each video varied: X1 to X5 denote the time periods of the emotional stimuli, and the maximum and minimum periods were 60 s and 30 s, respectively. Some of the emotional stimuli were obtained from the Department of Psychology at Stanford University. The emotional clips for the remaining trials were arranged in a random order over the complete protocol.
Fig. 1. Audio-visual stimuli based data acquisition protocol of the first trial

ECG signals were collected from 20 university students (10 male and 10 female). All participants were in good health and had a mean age of 23 years. An AD Instruments data acquisition system was used to acquire the ECG signals through three electrodes at a sampling frequency of 1,000 Hz. The two active electrodes were placed on the left and right wrists, and the reference electrode was placed on the right ankle according to the Einthoven triangle7). Stimuli were shown to each subject on a liquid crystal display (LCD) projector screen after placement of the ECG electrodes. After each trial, the subjects were asked to complete a self-assessment form specifying the emotion experienced during each video clip and rating its intensity. The ECG signals were collected throughout the protocol without causing any discomfort to the subjects. ECG signals are often contaminated with different types of noise, such as power line interference, electrode impedance mismatch, baseline wander, and motion artefacts. According to Chavan et al., baseline wander and power line interference at 50 Hz or 60 Hz can be reduced in ECG signals using a 3rd order Butterworth filter16). In a study by Zhang et al., a set of low-pass and high-pass filters was used to remove baseline wander and power line interference17). In another study, a low-pass filter was used to eliminate peak noise from the ECG signals11). In the present study, we used a 3rd order Butterworth filter with cut-off frequencies of 0.002 Hz and 100 Hz to remove the effects of noise and baseline wander from the ECG signals. The HRV signals were derived from the ECG signals using the Pan-Tompkins algorithm37). Several feature extraction techniques have been proposed to extract statistical features from physiological signals, such as the Hilbert-Huang transform6) and DWT10, 11, 14, 17, 18). Min et al.
used conventional features, such as mean, standard deviation, median, minimum, maximum, maximum interval, maximum resolution, and spectrum average of the PQRST waves derived from the ECG signal for classifying emotions7). In this study, DWT was used to extract statistical features from HRV signals. DWT is a linear transform on the space L2(R) that gives a time-resolved description of a large variety of signals19). Jing et al. reported that the db6 wavelet has even symmetry and a shape similar to the QRS complex of an ECG signal8). Other wavelet functions, such as db7, sym8, and coif5, have also been used to decompose HRV signals and extract the LF and HF frequency bands used for emotion recognition20). The coif5 wavelet has also been used for R wave detection in ECG signals21). Based on the literature, a group of 14 wavelet functions from three wavelet families (Daubechies, Coiflets, and Symlets) is commonly used for decomposing HRV signals22). In addition, several types of wavelet function have been investigated in HRV analyses23,24,25,26,27). However, very few studies have classified emotions based on HRV signals using DWT23, 24). An initial set of analyses was conducted with the db6 mother wavelet function and then extended with the remaining three wavelet functions (db7, sym8, and coif5) for a performance comparison of the features extracted from HRV signals. These wavelet functions were chosen because the characteristics of the wavelet coefficients in the LF and HF bands resemble those of the mother wavelet function; feature extraction performs better when the coefficient characteristics closely match the mother wavelet. In DWT, a single prototype function called the mother wavelet is used to decompose the input signal based on scaling and shifting parameters28).
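A minimal Python sketch of the preprocessing chain described above (NumPy/SciPy assumed) is given below. The R-peak detector is a simplified squared-derivative threshold standing in for the full Pan-Tompkins algorithm, and the synthetic test signal is purely illustrative.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, find_peaks

FS = 1000  # ECG sampling rate used in the study (Hz)

def preprocess_ecg(ecg, fs=FS):
    """3rd-order Butterworth band-pass (0.002-100 Hz) to suppress
    baseline wander and high-frequency noise."""
    sos = butter(3, [0.002, 100.0], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, ecg)

def rr_intervals(ecg, fs=FS):
    """Simplified R-peak detector (stand-in for Pan-Tompkins):
    threshold the squared derivative, then pick peaks with a
    300 ms refractory period. Returns RR intervals in seconds."""
    energy = np.diff(ecg) ** 2
    peaks, _ = find_peaks(energy, height=4 * energy.mean(),
                          distance=int(0.3 * fs))
    return np.diff(peaks) / fs

# Synthetic ECG-like signal: 72 bpm spike train plus slow drift
t = np.arange(0, 10, 1 / FS)
ecg = 0.1 * np.sin(2 * np.pi * 0.05 * t)   # baseline wander
ecg[::int(FS * 60 / 72)] += 2.0            # R-wave spikes
rr = rr_intervals(preprocess_ecg(ecg))
print(round(rr.mean(), 3))                 # mean RR, ~0.833 s at 72 bpm
```

The RR interval sequence returned here is the raw material from which the HRV signal is formed before wavelet decomposition.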
The mother wavelet function Ψa,b(t) is given by

Ψa,b(t) = (1/√a) Ψ((t − b)/a),  a, b ∈ R, a > 0,   (1)

where 'a' is the scaling factor and 'b' is the shifting factor. A prototype function chosen as the mother wavelet must satisfy the admissibility condition

C_Ψ = ∫ |Ψ(ω)|² / |ω| dω < ∞,   (2)

where Ψ(ω) is the Fourier transform of Ψ(t). First, a decomposition tree of the wavelet transform is constructed using low-pass and high-pass filters to derive the frequency sub-bands of the input signal. A time-frequency representation is obtained by repeatedly filtering the signal with a pair of digital filters (low-pass and high-pass) that split the frequency band in half. The output of the low-pass filter is termed the approximation coefficient (A), and the output of the high-pass filter is termed the detail coefficient (D). The approximation coefficient is then divided again into new approximation and detail coefficients at the next level, and so on. The number of decomposition levels depends purely on the frequency range of interest in the input signal28). In this study, we performed 14 levels of decomposition of the input HRV signals to derive the wavelet coefficients and statistical features of the two frequency sub-bands (LF and HF) used for classifying emotions. In the literature, LF band oscillations are reported in the range of 0.04–0.15 Hz and HF band oscillations in the range of 0.15–0.4 Hz5, 9). The LF and HF ranges used in this study (0.03–0.12 Hz and 0.12–0.488 Hz) therefore extend slightly below these conventional ranges. In general, the very low frequency (VLF) band (0.004–0.04 Hz) does not carry meaningful information about emotional changes in HRV signals, and it was therefore not considered in this analysis.
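As a concrete illustration of the iterative filter-bank decomposition described above, the sketch below uses the Haar wavelet (chosen for brevity; the study used db6, db7, sym8, and coif5) and prints which frequency band each detail level covers at the study's 1,000 Hz sampling rate.

```python
import numpy as np

# Haar analysis filters (illustrative stand-in for db6/db7/sym8/coif5)
LO = np.array([1.0, 1.0]) / np.sqrt(2)   # low-pass  -> approximation (A)
HI = np.array([1.0, -1.0]) / np.sqrt(2)  # high-pass -> detail (D)

def dwt_levels(signal, levels=14):
    """Iteratively split off detail coefficients D1..D14; only the
    approximation is re-filtered at each level."""
    details, approx = [], np.asarray(signal, dtype=float)
    for _ in range(levels):
        details.append(np.convolve(approx, HI)[1::2])  # filter + downsample
        approx = np.convolve(approx, LO)[1::2]
    return details, approx

fs = 1000.0  # Hz
details, _ = dwt_levels(np.random.randn(2 ** 16), levels=14)
for j in range(1, 15):
    # Detail level Dj spans fs / 2^(j+1) .. fs / 2^j
    print(f"D{j:2d} covers {fs / 2 ** (j + 1):.4g}-{fs / 2 ** j:.4g} Hz")
# D13-D14 fall in the LF band (0.03-0.12 Hz); D11-D12 in the HF band.
```

Each level halves both the bandwidth and the number of coefficients, which is why 14 levels are needed before the detail band drops into the 0.03–0.06 Hz range.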
In HRV signal analysis, most studies have considered the average frequency band power and standard deviation as the main statistical features for several applications29). Therefore, we also considered these features in classifying emotions. Table 1 shows the description of statistical features and the mathematical formula for computation. The frequency bandwidth and corresponding decomposition level of the ECG signal is provided in Table 2.
Table 1.

Statistical features used for emotion recognition and their descriptions

Table 2.

Frequency bands of the HRV signal and their corresponding wavelet decomposition levels

| Frequency range (Hz) | Decomposition level | Frequency band |
| 0.03–0.06 | D14 | Low frequency |
| 0.06–0.12 | D13 | Low frequency |
| 0.12–0.24 | D12 | High frequency |
| 0.24–0.488 | D11 | High frequency |
| 0.488–0.9766 | D10 | Unused |
| 0.9766–1.953 | D9 | Unused |
| 1.953–3.90625 | D8 | Unused |
| 3.90625–7.8125 | D7 | Unused |
| 7.8125–15.625 | D6 | Unused |
| 15.625–31.25 | D5 | Unused |
| 31.25–62.5 | D4 | Unused |
| 62.5–125 | D3 | Unused |
| 125–250 | D2 | Unused |
| 250–500 | D1 | Unused |
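Given the detail levels assigned to the LF band (D13, D14) and the HF band (D11, D12) in Table 2, the two statistical features of Table 1 reduce to a few lines. The helper name and the random stand-in coefficients below are illustrative only.

```python
import numpy as np

def band_features(detail_coeffs):
    """Standard deviation and average band power (mean squared
    coefficient) of one frequency band, following Table 1. The band
    is the concatenation of its detail-coefficient levels."""
    c = np.concatenate([np.asarray(d, dtype=float) for d in detail_coeffs])
    return {"std": c.std(), "power": np.mean(c ** 2)}

rng = np.random.default_rng(0)                                  # stand-in coefficients
lf = band_features([rng.normal(size=8), rng.normal(size=4)])    # D14 + D13
hf = band_features([rng.normal(size=16), rng.normal(size=32)])  # D12 + D11
ratio = hf["power"] / lf["power"]   # HF/LF ratio feature (Tables 3-4)
total = lf["power"] + hf["power"]   # LF+HF sum feature (Tables 3-4)
```

The four feature groups compared in Tables 3 and 4 (LF, HF, HF/LF ratio, LF+HF sum) are all combinations of these two per-band quantities.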
Several types of classifiers have been used to classify emotions from physiological signal features, such as artificial neural networks, KNN, LDA, fuzzy clustering, the multilayer perceptron neural network, and the support vector machine (SVM)6, 7, 11, 19, 30). Murugappan et al. used LDA and KNN to classify five emotions based on EEG signals11). In another study, fuzzy c-means (FCM) and fuzzy k-means (FKM) clustering methods were used to classify the four most dominant emotions14). In this study, we used LDA and KNN to classify the emotions based on HRV signal features. KNN-based classification is a very simple yet powerful method. The key idea behind KNN classification is that similar observations belong to similar classes: one looks up the class labels of a certain number of nearest neighbours and takes a majority vote to assign a class to the unknown sample30). In this study, a range of K values (K = 2–5) was assessed and the results were documented for comparison. The value of K at which the maximum classification accuracy is achieved is considered the optimal value for emotion classification. LDA is also a classification method used to discriminate between two or more groups of samples. The group discrimination can be defined either naturally by the problem under investigation or by some preceding analysis, such as a cluster analysis31). The number of groups is not restricted to two, although discrimination between two groups is the most common approach31). Beyond the training and testing samples, LDA does not require any external parameters for classifying discrete emotions32). Researchers have proposed extended versions of LDA, such as pseudo-inverse LDA and Kalman adaptive LDA, to avoid singularity problems and to adjust the weights adaptively33, 34).
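The KNN voting scheme described above can be sketched with NumPy alone. The two-dimensional toy clusters stand in for the wavelet features, and the class labels are arbitrary.

```python
import numpy as np

def knn_predict(train_x, train_y, x, k=5):
    """Assign x the majority class among its k nearest training
    samples (Euclidean distance); K=5 gave the best accuracy here."""
    nearest = train_y[np.argsort(np.linalg.norm(train_x - x, axis=1))[:k]]
    labels, counts = np.unique(nearest, return_counts=True)
    return labels[np.argmax(counts)]

# Toy clusters standing in for two emotions' feature distributions
rng = np.random.default_rng(1)
X = np.vstack([rng.normal([0, 0], 0.3, size=(50, 2)),   # class 0
               rng.normal([2, 2], 0.3, size=(50, 2))])  # class 1
y = np.array([0] * 50 + [1] * 50)
print(knn_predict(X, y, np.array([1.9, 2.1])))  # -> 1
```

Because the vote is taken over raw distances, the method needs no training phase beyond storing the labelled feature vectors, which is what makes it attractive for small physiological data sets.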
One statistical feature was extracted from each emotion detected in 20 subjects over 10 trials and then concatenated to form a feature vector of 1,000 samples in size. This feature vector was split into two parts: the training and testing vectors. Seventy percent (700 out of 1,000) of samples in the feature vector were considered as the training set and the remaining 30% (300 out of 1,000) of samples were used in the testing set for this emotion classification.
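The 70/30 split can be expressed directly. Random shuffling before the split is an assumption here, since the paper does not state how samples were assigned to the two sets.

```python
import numpy as np

# 1,000 feature samples (20 subjects x 10 trials x 5 emotions) split
# 70/30 into training and testing sets, as described above.
rng = np.random.default_rng(42)
idx = rng.permutation(1000)
train_idx, test_idx = idx[:700], idx[700:]
print(train_idx.size, test_idx.size)  # 700 300
```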

RESULTS

Standard deviation and frequency band power were used to discriminate among the emotions extracted from HRV signals using the LDA and KNN classifiers. Tables 3 and 4 show the classification accuracy of emotions from HRV signal features using these classifiers. Among the different values of K (2 to 5), we achieved the maximum classification accuracy with K = 5. Of the statistical features, the standard deviation and the sum of the average power of the LF and HF bands produced the highest emotion classification rates. In our overall analysis, HF band features alone did not give good classification accuracy for any of the emotions. However, the combined frequency range of the LF and HF bands (0.03–0.49 Hz) gave the maximum classification rate for distinguishing the disgust and neutral emotions, and standard deviation gave good discrimination between happiness and fear. We evaluated four different wavelet functions to extract the emotion-relevant features from HRV signals: db6, db7, sym8, and coif5. As shown in Tables 3 and 4, coif5 performed better than the other wavelet functions for three emotions (sadness, happiness, and disgust), db7 performed best for fear, and sym8 performed best for the neutral emotion. Changes in emotional state based on the HRV signal are efficiently captured by the coif5 wavelet function because of its waveform matching and symmetry. This wavelet function gave the maximum classification rates of 88.89% for disgust, 79.03% for happiness, and 50.28% for sadness. In addition, a classification rate of 78.34% for the neutral emotion was achieved using the total frequency band power (LF + HF) derived from the sym8 wavelet function. Therefore, selecting appropriate wavelet functions is very important for efficient emotion discrimination in this type of research.
Table 3.

Averaged classification accuracy of emotions using KNN (K=5)

| Frequency band | Statistical feature | Wavelet | Disgust | Sad | Happy | Fear | Neutral | Average accuracy (%) |
| Low-frequency band (0.03–0.12 Hz) | Standard deviation | db6 | 74.59 | 31.25 | 74.72 | 66.81 | 74.87 | 64.45 |
| | | db7 | 76.95 | 31.53 | 71.95 | 77.78 | 71.53 | 65.95 |
| | | sym8 | 71.67 | 29.86 | 76.53 | 75.70 | 75.14 | 65.90 |
| | | coif5 | 72.78 | 31.53 | 79.03 | 75.00 | 76.39 | 66.95 |
| | Total average power | db6 | 71.11 | 31.53 | 71.95 | 68.06 | 77.36 | 64.00 |
| | | db7 | 73.61 | 32.09 | 77.22 | 72.09 | 75.98 | 66.20 |
| | | sym8 | 74.03 | 31.25 | 76.53 | 73.34 | 73.48 | 65.73 |
| | | coif5 | 76.67 | 30.56 | 76.53 | 73.89 | 76.12 | 66.75 |
| High-frequency band (0.12–0.49 Hz) | Standard deviation | db6 | 57.50 | 27.50 | 61.25 | 60.84 | 64.59 | 54.34 |
| | | db7 | 59.17 | 24.17 | 56.12 | 60.00 | 58.06 | 51.50 |
| | | sym8 | 72.22 | 26.81 | 65.28 | 62.78 | 68.19 | 59.06 |
| | | coif5 | 65.56 | 25.84 | 61.81 | 68.75 | 61.67 | 56.73 |
| | Total average power | db6 | 57.50 | 25.14 | 62.50 | 62.78 | 62.78 | 54.14 |
| | | db7 | 56.39 | 24.45 | 59.03 | 58.47 | 57.78 | 51.22 |
| | | sym8 | 71.39 | 28.20 | 66.81 | 63.20 | 63.61 | 58.64 |
| | | coif5 | 61.95 | 25.98 | 60.42 | 66.53 | 64.45 | 55.87 |
| Ratio | HF/LF | db6 | 71.53 | 26.25 | 75.14 | 62.92 | 68.89 | 60.95 |
| | | db7 | 72.09 | 27.36 | 66.81 | 67.50 | 70.00 | 60.75 |
| | | sym8 | 64.03 | 27.36 | 71.53 | 73.06 | 74.59 | 62.11 |
| | | coif5 | 69.03 | 27.64 | 73.61 | 74.45 | 76.11 | 64.17 |
| Sum | LF+HF | db6 | 86.53 | 31.81 | 73.61 | 65.56 | 72.08 | 65.92 |
| | | db7 | 82.50 | 31.95 | 74.45 | 67.50 | 72.37 | 65.75 |
| | | sym8 | 76.12 | 30.70 | 76.39 | 70.83 | 78.34 | 66.48 |
| | | coif5 | 88.89 | 30.42 | 75.28 | 72.36 | 72.09 | 67.81 |
Table 4.

Averaged classification accuracy of emotions using LDA

| Frequency band | Statistical feature | Wavelet | Disgust | Sad | Happy | Fear | Neutral | Average accuracy (%) |
| Low-frequency band (0.03–0.12 Hz) | Standard deviation | db6 | 68.48 | 48.75 | 71.81 | 70.70 | 71.11 | 66.17 |
| | | db7 | 75.14 | 47.08 | 74.03 | 75.83 | 71.25 | 68.66 |
| | | sym8 | 76.11 | 48.33 | 72.78 | 74.86 | 76.53 | 69.72 |
| | | coif5 | 74.31 | 48.75 | 74.17 | 74.31 | 77.23 | 69.75 |
| | Total average power | db6 | 65.56 | 48.75 | 71.67 | 73.33 | 67.50 | 65.36 |
| | | db7 | 69.73 | 46.39 | 70.56 | 72.64 | 68.47 | 65.56 |
| | | sym8 | 71.11 | 48.89 | 72.64 | 72.50 | 72.22 | 67.47 |
| | | coif5 | 71.67 | 50.28 | 71.67 | 73.20 | 71.39 | 67.64 |
| High-frequency band (0.12–0.49 Hz) | Standard deviation | db6 | 57.36 | 43.89 | 64.17 | 55.00 | 63.20 | 56.72 |
| | | db7 | 55.56 | 43.75 | 60.56 | 57.50 | 59.17 | 55.31 |
| | | sym8 | 59.86 | 42.50 | 61.39 | 61.53 | 63.75 | 57.81 |
| | | coif5 | 58.89 | 43.75 | 59.45 | 61.81 | 67.22 | 58.22 |
| | Total average power | db6 | 55.70 | 42.78 | 60.00 | 53.34 | 62.36 | 54.84 |
| | | db7 | 55.70 | 41.67 | 56.39 | 56.95 | 57.50 | 53.64 |
| | | sym8 | 56.95 | 39.59 | 58.06 | 57.92 | 64.03 | 55.31 |
| | | coif5 | 56.39 | 44.44 | 57.78 | 56.53 | 61.11 | 55.25 |
| Ratio | HF/LF | db6 | 73.89 | 43.62 | 70.97 | 66.67 | 78.62 | 66.75 |
| | | db7 | 74.03 | 47.78 | 69.03 | 70.70 | 73.61 | 67.03 |
| | | sym8 | 75.28 | 44.31 | 72.09 | 71.67 | 71.53 | 66.98 |
| | | coif5 | 77.92 | 45.56 | 71.53 | 74.58 | 72.36 | 68.39 |
| Sum | LF+HF | db6 | 63.75 | 47.23 | 73.89 | 70.84 | 66.67 | 64.48 |
| | | db7 | 63.06 | 46.67 | 70.97 | 72.50 | 69.31 | 64.50 |
| | | sym8 | 69.86 | 48.47 | 74.72 | 69.86 | 68.48 | 66.28 |
| | | coif5 | 65.70 | 49.31 | 72.64 | 71.81 | 72.22 | 66.34 |
Among the five emotions, KNN performed better than LDA for four: happiness, fear, disgust, and neutral. The maximum classification rate for sadness, however, was achieved using LDA (50.28%, with features extracted by the coif5 wavelet function). Even so, sadness had the lowest classification accuracy of all five emotions, indicating that the audio-visual stimuli used to induce sadness did not evoke a strong emotional response in the subjects in this experiment.

DISCUSSION

In this study, most of the emotional features had markedly overlapping characteristics, so a linear boundary could not separate the emotions. Consequently, the LDA classifier had poorer accuracy than KNN for most classes. The KNN classifier classifies emotions based on a voting scheme and the value of K. Several studies have followed a trial-and-error approach to choosing an appropriate value of K, but few have determined an effective value of K through artificial intelligence approaches34). The performance of KNN classification also depends on the size of the feature vector: larger feature vectors result in poorer classification rates, so an optimal feature vector size is critical for achieving good classification accuracy34). Although emotional perception varies widely across a large population of normal subjects, KNN provided a maximum classification rate of 75% or higher for most of the classes. The LDA classifier is simple to use, has low computational requirements, and gives good results in several classification applications35). However, due to its linear nature, LDA is not optimal for nonlinear physiological data; its greatest limitation is that it only allows linear or quadratic relationships between the input and output33, 34). In a study by Abdallah et al., audio-visual stimuli (film clips) were used to evoke the three most common emotions in a group of subjects: pleasant, unpleasant, and calm23). In their study, the maximum mean classification rate achieved using two physiological signals (HRV and GSR) was 80.2%. More recently, a study classified six basic emotions using ECG signals and achieved a maximum mean classification rate of 61.4% using visual stimuli24).
In another study, multiple physiological signals (BVP, EMG, and RR) were used to classify visual stimuli-induced emotional states into three types: pleasure, non-pleasure, and neutral36). Although visual stimuli have mainly been used to induce emotions in previous studies, they achieve a lower classification rate than audio-visual stimuli. For HRV signals, no study had yet analyzed the individual frequency bands for classifying emotions with DWT. Localization of the specific frequency range in ECG signals is important for reducing the computation time and complexity of emotion assessment. Furthermore, there is currently no universal emotion database, and many studies have focused on developing their own databases for emotion classification. In this study, we achieved maximum average classification rates of 67.81% using KNN and 69.75% using LDA for classifying five different emotional states. For both classifiers, coif5 provided the maximum mean classification rate, and the mean classification rates of the different wavelet functions did not vary drastically. We found that LF band features provided more useful information for distinguishing emotions than the HF, LF+HF, and frequency-band-ratio features. Compared with previous studies, the method investigated here used efficient emotional stimuli to induce different emotions, and its simple signal processing and classification methods are highly useful for reducing computational complexity and time. Based on our current understanding, mapping discrete emotions onto a two-dimensional (valence-arousal) plane is complex, because emotional perception differs between subjects and each subject's emotional experience of the same stimulus varies over time. In addition, estimating physiological activity through multiple physiological signals may give good results for emotion classification.
However, handling multiple signal features increases the computation time and complexity of the signal processing method. Therefore, the search continues for a single salient physiological signal that can efficiently classify multiple emotions. We used audio-visual stimuli to induce the five most dominant emotions in 20 subjects in order to assess changes in their emotional state using ECG signals. The ECG signals were preprocessed using a 3rd order Butterworth filter, and DWT was used to extract the statistical features with four different wavelet functions. The extracted features were mapped to the corresponding emotions using the KNN and LDA classifiers. Using this approach, we extracted the standard deviation and average frequency band power from the LF and HF frequency bands. Based on these results, the sum of the LF and HF frequency band powers and the standard deviation performed better than the other statistical features for classifying emotions. A maximum classification rate of 88.89% was achieved for disgust using KNN. The experimental results also indicated that the KNN classifier performed better than LDA at classifying most of the emotions. Among the five emotions, sadness achieved the lowest classification rate, indicating that the stimulus used to evoke sadness in this experiment was not efficient. The selection of wavelet function, emotional stimuli, and statistical features plays a major role in achieving a good emotion classification rate. In a study by Cong et al., four physiological signals (ECG, EMG, SC, and respiration) were used to classify four emotional states, achieving a maximum mean classification rate of 76%38). Real-time emotion recognition using ECG signals was also developed by Kanlaya et al., achieving a mean classification rate of 61.44%24).
Another study achieved a maximum mean classification rate of 97.8% when only two emotions (happiness and sadness) were classified39). The method investigated in this study used a single physiological signal to classify five emotions and achieved a mean classification rate of 66.48% using simple classifiers. This analysis was performed offline using the MATLAB signal processing toolbox. In future studies, we plan to utilize ECG signal morphology features, such as QRS complexity, R-R peak detection, S-T time interval, R wave detection, R-R peak amplitude, and QRS slope deviation, to examine the correlation between ECG signals and specific emotions.
REFERENCES

1.  Emotion recognition system using short-term monitoring of physiological signals.

Authors:  K H Kim; S W Bang; S R Kim
Journal:  Med Biol Eng Comput       Date:  2004-05       Impact factor: 2.602

2.  Psychophysiological responses to robotic rehabilitation tasks in stroke.

Authors:  Domen Novak; Jaka Ziherl; Andrej Olensek; Maja Milavec; Janez Podobnik; Matjaz Mihelj; Marko Munih
Journal:  IEEE Trans Neural Syst Rehabil Eng       Date:  2010-04-12       Impact factor: 3.802

3.  A review of classification algorithms for EEG-based brain-computer interfaces.

Authors:  F Lotte; M Congedo; A Lécuyer; F Lamarche; B Arnaldi
Journal:  J Neural Eng       Date:  2007-01-31       Impact factor: 5.379

4.  Using neural network to recognize human emotions from heart rate variability and skin resistance.

Authors:  Chung Lee; S K Yoo; Yoonj Park; Namhyun Kim; Keesam Jeong; Byungchae Lee
Journal:  Conf Proc IEEE Eng Med Biol Soc       Date:  2005

5.  Emotion recognition based on physiological changes in music listening.

Authors:  Jonghwa Kim; Elisabeth André
Journal:  IEEE Trans Pattern Anal Mach Intell       Date:  2008-12       Impact factor: 6.226

6.  Emotion classification based on gamma-band EEG.

Authors:  Mu Li; Bao-Liang Lu
Journal:  Conf Proc IEEE Eng Med Biol Soc       Date:  2009

7.  A real-time QRS detection algorithm.

Authors:  J Pan; W J Tompkins
Journal:  IEEE Trans Biomed Eng       Date:  1985-03       Impact factor: 4.538

8.  Reference signal extraction from corrupted ECG using wavelet decomposition for MRI sequence triggering: application to small animals.

Authors:  Dima Abi-Abdallah; Eric Chauvet; Latifa Bouchet-Fakri; Alain Bataillard; André Briguet; Odette Fokapu
Journal:  Biomed Eng Online       Date:  2006-02-20       Impact factor: 2.819

