| Literature DB >> 35204415 |
Gopal Chandra Jana ¹, Anupam Agrawal ¹, Prasant Kumar Pattnaik ², Mangal Sain ³.
Abstract
Brain Computer Interface technology provides a pathway for analyzing EEG signals for seizure detection. EEG signal decomposition, feature extraction and machine learning techniques are well established in seizure detection. However, the choice of decomposition technique and the concatenation of the resulting features remain open research questions. This work proposes a DWT-EMD feature-level fusion-based seizure detection approach over multi- and single-channel EEG signals, and studies the usability of fused discrete wavelet transform (DWT) and empirical mode decomposition (EMD) features relative to individual DWT and EMD features using four classifiers: SVM, SVM with RBF kernel, decision tree and bagging classifier. All classifiers achieved improved performance with DWT-EMD feature-level fusion on two benchmark seizure detection EEG datasets. Detailed quantitative results are given in the Results section.
Keywords: EEG classification; discrete wavelet transform; electroencephalogram; empirical mode decomposition; seizure detection
Year: 2022 PMID: 35204415 PMCID: PMC8871311 DOI: 10.3390/diagnostics12020324
Source DB: PubMed Journal: Diagnostics (Basel) ISSN: 2075-4418
Figure 1. Coefficients of the five-level DWT applied to the experimental EEG signals. (a) shows the approximation coefficients, and (b–f) show the detail coefficients of the level-5 DWT.
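The five-level decomposition behind Figure 1 can be sketched with a hand-rolled Haar DWT. The excerpt does not name the mother wavelet, so Haar is assumed purely for illustration; in practice a library such as PyWavelets (`pywt.wavedec`) would be used:

```python
import numpy as np

def haar_dwt_level(x):
    """One level of the Haar DWT: returns (approximation, detail)."""
    x = np.asarray(x, dtype=float)
    if len(x) % 2:                            # pad odd-length signals
        x = np.append(x, x[-1])
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)    # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)    # detail coefficients
    return a, d

def haar_dwt(x, levels=5):
    """Iterate the one-level transform: [cA5, cD5, cD4, ..., cD1]."""
    details = []
    a = np.asarray(x, dtype=float)
    for _ in range(levels):
        a, d = haar_dwt_level(a)
        details.append(d)
    return [a] + details[::-1]

signal = np.sin(np.linspace(0, 8 * np.pi, 512))
coeffs = haar_dwt(signal, levels=5)   # cA5 plus five detail bands, as in Figure 1(a-f)
print([len(c) for c in coeffs])       # [16, 16, 32, 64, 128, 256]
```

Because the Haar basis is orthonormal, the total energy of the coefficients equals that of the input signal, which is a quick sanity check on any DWT implementation.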
Figure 2. Five IMFs of EMD applied to record chb01_01 of Dataset-2. IMF0, IMF1, IMF2, IMF3 and IMF4 are shown in (a–e), respectively.
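Full EMD extracts each IMF by a sifting process with cubic-spline envelopes and a formal stopping criterion. The sketch below is a deliberately simplified single-IMF sift using linear envelopes (`np.interp`), only to illustrate the idea behind Figure 2, not the paper's actual implementation:

```python
import numpy as np

def local_extrema(x):
    """Indices of interior local maxima and minima."""
    dx = np.diff(x)
    maxima = np.where((dx[:-1] > 0) & (dx[1:] < 0))[0] + 1
    minima = np.where((dx[:-1] < 0) & (dx[1:] > 0))[0] + 1
    return maxima, minima

def sift_imf(x, n_sifts=10):
    """Extract one IMF by repeatedly subtracting the mean envelope.
    Linear interpolation stands in for the cubic splines of full EMD."""
    h = np.asarray(x, dtype=float)
    idx = np.arange(len(h))
    for _ in range(n_sifts):
        mx, mn = local_extrema(h)
        if len(mx) < 2 or len(mn) < 2:        # too few extrema to sift
            break
        upper = np.interp(idx, mx, h[mx])     # upper envelope
        lower = np.interp(idx, mn, h[mn])     # lower envelope
        h = h - (upper + lower) / 2.0         # remove the local mean
    return h

t = np.linspace(0, 1, 1000)
x = np.sin(2 * np.pi * 40 * t) + np.sin(2 * np.pi * 4 * t)  # fast + slow tone
imf0 = sift_imf(x)        # first IMF: roughly the 40 Hz component
residue = x - imf0        # roughly the 4 Hz component
```

Subsequent IMFs would be obtained by sifting the residue again, from the fastest oscillation down to the slowest, which is the ordering IMF0–IMF4 shown in Figure 2.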
Mathematical formulae of the considered features.
| Considered Features | Mathematical Representation | Equation No. |
|---|---|---|
| Mean | $\mu = \frac{1}{N}\sum_{i=1}^{N} x_i$ | (2) |
| Variance | $\sigma^2 = \frac{1}{N}\sum_{i=1}^{N} (x_i - \mu)^2$ | (3) |
| Standard deviation | $\sigma = \sqrt{\frac{1}{N}\sum_{i=1}^{N} (x_i - \mu)^2}$ | (4) |
| Root Mean Square (RMS) | $\mathrm{RMS} = \sqrt{\frac{1}{N}\sum_{i=1}^{N} x_i^2}$ | (5) |
| Curve length | $CL = \sum_{i=1}^{N-1} \lvert x_{i+1} - x_i \rvert$ | (6) |
| Minima | $\min_{1 \le i \le N} x_i$ | (7) |
| Skewness | $S = \frac{1}{N}\sum_{i=1}^{N} \left(\frac{x_i - \mu}{\sigma}\right)^3$ | (8) |
| Kurtosis | $K = \frac{1}{N}\sum_{i=1}^{N} \left(\frac{x_i - \mu}{\sigma}\right)^4$ | (9) |

In Equations (2)–(9), $x_i$ denotes the $i$-th sample of an $N$-sample signal segment; $\mu$, $\sigma^2$ and $\sigma$ denote its mean, variance and standard deviation, respectively.
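Assuming each segment is an N-sample array and population (rather than sample-corrected) statistics, the eight features of Equations (2)–(9) map directly to numpy:

```python
import numpy as np

def extract_features(x):
    """Eight per-segment features, Equations (2)-(9): mean, variance,
    standard deviation, RMS, curve length, minima, skewness, kurtosis."""
    x = np.asarray(x, dtype=float)
    mu = x.mean()                              # Eq. (2)
    var = x.var()                              # Eq. (3), population variance
    sd = x.std()                               # Eq. (4)
    rms = np.sqrt(np.mean(x ** 2))             # Eq. (5)
    curve_length = np.sum(np.abs(np.diff(x)))  # Eq. (6)
    minima = x.min()                           # Eq. (7)
    z = (x - mu) / sd                          # standardized samples
    skewness = np.mean(z ** 3)                 # Eq. (8)
    kurtosis = np.mean(z ** 4)                 # Eq. (9), non-excess kurtosis
    return np.array([mu, var, sd, rms, curve_length, minima,
                     skewness, kurtosis])

feats = extract_features(np.sin(np.linspace(0, 2 * np.pi, 256)))
```

Whether the authors use population or sample-corrected variance, and plain or excess kurtosis, is not stated in this excerpt; the choices above are assumptions.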
Figure 3Illustrative diagram of the proposed approach.
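The fusion step of Figure 3 can be sketched as follows, using synthetic feature matrices. The column counts (8 features per DWT sub-band over 6 bands, 8 per IMF over 5 IMFs) are illustrative assumptions, and scikit-learn is assumed because the hyperparameter names in the result tables match its API:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_segments = 200
# Hypothetical per-segment feature matrices standing in for real DWT/EMD features.
dwt_feats = rng.normal(size=(n_segments, 48))   # 6 sub-bands x 8 features
emd_feats = rng.normal(size=(n_segments, 40))   # 5 IMFs x 8 features
labels = rng.integers(0, 2, size=n_segments)    # seizure / non-seizure

# Feature-level fusion: concatenate the two feature matrices column-wise.
fused = np.hstack([dwt_feats, emd_feats])       # shape (200, 88)

X_tr, X_te, y_tr, y_te = train_test_split(fused, labels,
                                          test_size=0.3, random_state=0)

# Hyperparameters taken from the result tables; the estimator is passed
# positionally to BaggingClassifier to work across scikit-learn versions.
classifiers = {
    "SVM": SVC(),
    "SVM + RBF": SVC(C=100, kernel="rbf"),
    "Decision Tree": DecisionTreeClassifier(criterion="gini", max_depth=4),
    "Bagging": BaggingClassifier(DecisionTreeClassifier(),
                                 n_estimators=300, max_samples=0.5),
}
for name, clf in classifiers.items():
    clf.fit(X_tr, y_tr)
    print(name, clf.score(X_te, y_te))
```

With random features the scores hover around chance; on real DWT/EMD features of the two datasets the paper reports the accuracies tabulated below.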
Estimated performance over Dataset-1 under case-1.
| Classifier Used | Best Performance with Hyperparameters | Accuracy * | F1 Score * | MCC * |
|---|---|---|---|---|
| SVM + RBF | C = 100, kernel = ‘rbf’ | 82.85 | 64.28 | 85.71 |
| SVM | default | 82.85 | 83.33 | 70.71 |
| Decision Tree | criterion = ‘gini’, max_depth = 4 | 80.00 | 55.55 | 11.78 |
| Bagging Classifier | base_estimator = dt, n_estimators = 300, max_samples = 0.5 | 80.00 | 82.92 | 58.92 |
* All values are percentages.
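The three metrics reported in these tables can be reproduced from a binary confusion matrix; `binary_metrics` below is a hypothetical helper, not from the paper:

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Accuracy, F1 score and Matthews correlation coefficient, in %."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))   # true positives
    tn = np.sum((y_true == 0) & (y_pred == 0))   # true negatives
    fp = np.sum((y_true == 0) & (y_pred == 1))   # false positives
    fn = np.sum((y_true == 1) & (y_pred == 0))   # false negatives
    acc = (tp + tn) / len(y_true)
    f1 = 2 * tp / (2 * tp + fp + fn)
    mcc = (tp * tn - fp * fn) / np.sqrt(
        float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return 100 * acc, 100 * f1, 100 * mcc

acc, f1, mcc = binary_metrics([1, 1, 0, 0, 1, 0], [1, 0, 0, 0, 1, 1])
```

Note that MCC, unlike accuracy and F1, is naturally a correlation in [-1, 1]; reporting it as a percentage (as these tables do) simply scales it by 100.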
Estimated performance over Dataset-2 under case-1.
| Classifier Used | Best Performance with Hyperparameters | Accuracy * | F1 Score * | MCC * |
|---|---|---|---|---|
| SVM + RBF | C = 100, kernel = ‘rbf’ | 99.79 | 99.80 | 99.58 |
| SVM | default | 98.95 | 98.99 | 97.93 |
| Decision Tree | criterion = ‘gini’, max_depth = 4 | 99.37 | 99.40 | 97.93 |
| Bagging Classifier | base_estimator = dt, n_estimators = 300, max_samples = 0.5 | 99.37 | 99.40 | 98.74 |
* All values are percentages.
Estimated performance over Dataset-1 under case-2.
| Classifier Used | Best Performance with Hyperparameters | Accuracy * | F1 Score * | MCC * |
|---|---|---|---|---|
| SVM + RBF | C = 100, kernel = ‘rbf’ | 80.00 | 81.08 | 63.21 |
| SVM | default | 82.85 | 84.21 | 67.68 |
| Decision Tree | criterion = ‘gini’, max_depth = 4 | 85.71 | 87.17 | 72.34 |
| Bagging Classifier | base_estimator = dt, n_estimators = 300, max_samples = 0.5 | 91.42 | 92.68 | 82.49 |
* All values are percentages.
Estimated performance over Dataset-2 under case-2.
| Classifier Used | Best Performance with Hyperparameters | Accuracy * | F1 Score * | MCC * |
|---|---|---|---|---|
| SVM + RBF | C = 100, kernel = ‘rbf’ | 98.33 | 98.41 | 98.75 |
| SVM | default | 99.37 | 99.39 | 98.75 |
| Decision Tree | criterion = ‘gini’, max_depth = 4 | 99.58 | 99.60 | 99.16 |
| Bagging Classifier | base_estimator = dt, n_estimators = 300, max_samples = 0.5 | 99.37 | 99.40 | 98.74 |
* All values are percentages.
Estimated performance over Dataset-1 under case-3.
| Classifier Used | Best Performance with Hyperparameters | Accuracy * | F1 Score * | MCC * |
|---|---|---|---|---|
| SVM + RBF | C = 100, kernel = ‘rbf’ | 91.42 | 91.42 | 83.00 |
| SVM | default | 91.42 | 90.32 | 82.78 |
| Decision Tree | criterion = ‘gini’, max_depth = 4 | 91.42 | 92.30 | 84.01 |
| Bagging Classifier | base_estimator = dt, n_estimators = 300, max_samples = 0.5 | 94.28 | 94.73 | 89.11 |
* All values are percentages.
Estimated performance over Dataset-2 under case-3.
| Classifier Used | Best Performance with Hyperparameters | Accuracy * | F1 Score * | MCC * |
|---|---|---|---|---|
| SVM + RBF | C = 100, kernel = ‘rbf’ | 99.37 | 99.38 | 98.75 |
| SVM | default | 100 | 100 | 100 |
| Decision Tree | criterion = ‘gini’, max_depth = 4 | 99.58 | 99.56 | 99.16 |
| Bagging Classifier | base_estimator = dt, n_estimators = 300, max_samples = 0.5 | 100 | 100 | 100 |
* All values are percentages.
Comparison with existing approaches.
| Proposed by | Decomposition Methods | Feature Extraction from Coefficients/IMFs | Feature Concatenation across Decomposition Methods | Datasets | Classifiers | ACC (%) | F1 Score (%) | MCC (%) |
|---|---|---|---|---|---|---|---|---|
| Vipin Gupta et al. [ ] | FAWT | Cross correntropy, log energy entropy, SURE | No (single decomposition method used) | Dataset-2 (single channel) | LS-SVM, KNN | 94.41, 93.80 | - | 89, 88 |
| Anurag Nishad et al. [ ] | TQWT | Cross-information potential | No (single decomposition method used) | Dataset-2 (single channel) | RF | 99 | - | - |
| Mehdi Omidvar et al. [ ] | DWT | Standard deviation, mean, | No (single decomposition method used) | Dataset-2 (single channel) | ANN, SVM | 100, 100 | - | - |
| Duo Chen et al. [ ] | DWT | Max, min, mean, standard deviation, skewness, kurtosis, energy, normalized standard deviation and normalized energy | No (single decomposition method used) | Dataset-1 (multi-channel) | SVM with RBF kernel | 92.30 and 99.33 (overall accuracy over Dataset-1 and Dataset-2, respectively) | - | - |
| Muhammad Kaleem et al. [ ] | EMD | Projection coefficient values (for details, refer to [ ]) | No (single decomposition method used) | Dataset-1 (multi-channel) | SVM | 92.91 | - | - |
| Inung Wijayanto et al. [ ] | EMD, coarse-grained (CG) | Fractal dimension from EMD and CG | No (extracted features fed into classifiers individually) | Dataset-2 (single channel) | KNN, RF and SVM | 99, 99 and 100 | - | - |
| Asmat Zahra et al. [ ] | MEMD | Instantaneous frequency and amplitude extracted using the Hilbert transform | No (single decomposition method used) | Dataset-2 (single channel) | ANN | 87.20 | - | - |
| C. Shahnaz et al. [ ] | EMD-wavelet analysis | DWT applied over IMFs; variance, skewness and kurtosis extracted from level-4 DWT coefficients | Partially (but different from our proposed work) | Dataset-2 (single channel) | KNN | 100 | - | - |
| Shaik Jakeer Hussain et al. [ ] | DWT and EMD | Mean weighted frequency | No (two decomposition methods used separately) | Dataset-1 (multi-channel) | ANN | 97.18 | - | - |
| Marzhan Bekbalanova et al. [ ] | DWT and EMD | Mean, variance, skewness and kurtosis | No (two decomposition methods used separately) | Dataset-2 (single channel) | SVM, KNN and decision tree | DWT: 99, 97.5, 100 | - | - |
| Proposed | DWT and EMD | Mean, variance, standard deviation, curve length, skewness, kurtosis, minima and RMS | Yes (DWT coefficient-based and EMD IMF-based feature matrices concatenated) | Dataset-1 (multi-channel) | SVM-RBF, SVM, decision tree, bagging classifier | 91.42, 91.42, 91.42, 94.28 | 91.42, 90.32, 92.30, 94.73 | 83.00, 82.78, 84.01, 89.11 |
| Proposed | DWT and EMD | Mean, variance, standard deviation, curve length, skewness, kurtosis, minima and RMS | Yes (as above) | Dataset-2 (single channel) | SVM-RBF, SVM, decision tree, bagging classifier | 99.37, 100, 99.58, 100 | 99.38, 100, 99.56, 100 | 98.75, 100, 99.16, 100 |