Rifai Chai, Sai Ho Ling, Phyo Phyo San, Ganesh R. Naik, Tuan N. Nguyen, Yvonne Tran, Ashley Craig, Hung T. Nguyen.
Abstract
This paper presents an improvement in classification performance for electroencephalography (EEG)-based driver fatigue classification between fatigue and alert states, with data collected from 43 participants. The system employs autoregressive (AR) modeling as the feature extraction algorithm and sparse-deep belief networks (sparse-DBN) as the classification algorithm. Compared to other classifiers, sparse-DBN is a semi-supervised learning method that combines unsupervised learning for modeling features in the pre-training layer with supervised learning for classification in the following layer. The sparsity in sparse-DBN is achieved with a regularization term that penalizes deviation of the expected activation of hidden units from a fixed low level; this prevents the network from overfitting and enables it to learn low-level as well as high-level structures. For comparison, artificial neural network (ANN), Bayesian neural network (BNN), and original deep belief network (DBN) classifiers are used. The classification results show that with the AR feature extractor and the DBN classifier, performance improves to a sensitivity of 90.8%, a specificity of 90.4%, an accuracy of 90.6%, and an area under the receiver operating characteristic curve (AUROC) of 0.94, compared to the ANN classifier (sensitivity 80.8%, specificity 77.8%, accuracy 79.3%, AUROC 0.83) and the BNN classifier (sensitivity 84.3%, specificity 83%, accuracy 83.6%, AUROC 0.87). With the sparse-DBN classifier, performance improves further to a sensitivity of 93.9%, a specificity of 92.3%, and an accuracy of 93.1%, with an AUROC of 0.96. Overall, the sparse-DBN classifier improves accuracy by 13.8, 9.5, and 2.5% over the ANN, BNN, and DBN classifiers, respectively.
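As a rough illustration of the AR feature-extraction step described in the abstract, the sketch below estimates AR coefficients for one EEG segment via the Yule-Walker equations. The model order (4) and the 2-s, 128-Hz synthetic segment are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def ar_features(signal, order=4):
    """Estimate AR coefficients of a 1-D signal via the Yule-Walker equations.

    The biased autocorrelation sequence is formed up to the given lag, then
    the Toeplitz system R a = r is solved for the AR coefficients.
    """
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()
    n = len(x)
    # Biased autocorrelation estimates for lags 0..order
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
    # Toeplitz autocorrelation matrix: R[i, j] = r[|i - j|]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:])
    return a  # one feature per AR coefficient

# Example: a 4th-order AR feature vector from a synthetic noisy 10-Hz tone
rng = np.random.default_rng(0)
segment = np.sin(2 * np.pi * 10 * np.arange(256) / 128) + 0.1 * rng.standard_normal(256)
coeffs = ar_features(segment, order=4)
```

In the study's pipeline, one such coefficient vector per channel and window would form the feature input to the classifier.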
Keywords: autoregressive model; deep belief networks; driver fatigue; electroencephalography; sparse-deep belief networks
Year: 2017 PMID: 28326009 PMCID: PMC5339284 DOI: 10.3389/fnins.2017.00103
Source DB: PubMed Journal: Front Neurosci ISSN: 1662-453X Impact factor: 4.677
Figure 1. General structure of the EEG-based driver fatigue classification in this study.
Figure 2. Moving-window segmentation for the driver fatigue study.
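The moving-window segmentation shown in Figure 2 can be sketched as follows; the window length, step size, and channel count below are illustrative assumptions, since this record does not state the paper's actual values.

```python
import numpy as np

def moving_windows(eeg, win_len, step):
    """Slice a (channels, samples) EEG array into overlapping windows.

    Returns an array of shape (n_windows, channels, win_len); each window
    starts `step` samples after the previous one.
    """
    n = eeg.shape[-1]
    starts = range(0, n - win_len + 1, step)
    return np.stack([eeg[..., s:s + win_len] for s in starts])

# Example: 32 channels, 10 s at 128 Hz, 2-s windows with 50% overlap
eeg = np.zeros((32, 1280))
wins = moving_windows(eeg, win_len=256, step=128)
# wins.shape == (9, 32, 256)
```

Each window would then be passed to the feature extractor independently, multiplying the number of training examples per recording.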
Figure 3. Structure of sparse-DBN for driver fatigue classification: (A) greedy learning stack of sparse-RBMs; (B) the corresponding sparse-DBN.
Testing several values of the regularization constant (λ) and the constant controlling the sparseness.
| λ | Sparseness constant | Training MSE | Validation MSE | Iterations |
| 0.5 | 0.1 | 0.00492 | 0.06625 | 90 |
| 1 | 0.1 | 0.00680 | 0.06710 | 82 |
| 2 | 0.1 | 0.00676 | 0.07961 | 64 |
| 0.5 | 0.01 | 0.00542 | 0.07365 | 66 |
| 1 | 0.01 | 0.00507 | 0.08360 | 71 |
| 2 | 0.01 | 0.00395 | 0.06831 | 85 |
| 0.5 | 0.02 | 0.00288 | 0.07664 | 73 |
| 2 | 0.02 | 0.00288 | 0.07181 | 66 |
| 0.5 | 0.03 | 0.00327 | 0.08289 | 88 |
| 1 | 0.03 | 0.00574 | 0.09207 | 73 |
| 2 | 0.03 | 0.00665 | 0.09825 | 89 |
| Mean | | 0.004629 | 0.07615 | 76.42 |
| SD | | 0.001803 | 0.01269 | 9.72 |
Bold values signify the chosen parameters.
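As a hedged sketch of the sparsity term described in the abstract, i.e., a penalty on the deviation of each hidden unit's expected activation from a fixed low target, the code below shows the squared-deviation form common in sparse-RBM training. The batch shape and parameter values are illustrative, not the paper's exact formulation.

```python
import numpy as np

def sparsity_penalty(hidden_probs, target, lam):
    """Sparsity regularization term for an RBM hidden layer.

    hidden_probs: (batch, hidden_units) activation probabilities.
    Penalizes the squared deviation of each unit's mean activation from
    the low target level, scaled by the regularization constant lam.
    """
    mean_act = hidden_probs.mean(axis=0)  # expected activation per unit
    return lam * np.sum((target - mean_act) ** 2)

# A batch whose units are all active 10% of the time incurs (near) zero
# penalty when the target is 0.1; units active more often are penalized.
probs = np.full((100, 50), 0.1)
penalty = sparsity_penalty(probs, target=0.1, lam=0.5)
```

During training, the gradient of this term would be added to the usual contrastive-divergence update, pushing hidden units toward the low target activation.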
Figure 4. Plot of the training and validation MSE for early stopping of the classifiers: (A) MSE training and validation of ANN. (B) MSE training of BNN. (C) MSE training of DBN in hidden layer 1 (generative mode). (D) MSE training of sparse-DBN in hidden layer 1 (generative mode). (E) MSE training and validation of DBN in hidden layer 2 (discriminative mode). (F) MSE training and validation of sparse-DBN in hidden layer 2 (discriminative mode).
The best MSE and iteration numbers from the training of the classifiers (ANN, BNN, DBN, and Sparse-DBN).
| Classifier | Best MSE | Iterations |
| ANN | 0.115 | 110 |
| BNN | 0.0979 | 77 |
| DBN | 0.0649 | 68 |
| Sparse-DBN | 0.0520 | 69 |
Figure 5. Plot of the optimal number of hidden nodes and layers.
Results of classification of fatigue state vs. alert state for the test set with different feature extractors and classifiers (early-stopping approach).
| Feature | Metric | ANN | BNN | DBN | Sparse-DBN |
| PSD | TP | 782 | 808 | 873 | 919 |
| | FN | 264 | 238 | 173 | 127 |
| | TN | 731 | 791 | 833 | 855 |
| | FP | 315 | 255 | 213 | 191 |
| | Sensitivity (%) | 74.8 | 77.2 | 83.5 | 87.9 |
| | Specificity (%) | 69.9 | 75.6 | 79.6 | 81.7 |
| | Accuracy (%) | 72.3 | 76.4 | 81.5 | 84.8 |
| AR | TP | 845 | 882 | 950 | 982 |
| | FN | 201 | 164 | 96 | 64 |
| | TN | 814 | 868 | 946 | 965 |
| | FP | 232 | 178 | 100 | 81 |
| | Sensitivity (%) | 80.8 | 84.3 | 90.8 | 93.9 |
| | Specificity (%) | 77.8 | 83.0 | 90.4 | 92.3 |
| | Accuracy (%) | 79.3 | 83.6 | 90.6 | 93.1 |
Bold values signify improved classification results using proposed method.
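The percentage rows in the table above follow directly from the confusion-matrix counts; the generic helper below reproduces, for example, the DBN column with AR features (TP = 950, FN = 96, TN = 946, FP = 100).

```python
def classification_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity, and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# DBN with AR features (table above): TP=950, FN=96, TN=946, FP=100
sens, spec, acc = classification_metrics(950, 96, 946, 100)
# sens ~ 0.908, spec ~ 0.904, acc ~ 0.906, matching the reported
# 90.8%, 90.4%, and 90.6%
```

The same helper applied to any other column of the table recovers that classifier's reported percentages.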
Results of classification accuracy for fatigue state vs. alert state with the chosen AR feature extractor and different classifiers (3-fold cross-validation).
| Metric | ANN | BNN | DBN | Sparse-DBN |
| TP | 852.0 ± 10.583 | 888.0 ± 13.229 | 951.3 ± 4.933 | 992.0 ± 11.930 |
| FN | 194.7 ± 10.408 | 158.7 ± 13.051 | 95.3 ± 4.726 | 54.3 ± 11.719 |
| TN | 820.3 ± 13.051 | 874.7 ± 15.308 | 947.0 ± 5.292 | 976.0 ± 12.288 |
| FP | 225.7 ± 13.051 | 171.3 ± 15.308 | 99.0 ± 5.292 | 70.0 ± 12.288 |
| Sensitivity | 81.4% ± 0.010 | 84.8% ± 0.012 | 90.9% ± 0.005 | |
| Specificity | 78.4% ± 0.012 | 83.6% ± 0.015 | 90.5% ± 0.005 | |
| Accuracy | 79.9% ± 0.011 | 84.2% ± 0.014 | 90.7% ± 0.005 | |
Bold values signify improved classification results using proposed method.
Results of statistical significance of Tukey–Kramer HSD pairwise comparisons.
| Comparison | Test statistic | p |
| Sparse-DBN vs. DBN | 5.376 | 0.021 |
| Sparse-DBN vs. BNN | 15.795 | 0.001 |
| Sparse-DBN vs. ANN | 22.733 | 0.001 |
| DBN vs. BNN | 10.419 | 0.001 |
| DBN vs. ANN | 17.357 | 0.001 |
| BNN vs. ANN | 6.938 | 0.005 |
Bold values signify statistical significance of proposed method vs. other methods.
p < 0.05 statistically significant.
p < 0.01 statistically highly significant.
Figure 6. ROC plot with AUC values for the AR feature extractor and ANN, BNN, DBN, and sparse-DBN classifiers, early-stopping (hold-out cross-validation) technique.
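The AUROC values reported above can be reproduced from raw classifier scores with the standard rank-based (Mann-Whitney) estimator. The function below is a generic sketch, not the paper's implementation, and assumes no tied scores.

```python
import numpy as np

def auroc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney rank statistic.

    Equivalent to the probability that a randomly chosen positive example
    (fatigue) is scored higher than a randomly chosen negative one (alert).
    Assumes no tied scores.
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)  # rank 1 = lowest score
    n_pos, n_neg = labels.sum(), (~labels).sum()
    return (ranks[labels].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# A classifier that scores every fatigue (positive) trial above every
# alert (negative) trial attains AUROC = 1.0.
perfect = auroc([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0])
```

An AUROC of 0.5 corresponds to chance-level ranking, which is why the sparse-DBN value of 0.96 indicates near-complete separation of the two states.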
Figure 7. ROC plots with AUC values for the AR feature extractor and ANN, BNN, DBN, and sparse-DBN classifiers, 3-fold cross-validation: (A) ROC plot with AUC value for the 1st fold. (B) ROC plot with AUC value for the 2nd fold. (C) ROC plot with AUC value for the 3rd fold.
Comparison of the training time and testing time for the different classifiers.
| Classifier | Training time (s) | Testing time (s) |
| ANN | 24.02 ± 1.04 | 0.0371 ± 0.0023 |
| BNN | 55.82 ± 2.77 | 0.0381 ± 0.0082 |
| DBN | 86.79 ± 0.24 | 0.0334 ± 0.0016 |
| Sparse-DBN | 169.23 ± 0.93 | 0.0385 ± 0.0043 |