Suparshya Babu Sukhavasi, Susrutha Babu Sukhavasi, Khaled Elleithy, Ahmed El-Sayed, Abdelrahman Elleithy.
Abstract
Machine learning and deep learning are two branches of artificial intelligence that have proven very efficient at solving advanced human problems. The automotive industry currently uses these techniques in advanced driver assistance systems, which support various driving functions and estimate a driver's capacity for stable driving behavior and road safety. Many studies have shown that a driver's emotions are significant factors governing driving behavior and can lead to severe vehicle collisions. Continuous monitoring of drivers' emotions can therefore help predict their behavior and avoid accidents. To achieve this goal, a novel hybrid network architecture combining a deep neural network and a support vector machine has been developed to recognize six to seven driver emotions under different poses, occlusions, and illumination conditions. To determine the emotions, a fusion of Gabor and LBP features is extracted and classified using a support vector machine classifier combined with a convolutional neural network. Our proposed model achieved accuracies of 84.41%, 95.05%, 98.57%, and 98.64% on the FER 2013, CK+, KDEF, and KMU-FED datasets, respectively.
Keywords: ADAS (advanced driver assistance systems); convolutional neural network; driver emotion detection; facial expression recognition; hybrid model; machine learning; support vector machine
Year: 2022 PMID: 35270777 PMCID: PMC8909976 DOI: 10.3390/ijerph19053085
Source DB: PubMed Journal: Int J Environ Res Public Health ISSN: 1660-4601 Impact factor: 3.390
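The abstract describes fusing Gabor and LBP features before classification with a linear SVM. The sketch below is a minimal, NumPy-only illustration of that kind of feature fusion; the kernel sizes, orientations, and function names are illustrative assumptions, not values taken from the paper, and the resulting vector would feed any linear SVM classifier.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def gabor_kernel(ksize=9, sigma=2.0, theta=0.0, lam=4.0, gamma=0.5):
    """Real part of a Gabor filter at orientation theta (radians).
    All parameter values here are illustrative defaults, not the paper's."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / lam)

def lbp_codes(img):
    """Basic 3x3 local binary pattern codes for the interior pixels."""
    center = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros(center.shape, dtype=np.uint8)
    h, w = img.shape
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neighbor >= center).astype(np.uint8) << bit
    return codes

def fused_features(img, n_orientations=4):
    """Concatenate mean |Gabor response| per orientation with a
    normalized 256-bin LBP histogram (a simple fusion by concatenation)."""
    feats = []
    for i in range(n_orientations):
        k = gabor_kernel(theta=i * np.pi / n_orientations)
        windows = sliding_window_view(img, k.shape)   # valid cross-correlation
        response = np.einsum('ijkl,kl->ij', windows, k)
        feats.append(np.abs(response).mean())
    hist = np.bincount(lbp_codes(img).ravel(), minlength=256)
    hist = hist / hist.sum()
    return np.concatenate([np.array(feats), hist])
```

With four orientations the fused vector has 4 + 256 = 260 dimensions per image; the paper's actual feature dimensionality is not stated in this record.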
Figure 1. Hybrid network.
Figure 2. Face and facial-component detection using a Haar feature cascade classifier.
Figure 3. (a) FER 2013 dataset. (b) CK+ dataset. (c) KDEF dataset. (d) KMU-FED dataset.
Parameter settings used to train our CNN model on all four databases (CK+, FER 2013, KDEF, KMU-FED).
| Parameter | Setting |
|---|---|
| Image size | 256 × 256 |
| Optimizer | Stochastic gradient descent (SGD) |
| Loss function | Cross-entropy |
| Activation function | ReLU |
| Batch size | 128 |
| Learning rate | 0.001 |
| Epochs | 100 |
| Momentum | 0.9 |
| Validation frequency | 30 |
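The table lists SGD with momentum 0.9, learning rate 0.001, and a cross-entropy loss. A minimal NumPy sketch of what one such update and loss evaluation look like, using only the hyperparameters from the table (the function names are illustrative):

```python
import numpy as np

def sgd_momentum_step(w, grad, velocity, lr=0.001, momentum=0.9):
    """One SGD update with momentum (lr and momentum from the table above).
    The velocity accumulates a decaying sum of past gradients."""
    velocity = momentum * velocity - lr * grad
    return w + velocity, velocity

def cross_entropy(logits, label):
    """Cross-entropy loss of one sample, via a numerically stable
    log-softmax over the class logits."""
    z = logits - logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]
```

With uniform logits over n classes the loss is log(n), e.g. log(7) ≈ 1.946 for seven emotion classes, which is the expected starting loss of an untrained classifier.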
Parameter settings used to train our SVM model on all four databases (CK+, FER 2013, KDEF, KMU-FED).
| Parameter | Setting |
|---|---|
| Objective function | Hinge loss |
| Solver | SGD |
| Kernel | Linear |
| Type | One-vs-one |
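The SVM settings name a hinge-loss objective and a one-vs-one multiclass scheme. A short sketch of both ideas (function names are illustrative): the binary hinge loss that SGD minimizes, and the number of pairwise binary SVMs that one-vs-one trains.

```python
from itertools import combinations

def hinge_loss(score, y):
    """Binary SVM hinge loss: y in {-1, +1}, score = w.x + b.
    Zero once the sample is on the correct side with margin >= 1."""
    return max(0.0, 1.0 - y * score)

def ovo_pairs(n_classes):
    """One-vs-one trains one binary SVM per unordered class pair,
    i.e. n(n-1)/2 classifiers; prediction is by majority vote."""
    return list(combinations(range(n_classes), 2))
```

For seven emotion classes, one-vs-one therefore trains 7 × 6 / 2 = 21 binary classifiers.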
Comparison of the proposed approach with state-of-the-art methods on the FER 2013 database.
| Comparison Methods | Accuracy (%) |
|---|---|
| CNN-MNF | 70.3 |
| CNN-BOVW-SVM | 75.4 |
| KCNN-SVM | 80.3 |
| VCNN | 65.7 |
| EXNET | 73.5 |
| Deep-Emotion | 70.0 |
| IRCNN-SVM | 68.1 |
| GLFCNN+SVM (Our Proposed Approach) | 84.4 |
Comparison of the proposed approach with state-of-the-art methods on the CK+ database.
| Comparison Methods | Accuracy (%) |
|---|---|
| Inception-Resnet and LSTM | 93.2 |
| DCMA-CNN | 93.4 |
| WRF | 92.6 |
| LMRF | 93.4 |
| VGG11+SVM | 92.2 |
| DNN+RELM | 86.5 |
| LBP+ORB+SVM | 93.2 |
| MDNETWORK | 96.2 |
| GLFCNN+SVM (Our Proposed Approach) | 95.1 |
Comparison of the proposed approach with state-of-the-art methods on the KDEF database.
| Comparison Methods | Accuracy (%) |
|---|---|
| MULTICNN | 89.5 |
| HDNN | 96.2 |
| RTCNN | 88.1 |
| ALEXNET+LDA | 88.6 |
| MSLBP+SVM | 89.0 |
| DL-FER | 96.6 |
| RBFNN | 88.8 |
| GLFCNN+SVM (Our Proposed Approach) | 98.5 |
Comparison of the proposed approach with state-of-the-art methods on the KMU-FED database.
| Comparison Methods | Accuracy (%) |
|---|---|
| WRF | 94.0 |
| FTDRF | 93.6 |
| d-RFs | 91.2 |
| SqueezeNet | 89.7 |
| MobileNetV3 | 94.9 |
| LMRF | 95.1 |
| CCNN | 97.3 |
| VGG16 | 94.2 |
| GLFCNN+SVM (Our Proposed Approach) | 98.6 |
Figure 4. Confusion matrices of the proposed method with accuracy (%) using (a) FER 2013 dataset, (b) CK+ dataset, (c) KDEF dataset, and (d) KMU-FED dataset.
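Figure 4's matrices and the reported accuracies follow from the standard confusion-matrix bookkeeping. A minimal NumPy sketch of how such a matrix and the overall accuracy are computed from predicted labels (the function names are illustrative):

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """Count matrix with rows = true class, columns = predicted class."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

def overall_accuracy(cm):
    """Fraction of samples on the diagonal (correct predictions)."""
    return cm.trace() / cm.sum()
```

Per-class accuracy, as shown in such figures, is each diagonal entry divided by its row sum.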