Irfan Ali Kandhro1, Mueen Uddin2, Saddam Hussain3, Touseef Javed Chaudhery4, Mohammad Shorfuzzaman5, Hossam Meshref5, Maha Albalhaq6, Raed Alsaqour7, Osamah Ibrahim Khalaf8.
Abstract
Facial expressions are highly effective at conveying sentiments and thoughts. For human-computer collaboration, data-driven animation, and human-robot communication to succeed, the capacity to recognize emotional states from facial expressions must be developed and implemented. Recently published studies have found that deep learning is becoming increasingly popular for image classification. As a result, increasingly substantial efforts have been made in recent years to solve facial expression recognition (FER) with convolutional neural networks (CNNs). In this novel FER technique, facial expressions are acquired from databases such as CK+ and JAFFE, and the effects of activation, optimization, and regularization parameters are studied. The model recognized seven emotions: happiness, sadness, surprise, fear, anger, disgust, and neutrality. Its performance was evaluated across activation functions, optimizers, and regularization settings, as well as other hyperparameters, as detailed in this study. In the experiments, the FER technique recognized emotions best with the Adam optimizer, Softmax activation, and a dropout ratio of 0.1 to 0.2. It also outperforms existing FER techniques that rely on handcrafted features and a single channel, and shows superior network performance compared with current state-of-the-art techniques.
Year: 2022 PMID: 35755731 PMCID: PMC9225833 DOI: 10.1155/2022/3098604
Source DB: PubMed Journal: Comput Intell Neurosci
Figure 1. Proposed system architecture.
CNN configuration.
| Layer (type) | Output shape | Param # |
|---|---|---|
| conv2d (Conv2D) | (None, 128, 128, 6) | 456 |
| max_pooling2d (MaxPooling2D) | (None, 64, 64, 6) | 0 |
| conv2d_1 (Conv2D) | (None, 64, 64, 16) | 2416 |
| activation (Activation) | (None, 64, 64, 16) | 0 |
| max_pooling2d_1 (MaxPooling2D) | (None, 32, 32, 16) | 0 |
| conv2d_2 (Conv2D) | (None, 30, 30, 64) | 9280 |
| max_pooling2d_2 (MaxPooling2D) | (None, 15, 15, 64) | 0 |
| flatten (Flatten) | (None, 14400) | 0 |
| dense (Dense) | (None, 128) | 1843328 |
| dropout (Dropout) | (None, 128) | 0 |
| dense_1 (Dense) | (None, 7) | 903 |
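The Param # column above can be checked directly from the layer shapes. The table omits the kernel sizes, so the helpers below infer them from the parameter counts themselves (5×5, 5×5, and 3×3, assuming 3-channel input); this is a verification sketch, not code from the paper.

```python
def conv2d_params(filters, kernel, in_channels):
    """Trainable parameters of a Conv2D layer:
    filters * (kernel_h * kernel_w * in_channels + 1 bias per filter)."""
    kh, kw = kernel
    return filters * (kh * kw * in_channels + 1)

def dense_params(units, in_features):
    """Trainable parameters of a Dense layer: weights + one bias per unit."""
    return units * in_features + units

# Reproduce the table, assuming a 128x128x3 input and the inferred kernels:
counts = [
    conv2d_params(6, (5, 5), 3),     # conv2d   -> 456
    conv2d_params(16, (5, 5), 6),    # conv2d_1 -> 2416
    conv2d_params(64, (3, 3), 16),   # conv2d_2 -> 9280
    dense_params(128, 15 * 15 * 64), # dense    -> 1843328 (flattened 14400)
    dense_params(7, 128),            # dense_1  -> 903 (7 emotion classes)
]
```

Pooling, flatten, activation, and dropout layers contribute no trainable parameters, which matches the zeros in the table.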
CNN results with CK+ and JAFFE.
| Dataset | Epochs | Batch size | Test accuracy (%) | Test loss |
|---|---|---|---|---|
| CK+ | 15 | 8 | 93 | 0.07 |
| CK+ | 20 | 16 | 94 | 0.06 |
| CK+ | 25 | 32 | 95 | 0.05 |
| CK+ | 30 | 64 | 94 | 0.06 |
| CK+ | 35 | 128 | 93 | 0.07 |
| CK+ | 40 | 256 | 96 | 0.07 |
| CK+ | 45 | 512 | 80 | 0.20 |
| CK+ | 50 | 1024 | 97 | 0.03 |
| JAFFE | 15 | 8 | 45 | 0.55 |
| JAFFE | 20 | 16 | 48 | 0.52 |
| JAFFE | 25 | 32 | 60 | 0.40 |
| JAFFE | 30 | 64 | 49 | 0.51 |
| JAFFE | 35 | 128 | 65 | 0.35 |
| JAFFE | 40 | 256 | 42 | 0.58 |
| JAFFE | 45 | 512 | 60 | 0.40 |
| JAFFE | 50 | 1024 | 52 | 0.48 |
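The epoch/batch-size experiments above follow a simple pattern: retrain the same CNN once per setting and record test accuracy and loss. A minimal sketch of that loop, assuming a Keras-style `build_model` factory and preprocessed datasets exist elsewhere (the names here are illustrative, not from the paper):

```python
# (epochs, batch_size) pairs as tested in the table above.
SETTINGS = [(15, 8), (20, 16), (25, 32), (30, 64),
            (35, 128), (40, 256), (45, 512), (50, 1024)]

def run_experiments(build_model, x_train, y_train, x_test, y_test):
    """Train one fresh model per setting and collect test metrics."""
    rows = []
    for epochs, batch_size in SETTINGS:
        model = build_model()  # fresh weights for each run
        model.compile(optimizer="adam",
                      loss="categorical_crossentropy",
                      metrics=["accuracy"])
        model.fit(x_train, y_train, epochs=epochs,
                  batch_size=batch_size, verbose=0)
        loss, acc = model.evaluate(x_test, y_test, verbose=0)
        rows.append((epochs, batch_size, acc, loss))
    return rows
```

Rebuilding the model each iteration keeps the runs independent, so a long-epoch setting does not inherit weights from a shorter one.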
Hyperparameter testing accuracy on the JAFFE dataset.
| Activation | Optimizer | Dropout | Testing accuracy |
|---|---|---|---|
| Softmax | Adam | 0.1 | 0.94 |
| Softmax | AdaGrad | 0.1 | 0.91 |
| Softmax | Nadam | 0.1 | 0.92 |
| Softplus | Adamax | 0.1 | 0.91 |
| Sigmoid | Adam | 0.1 | 0.90 |
| ReLU | Adam | 0.1 | 0.93 |
| ReLU | AdaGrad | 0.1 | 0.91 |
| ReLU | Nadam | 0.1 | 0.90 |
| ReLU | Adamax | 0.1 | 0.90 |
| Softmax | Adam | 0.2 | 0.95 |
| … | … | … | … |
| Hard sigmoid | Adamax | 0.4 | 0.88 |
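The rows above come from an exhaustive sweep over activation, optimizer, and dropout choices. A hedged sketch of such a grid search follows; the grid values are partly assumed (the table is truncated), and `train_and_evaluate` is a hypothetical stand-in for training the CNN once and returning its test accuracy:

```python
from itertools import product

# Grid values seen in (or assumed from) the truncated table above.
ACTIVATIONS = ["softmax", "softplus", "sigmoid", "relu", "hard_sigmoid"]
OPTIMIZERS = ["adam", "adagrad", "nadam", "adamax"]
DROPOUTS = [0.1, 0.2, 0.3, 0.4]

def sweep(train_and_evaluate):
    """Evaluate every (activation, optimizer, dropout) combination and
    return the best configuration by measured test accuracy."""
    results = {}
    for act, opt, drop in product(ACTIVATIONS, OPTIMIZERS, DROPOUTS):
        results[(act, opt, drop)] = train_and_evaluate(act, opt, drop)
    best = max(results, key=results.get)
    return best, results
```

With these grid sizes the sweep trains 5 × 4 × 4 = 80 models, which is why the published table shows only an excerpt.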
Figure 2. Accuracy of the CNN model with the CK+ dataset.
Figure 3. Loss of the CNN model with the CK+ dataset.
Figure 4. Accuracy of the CNN model with the JAFFE dataset.
Figure 5. Loss of the CNN model with the JAFFE dataset.
Figure 6. Prediction of the model on the CK+ dataset.