Imène Neggaz, Hadria Fizazi.
Abstract
Human facial analysis (HFA) has recently become an attractive topic for computer vision research, driven by technological progress and mobile applications. HFA explores several issues, such as gender recognition (GR), facial expression, age, and race recognition, to understand social life automatically. This study approaches HFA from the angle of recognizing a person's gender from their face. Several hard challenges arise, such as illumination, occlusion, facial emotions, image quality, and the camera's capture angle, which make gender recognition more difficult for machines. The Archimedes optimization algorithm (AOA) was recently designed as a population-based metaheuristic optimization method inspired by the physics of Archimedes' principle. Compared to other swarm algorithms in the realm of optimization, this method promotes a good balance between exploration and exploitation. Incorporating extra information, such as volume and density, into each solution enlarges the convergence area. Because of these benefits, and because AOA has not yet been used to choose the best areas of the face, we propose using it in a wrapper feature selection technique, a real motivation in the fields of computer vision and machine learning. The paper's primary purpose is to determine the optimal face areas automatically using AOA in order to recognize a person's gender, categorized into two classes (men and women). The facial image is divided into several subregions (blocks), and each block provides a vector of characteristics computed with one of several handcrafted techniques: the local binary pattern (LBP), the histogram of oriented gradients (HOG), or the gray-level co-occurrence matrix (GLCM). Two experiments assess the proposed method: the first employs two benchmark datasets, the Georgia Tech Face dataset (GT) and the Brazilian FEI dataset; the second uses a larger and more challenging uncontrolled dataset, Gallagher's dataset.
The experimental results show the good performance of AOA compared to other recent, competitive optimizers on all datasets. In terms of accuracy, the AOA-based LBP outperforms a state-of-the-art deep convolutional neural network (CNN), reaching 96.08% on Gallagher's dataset.
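As context for the handcrafted pipeline the abstract describes, the block-based LBP features can be sketched as follows: the grayscale face is encoded with the basic 3×3 LBP operator, and one normalized histogram per block is concatenated into the feature vector. This is an illustrative NumPy sketch; the 7×7 grid size, the bit ordering, and the histogram normalization are assumptions, not the paper's exact settings.

```python
import numpy as np

def lbp_image(gray: np.ndarray) -> np.ndarray:
    """Basic 3x3 LBP: threshold the 8 neighbors against the center
    pixel and pack the comparison bits into a code in [0, 255]."""
    g = gray.astype(np.int32)
    c = g[1:-1, 1:-1]  # centers (border pixels have no full neighborhood)
    # 8 neighbors, clockwise from the top-left corner (one common ordering).
    offsets = [(0, 0), (0, 1), (0, 2), (1, 2),
               (2, 2), (2, 1), (2, 0), (1, 0)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        n = g[dy:dy + c.shape[0], dx:dx + c.shape[1]]
        code |= (n >= c).astype(np.int32) << bit
    return code

def block_histograms(gray: np.ndarray, blocks: int = 7) -> np.ndarray:
    """Split the LBP map into blocks x blocks regions and concatenate
    one normalized 256-bin histogram per region (the per-block features)."""
    code = lbp_image(gray)
    h, w = code.shape
    feats = []
    for by in range(blocks):
        for bx in range(blocks):
            patch = code[by * h // blocks:(by + 1) * h // blocks,
                         bx * w // blocks:(bx + 1) * w // blocks]
            hist, _ = np.histogram(patch, bins=256, range=(0, 256))
            feats.append(hist / max(patch.size, 1))
    return np.concatenate(feats)
```

Each block then contributes 256 dimensions, so the optimizer can switch whole blocks (and their histograms) on or off rather than individual pixels.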
Keywords: Archimedes optimization algorithm (AOA); Automatic selection; Handcrafted methods; Human facial analysis (HFA); Wrapper feature selection (FS)
Year: 2022 PMID: 35250374 PMCID: PMC8889074 DOI: 10.1007/s00500-022-06886-3
Source DB: PubMed Journal: Soft comput ISSN: 1432-7643 Impact factor: 3.732
Fig. 1 Basic LBP operator
Fig. 2 Coding a face using a set of LBP histograms
Fig. 3 An example of GLCMs based on different orientations
Fig. 4 Encoding of a solution
Fig. 5 The architecture of the MLP
Fig. 6 The design framework of AOA-based FS for gender recognition
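Figs. 4 and 6 point to the usual wrapper-FS encoding: each candidate solution is a binary mask over the face blocks, and the fitness trades classification error against the fraction of blocks kept. The sketch below is a hypothetical reading of that encoding: the sigmoid transfer function, the weight alpha = 0.99, and the infinite penalty for empty masks are common conventions in binary wrapper FS, not values confirmed by this record.

```python
import math

N_BLOCKS = 49  # e.g. a 7 x 7 grid of facial sub-regions (assumption)

def binarize(position: list) -> list:
    """Map a continuous optimizer position vector to a binary block mask
    through a sigmoid transfer function (standard for binary FS)."""
    return [1 if 1.0 / (1.0 + math.exp(-x)) > 0.5 else 0 for x in position]

def fitness(mask: list, error_rate: float, alpha: float = 0.99) -> float:
    """Lower is better: weighted classification error plus the
    selection ratio (fraction of blocks kept)."""
    if not any(mask):
        return float("inf")  # an empty mask selects no features at all
    return alpha * error_rate + (1 - alpha) * (sum(mask) / len(mask))
```

Under this reading, the "selection ratio" reported in the result tables is simply `sum(mask) / len(mask)` for the best mask each optimizer finds.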
Confusion matrix
| Actual \ Predicted | Male | Female |
|---|---|---|
| Male | TrP | FaN |
| Female | FaP | TrN |
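The evaluation measures reported in the tables below follow directly from this confusion matrix (TrP, FaN, FaP, TrN), with the male class taken as positive. A minimal sketch, using illustrative counts rather than the paper's:

```python
def metrics(trp: int, fan: int, fap: int, trn: int) -> dict:
    """Accuracy, recall, precision, and F-score from confusion-matrix
    counts (male taken as the positive class)."""
    accuracy = (trp + trn) / (trp + fan + fap + trn)
    recall = trp / (trp + fan)       # male faces correctly recognized
    precision = trp / (trp + fap)    # predicted males that are male
    f_score = 2 * precision * recall / (precision + recall)
    return {"accuracy": accuracy, "recall": recall,
            "precision": precision, "f_score": f_score}

# Illustrative counts, not taken from the paper:
m = metrics(trp=90, fan=10, fap=5, trn=95)
```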
Parameter settings of the physical-, mathematical- and swarm-inspired algorithms
| Algorithm | Parameter settings |
|---|---|
| Common settings | Population size; maximum number of iterations; maximal limit = 1; minimal limit = 0; dimension corresponds to the number of blocks |
| AOA (Hashim et al.) | |
| EO (Faramarzi et al.) | Generation probability |
| MVO (Mirjalili et al.) | Wormhole existence probability; traveling distance rate |
| HGSO (Hashim et al.) | Clusters number = 2 |
| EPO (Dhiman and Kumar) | Temperature profile; function |
| SCA (Mirjalili) | a |
| MRFO (Zhao et al.) | |
| HHO (Heidari et al.) | |
The impact of feature descriptors on the performance of AOA against other recent optimizers in terms of the fitness measure
| Fitness | GT dataset | | | FEI dataset | | |
|---|---|---|---|---|---|---|
| Algorithms | HOG | LBP | GLCM | HOG | LBP | GLCM |
| HHO | 0.8947 | 0.8644 | 0.8913 | 0.9776 | 0.9262 | 0.9849 |
| SCA | 0.8914 | 0.8653 | 0.8873 | 0.9820 | 0.9518 | 0.9841 |
| EO | 0.9002 | 0.8642 | 0.8927 | 0.9853 | 0.9461 | 0.9829 |
| EPO | 0.8966 | 0.8658 | 0.8887 | 0.9804 | 0.9351 | 0.9780 |
| MRFO | 0.8950 | 0.8593 | 0.8902 | 0.9837 | 0.9412 | 0.9853 |
| HGSO | 0.8902 | 0.8543 | 0.8833 | 0.9837 | 0.9298 | 0.9834 |
| MVO | 0.8981 | 0.8589 | 0.8954 | 0.9805 | 0.9335 | 0.9825 |
| AOA | ||||||
The impact of feature descriptors on the performance of AOA against other recent optimizers in terms of CPU time
| CPU time | GT dataset | | | FEI dataset | | |
|---|---|---|---|---|---|---|
| Algorithms | HOG | LBP | GLCM | HOG | LBP | GLCM |
| HHO | 446.6000 | 466.0300 | 363.8200 | 249.3200 | 295.8900 | 258.3600 |
| SCA | 301.8600 | 211.4300 | 264.7200 | |||
| EO | 324.7700 | 221.4100 | 250.8200 | 214.4500 | 295.8000 | |
| EPO | 272.2800 | 324.0800 | 248.1200 | 126.0400 | 211.3700 | |
| MRFO | 301.8600 | 445.4700 | 468.4100 | 302.8600 | 226.5200 | 218.7900 |
| HGSO | 454.8300 | 398.6100 | 312.1300 | 282.4900 | 199.3700 | 229.3200 |
| MVO | 368.0900 | 430.3200 | 369.4800 | 375.0500 | 272.1800 | 229.4900 |
| AOA | 391.3700 | 332.3200 | 388.8600 | 197.0700 | 231.6700 | 340.9200 |
The impact of feature descriptors on the performance of AOA against other recent optimizers in terms of accuracy
| Accuracy | GT dataset | | | FEI dataset | | |
|---|---|---|---|---|---|---|
| Algorithms | HOG | LBP | GLCM | HOG | LBP | GLCM |
| HHO | 0.8990 | 0.8689 | 0.8960 | 0.9831 | 0.9320 | 0.9887 |
| SCA | 0.8914 | 0.8660 | 0.8887 | 0.9841 | 0.9534 | 0.9864 |
| EO | 0.9031 | 0.8651 | 0.8953 | 0.9889 | 0.9506 | 0.9864 |
| EPO | 0.8993 | 0.8669 | 0.8917 | 0.9840 | 0.9370 | 0.9815 |
| MRFO | 0.9028 | 0.8636 | 0.8944 | 0.9893 | 0.9456 | 0.9889 |
| HGSO | 0.8944 | 0.8578 | 0.8873 | 0.9881 | 0.9349 | 0.9886 |
| MVO | 0.9020 | 0.8624 | 0.8997 | 0.9865 | 0.9392 | 0.9875 |
| AOA | ||||||
The impact of feature descriptors on the performance of AOA against other recent optimizers in terms of the selection ratio
| Selection ratio | GT dataset | | | FEI dataset | | |
|---|---|---|---|---|---|---|
| Algorithms | HOG | LBP | GLCM | HOG | LBP | GLCM |
| HHO | 0.5306 | 0.5714 | 0.5714 | 0.5714 | 0.6122 | 0.3878 |
| SCA | 0.3673 | |||||
| EO | 0.3878 | 0.5306 | 0.3673 | 0.3673 | 0.4898 | 0.3673 |
| EPO | 0.3673 | 0.2449 | 0.4082 | 0.3673 | 0.2449 | 0.3673 |
| MRFO | 0.8776 | 0.5714 | 0.5306 | 0.5714 | 0.4898 | 0.3673 |
| HGSO | 0.5306 | 0.4898 | 0.5102 | 0.4490 | 0.5714 | 0.5306 |
| MVO | 0.4898 | 0.4898 | 0.5306 | 0.6122 | 0.6327 | 0.5102 |
| AOA | 0.2449 | 0.5510 | 0.3469 | 0.6327 | 0.4694 | |
The impact of feature descriptors on the performance of AOA against other recent optimizers in terms of recall
| Recall | GT dataset | | | FEI dataset | | |
|---|---|---|---|---|---|---|
| Algorithms | HOG | LBP | GLCM | HOG | LBP | GLCM |
| HHO | 0.9040 | 0.8747 | 0.9000 | 0.9875 | 0.9425 | 0.9900 |
| SCA | 0.8960 | 0.8693 | 0.8907 | 0.9850 | 0.9900 | |
| EO | 0.9067 | 0.8693 | 0.8987 | 0.9925 | 0.9475 | 0.9900 |
| EPO | 0.9040 | 0.8960 | 0.9875 | 0.9400 | 0.9825 | |
| MRFO | 0.8707 | 0.8973 | 0.9500 | 0.9925 | ||
| HGSO | 0.9000 | 0.8613 | 0.8880 | 0.9925 | 0.9400 | |
| MVO | 0.9080 | 0.8693 | 0.9040 | 0.9925 | 0.9450 | 0.9925 |
| AOA | 0.9040 | 0.8733 | 0.9475 | |||
The impact of feature descriptors on the performance of AOA against other recent optimizers in terms of precision
| Precision | GT dataset | | | FEI dataset | | |
|---|---|---|---|---|---|---|
| Algorithms | HOG | LBP | GLCM | HOG | LBP | GLCM |
| HHO | 0.9014 | 0.8949 | 0.9903 | 0.9347 | 0.9951 | |
| SCA | 0.8891 | 0.8400 | 0.8870 | 0.9880 | 0.9560 | 0.9877 |
| EO | 0.9018 | 0.8414 | 0.8927 | 0.9928 | 0.9904 | |
| EPO | 0.8960 | 0.8475 | 0.8883 | 0.9882 | 0.9383 | 0.9881 |
| MRFO | 0.9052 | 0.8352 | 0.8913 | 0.9510 | 0.9929 | |
| HGSO | 0.8908 | 0.8361 | 0.8874 | 0.9928 | 0.9417 | 0.9927 |
| MVO | 0.8974 | 0.8399 | 0.8977 | 0.9927 | 0.9478 | 0.9927 |
| AOA | 0.8529 | 0.9457 | ||||
The impact of feature descriptors on the performance of AOA against other recent optimizers in terms of F-score
| F-score | GT dataset | | | FEI dataset | | |
|---|---|---|---|---|---|---|
| Algorithms | HOG | LBP | GLCM | HOG | LBP | GLCM |
| HHO | 0.8977 | 0.8427 | 0.8953 | 0.9887 | 0.9379 | 0.9924 |
| SCA | 0.8892 | 0.8428 | 0.8866 | 0.9862 | 0.9888 | |
| EO | 0.9026 | 0.8400 | 0.8922 | 0.9925 | 0.9547 | 0.9900 |
| EPO | 0.8981 | 0.8898 | 0.9875 | 0.9388 | 0.9848 | |
| MRFO | 0.8328 | 0.8896 | 0.9499 | 0.9925 | ||
| HGSO | 0.8918 | 0.8359 | 0.8854 | 0.9925 | 0.9397 | 0.9938 |
| MVO | 0.9004 | 0.8351 | 0.8967 | 0.9925 | 0.9449 | 0.9925 |
| AOA | 0.9050 | 0.8422 | 0.9453 | |||
The performance results of AOA based on the HOG descriptor against other recent optimizers for Gallagher’s dataset
| Gallagher’s dataset – HOG Descriptor | |||||||
|---|---|---|---|---|---|---|---|
| Algorithms | Fitness | Accuracy | Recall | Precision | F-score | Selection ratio | CPU time |
| HHO | 0.9336 | 0.9405 | 0.9476 | 0.9477 | 0.9476 | 0.5510 | 18317.6000 |
| SCA | 0.9125 | 0.9137 | 0.9165 | 0.9143 | 0.9152 | 0.4490 | 19271.4500 |
| EO | 0.9387 | 0.9405 | 0.9416 | 0.9426 | 0.9420 | 0.4898 | 8490.3050 |
| EPO | 0.9100 | 0.912 | 0.9130 | 0.9144 | 0.9136 | 0.5306 | 16416.4600 |
| MRFO | 0.9451 | 0.9501 | 0.9549 | 0.9553 | 0.9550 | 0.5306 | 9257.9900 |
| HGSO | 0.9167 | 0.9183 | 0.9212 | 0.9198 | 0.9204 | 0.4898 | 1449.6810 |
| MVO | 0.9169 | 0.9196 | 0.9202 | 0.9230 | 0.9213 | 0.5918 | 1319.9640 |
| AOA | |||||||
The performance results of AOA based on the LBP descriptor against other recent optimizers for Gallagher’s dataset
| Gallagher’s dataset – LBP Descriptor | |||||||
|---|---|---|---|---|---|---|---|
| Algorithms | Fitness | Accuracy | Recall | Precision | F-score | Selection ratio | CPU time |
| HHO | 0.9364 | 0.9419 | 0.9475 | 0.9482 | 0.9475 | 0.6122 | 1564.6520 |
| SCA | 0.9506 | 0.9528 | 0.9550 | 0.9578 | 0.9550 | 0.2653 | 745.7800 |
| EO | 0.8754 | 0.8742 | 0.8693 | 0.8744 | 0.8708 | 0.3061 | 6460.6420 |
| EPO | 0.9461 | 0.9481 | 0.9500 | 0.9503 | 0.9500 | 0.3878 | 615.9800 |
| MRFO | 0.9383 | 0.9422 | 0.9450 | 0.9504 | 0.9463 | 0.4490 | 615.8400 |
| HGSO | 0.9192 | 0.9233 | 0.9275 | 0.9287 | 0.9276 | 0.4898 | 6460.6400 |
| MVO | 0.9310 | 0.9342 | 0.9375 | 0.9383 | 0.9375 | 0.3878 | 1449.0000 |
| AOA | |||||||
The performance results of AOA based on the GLCM descriptor against other recent optimizers for Gallagher’s dataset
| Gallagher’s dataset – GLCM Descriptor |
|---|---|---|---|---|---|---|---|
| Algorithms | Fitness | Accuracy | Recall | Precision | F-score | Selection ratio | CPU time |
| HHO | 0.9425 | 0.9475 | 0.9525 | 0.9533 | 0.9524 | 0.5510 | 1861.5200 |
| SCA | 0.9535 | 1106.1770 | |||||
| EO | 0.9470 | 0.9501 | 0.9520 | 0.9530 | 0.9520 | 0.4490 | 1566.2000 |
| EPO | 0.9408 | 0.9441 | 0.9475 | 0.9486 | 0.9475 | 0.3878 | 1600.2400 |
| MRFO | 0.9445 | 0.9457 | 0.9495 | 0.9496 | 0.9495 | 0.5714 | 1560.8800 |
| HGSO | 0.9417 | 0.9452 | 0.9500 | 0.9483 | 0.9488 | 0.4082 | 453.2250 |
| MVO | 0.9437 | 0.9493 | 0.9524 | 0.9534 | 0.6122 | 2045.6700 | |
| AOA | 0.9482 | 0.9503 | 0.2653 | ||||
Fig. 7 Convergence curves of AOA versus other swarm intelligence algorithms over the smallest and largest datasets
Fig. 8 ROC of AOA versus other swarm intelligence algorithms over the smallest and largest datasets
Fig. 9 Visual examples of selected patches from Gallagher’s dataset using AOA
Comparative performance in terms of accuracy with the existing methods – GT dataset
| References | Classifier | Extracted features | Accuracy |
|---|---|---|---|
| Goel and Vishwakarma ( | SVM (2-folds) | DCT | 98.96% |
| Goel and Vishwakarma ( | SVM (2-folds) | KPCA | 97.38% |
| Goel and Vishwakarma ( | SVM (2-folds) | DWT+DCT | 99% |
| Proposed method | AOA-BPNN (2-folds) | Multi-blocks HOG | |
| | | Multi-blocks LBP | 97.25% |
| | | Multi-blocks GLCM | 99.15% |
Statistical study using Wilcoxon’s test (best values in bold, which implies that AOA is substantially better than algorithm X)
| AOA versus | GT dataset | | | FEI dataset | | | Gallagher’s dataset | | |
|---|---|---|---|---|---|---|---|---|---|
| | HOG | LBP | GLCM | HOG | LBP | GLCM | HOG | LBP | GLCM |
| HHO | | | | | | | | | 5.12E-01 |
| SCA | | | | | | | | | |
| EO | | | | 2.64E-01 | | | | | |
| EPO | 8.09E-01 | | | | | | | | |
| MRFO | | | | | | | | | 1.17E-01 |
| HGSO | | | | | | | | | |
| MVO | 9.91E-02 | | | | | | | | |
Comparative performance in terms of accuracy with the existing approaches – FEI dataset
| References | Classifier | Extracted features | Accuracy |
|---|---|---|---|
| Micheal and Geetha ( | SVM | DRLBP++RILPQ+PHOG | 95.30% |
| Geetha et al. ( | SVM | 8-LDP+LBP | 99% |
| Ghojogh et al. ( | LDA+weighting vote | Intensity of lower part of face | 94% |
| Haider et al. ( | Deepgender | * | 98.75% |
| Zhou and Li ( | GA-BPNN | Eigen-features based on PCA | 96% |
| Khan et al. ( | MSFS-CRFs | Segmentation based on Super-Pixels | 93.70% |
| Kumar et al. ( | SVM | Multi-features (BoW+SIFT) | 98% |
| Proposed method | AOA-BPNN | Multi-blocks HOG | |
| | | Multi-blocks LBP | 95.61% |
| | | Multi-blocks GLCM | 99.04% |
Comparative performance in terms of accuracy with the existing approaches – Gallagher’s dataset under Dago’s protocol
| References | Classifier | Extracted features | Accuracy |
|---|---|---|---|
| Dago-Casas et al. ( | SVM | Gabor+PCA | 86.01% |
| Castrillón-Santana et al. ( | Bagging | LBP+HOG | 88.01% |
| Castrillón-Santana et al. ( | SVM | HOG+ | 82.91% |
| Mansanet et al. ( | Local DNN | * | 90.58% |
| Orozco et al. ( | Ubunsa CNN | * | 91.48% |
| Abdalrady and Aly ( | 2-stage PCANet | * | 89.65% |
| Proposed method | AOA-BPNN | Multi-blocks HOG | 95.51% |
| | | Multi-blocks LBP | 96.08% |
| | | Multi-blocks GLCM | 95.03% |
Fig. 10 The performance of AOA against MLP