Arindam Kar, Debotosh Bhattacharjee, Dipak Kumar Basu, Mita Nasipuri, Mahantapas Kundu.
Abstract
In this paper, a nonlinear Gabor Wavelet Transform (GWT) discriminant feature extraction approach for enhanced face recognition is proposed. Firstly, the low-energized blocks are extracted from Gabor wavelet transformed images. Secondly, the nonlinear discriminating features are analyzed and extracted from the selected low-energized blocks by the generalized Kernel Discriminative Common Vector (KDCV) method. The KDCV method is extended to include the cosine kernel function in the discriminating method. The KDCV with the cosine kernel is then applied to the extracted low-energized discriminating feature vectors to obtain the real component of a complex quantity for face recognition. In order to derive positive kernel discriminative vectors, we apply only those kernel discriminative eigenvectors that are associated with nonzero eigenvalues. The feasibility of the low-energized Gabor-block-based generalized KDCV method with cosine kernel function models has been successfully tested for classification using the L1 and L2 distance measures and the cosine similarity measure, on both frontal and pose-angled face recognition. Experimental results on the FRAV2D and FERET databases demonstrate the effectiveness of this new approach.
Year: 2012 PMID: 23365559 PMCID: PMC3529878 DOI: 10.1155/2012/421032
Source DB: PubMed Journal: Comput Intell Neurosci
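For context on the kernel named in the abstract, the standard cosine kernel is k(x, y) = ⟨x, y⟩ / (‖x‖ ‖y‖). The following is a minimal sketch under that standard definition, not the authors' implementation, which may use a different normalization or a polynomial variant:

```python
import numpy as np

def cosine_kernel(X, Y):
    """Cosine kernel matrix K[i, j] = <x_i, y_j> / (||x_i|| * ||y_j||).

    X and Y are (n_samples, n_features) arrays of feature vectors,
    e.g. low-energized Gabor-block features. Illustrative only.
    """
    # Normalize each row to unit length, then take inner products.
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    Yn = Y / np.linalg.norm(Y, axis=1, keepdims=True)
    return Xn @ Yn.T
```

By construction every entry lies in [−1, 1], and the diagonal of the self-kernel K(X, X) is exactly 1.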
Figure 1: Demonstration images of an individual from the FRAV2D database.
Figure 2: Demonstration images of one class from the FRAV2D database.
Figure 3: Demonstration images of one individual from the FERET database.
Figure 4: Demonstration images of one class from the FERET database.
Recognition results of different algorithms on the FERET probe sets.

| Method | Fb | Fc | Dup I | Dup II |
|---|---|---|---|---|
| Phillips et al. [ | 96 | 82 | 59 | 52 |
| Local Gabor binary pattern histogram sequence [ | 98 | 97 | 74 | 71 |
| Grassmann registration manifolds for face recognition [ | 98 | 98 | 80 | 84 |
| Low-energized Gabor-block-based KDCV with RBF (Gaussian) kernels using the cosine measure | 96 | 97 | 79 | 81 |
| Low-energized Gabor-block-based KDCV with cosine kernels using | 95 | 96 | 79 | 80 |
| Low-energized Gabor-block-based KDCV with cosine kernels using | 96 | 97 | 86 | 82 |
| Low-energized Gabor-block-based KDCV with cosine kernels using the cosine measure | | | | |
Average recognition results using the FRAV2D database.

| Method | 3 training samples (%) | 4 training samples (%) | Average recognition rate (%) |
|---|---|---|---|
| GWT | 85.5 | 89.5 | 87.5 |
| KDCV | 79.5 | 82 | 80.75 |
| GWT-LDA | 88.3 | 90.33 | 89.33 |
| GWT-KDCV (RBF) | 87 | 90 | 88.5 |
| GWT-KDCV (Cosine) | 88 | 90 | 89 |
| GWT-KDCV (RBF, low-energized) | 90 | 92.5 | 91.25 |
| GWT-KDCV (Cosine, low-energized) | | | |
Specificity and sensitivity measures on the FRAV2D dataset (total no. of classes = 100, total no. of images = 1800; "positive" means the individual belongs to the particular class).

Using the first 3 images of an individual as training images:

| FRAV2D test | Actual positive | Actual negative |
|---|---|---|
| Positive | | |
| Negative | | |

Sensitivity = 94.875%; Specificity = 99%.

Using the first 4 images of an individual as training images:

| FRAV2D test | Actual positive | Actual negative |
|---|---|---|
| Positive | | |
| Negative | | |

Sensitivity = 96.85%; Specificity = 99.7%.
So, considering the first 4 images in Figures 2(a)–2(d) of a particular individual for training, the achieved rates are as follows.
False positive rate = FP/(FP + TN) = 1 − Specificity = 0.3%.
False negative rate = FN/(TP + FN) = 1 − Sensitivity = 3.15%.
Accuracy = (TP + TN)/(TP + TN + FP + FN) ≈ 98.3%.
So, considering the first 3 images in Figures 2(a)–2(c) of a particular individual for training, the achieved rates are as follows.
False positive rate = FP/(FP + TN) = 1 − Specificity = 1%.
False negative rate = FN/(TP + FN) = 1 − Sensitivity = 5.125%.
Accuracy = (TP + TN)/(TP + TN + FP + FN) ≈ 96.9%.
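The rates above follow directly from confusion-matrix counts. As a minimal sketch (an illustrative helper, not code from the paper), the standard definitions can be computed as:

```python
def confusion_rates(tp, tn, fp, fn):
    """Derive classification rates from confusion-matrix counts.

    Standard definitions: sensitivity = TP/(TP + FN),
    specificity = TN/(TN + FP). Illustrative helper only.
    """
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return {
        "sensitivity": sensitivity,
        "specificity": specificity,
        "false_positive_rate": 1.0 - specificity,   # FP / (FP + TN)
        "false_negative_rate": 1.0 - sensitivity,   # FN / (TP + FN)
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
    }
```

For example, hypothetical counts TP = 90, TN = 99, FP = 1, FN = 10 give sensitivity 0.9, specificity 0.99, and accuracy 0.945.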
Average recognition results using the FERET database.

| Method | 3 training samples (%) | 4 training samples (%) | Average recognition rate (%) |
|---|---|---|---|
| GWT | 79.65 | 82.5 | 81.1 |
| KDCV | 69.5 | 75 | 72.25 |
| GWT-LDA | 82.76 | 85.33 | 84.1 |
| GWT-KDCV (RBF) | 81 | 82.50 | 81.75 |
| GWT-KDCV (Cosine) | 83 | 85 | 84.5 |
| GWT-KDCV (RBF, low-energized) | 88.5 | 91.5 | 90 |
| GWT-KDCV (Cosine, low-energized) | | | |
Specificity and sensitivity measures on the FERET dataset (total no. of classes = 200, total no. of images = 3600; "positive" means the individual belongs to the particular class).

Using the first 3 images of an individual as training images:

| FERET test | Actual positive | Actual negative |
|---|---|---|
| Positive | | |
| Negative | | |

Sensitivity = 93.25%; Specificity = 98.8%.

Using the first 4 images of an individual as training images:

| FERET test | Actual positive | Actual negative |
|---|---|---|
| Positive | | |
| Negative | | |

Sensitivity = 94%; Specificity = 99.28%.
So, considering the first 4 images in Figures 4(a)–4(d) of a particular individual for training, the achieved rates are as follows.
False positive rate = FP/(FP + TN) = 1 − Specificity = 0.72%.
False negative rate = FN/(TP + FN) = 1 − Sensitivity = 6%.
Accuracy = (TP + TN)/(TP + TN + FP + FN) ≈ 96.6%.
So, considering the first 3 images in Figures 4(a)–4(c) of a particular individual for training, the achieved rates are as follows.
False positive rate = FP/(FP + TN) = 1 − Specificity = 1.2%.
False negative rate = FN/(TP + FN) = 1 − Sensitivity = 6.75%.
Accuracy = (TP + TN)/(TP + TN + FP + FN) ≈ 96.1%.
Comparison of recognition accuracy of various methods with the proposed method.

| Method | Highest recognition accuracy |
|---|---|
| GDA | 78.04% |
| Elastic graph matching (EGM) [ | 80.00% |
| DCT-LDA | 80.87% |
| GWT | 81.1% |
| DCT-GDA | 82.84% |
| GWT-LDA | 84.1% |
| DCT-KDCV | 85.13% |
| Gabor fusion KDCV | 91.22% |
| Proposed approach | |
Figure 5: Face recognition performance of the proposed method using the cosine kernel function and the first 3 images as the training set on the FERET dataset, with three different similarity measures: cos (cosine similarity measure), L2 (L2 distance measure), L1 (L1 distance measure).
Figure 6: Face recognition performance of the proposed method using the cosine kernel function and the first 3 images as the training set on the FRAV2D dataset, with three different similarity measures: cos (cosine similarity measure), L2 (L2 distance measure), L1 (L1 distance measure).
Figure 7: Negative recognition performance of the proposed method using the cosine similarity distance measure on the FRAV2D and FERET datasets.
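The three similarity measures named in the figure captions can be sketched as follows. This is a minimal illustration of the standard definitions only; the paper applies them to KDCV-projected feature vectors, which is not reproduced here:

```python
import numpy as np

def l1_distance(a, b):
    """L1 (city-block) distance between two feature vectors."""
    return np.sum(np.abs(a - b))

def l2_distance(a, b):
    """L2 (Euclidean) distance between two feature vectors."""
    return np.sqrt(np.sum((a - b) ** 2))

def cosine_similarity(a, b):
    """Cosine similarity: larger means more similar (distances: smaller)."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
```

Note the opposite polarity: for the two distance measures a test image is matched to the class with the smallest value, while for the cosine measure it is matched to the class with the largest value.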