Daniela Sánchez, Patricia Melin, Oscar Castillo.
Abstract
A grey wolf optimizer for modular neural networks (MNNs) with a granular approach is proposed. The proposed method performs optimal granulation of data and design of modular neural network architectures to perform human recognition; to prove its effectiveness, benchmark databases of ear, iris, and face biometric measures are used for tests and comparisons against other works. The design of a modular granular neural network (MGNN) consists of finding optimal parameters of its architecture: the number of subgranules, the percentage of data for the training phase, the learning algorithm, the goal error, the number of hidden layers, and the number of neurons per layer. A great variety of approaches and techniques has emerged within the evolutionary computing area to help find optimal solutions to problems or models, and bioinspired algorithms are part of this area. In this work a grey wolf optimizer is proposed for the design of modular granular neural networks, and its results are compared against a genetic algorithm and a firefly algorithm to determine which of these techniques provides better results when applied to human recognition.
Year: 2017 PMID: 28894461 PMCID: PMC5574275 DOI: 10.1155/2017/4180510
Source DB: PubMed Journal: Comput Intell Neurosci
Figure 1. The general architecture of the proposed method.
Figure 2. Hierarchy of the grey wolf.
Pseudocode 1. Pseudocode of the grey wolf optimizer.
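The update rules behind the pseudocode can be illustrated in code. The sketch below is a minimal, generic grey wolf optimizer (each agent is pulled toward the alpha, beta, and delta leaders, with the coefficient a decreasing linearly from 2 to 0); it is not the authors' implementation, and the fitness function, bounds, and parameter names are placeholders:

```python
import numpy as np

def grey_wolf_optimizer(fitness, dim, bounds, n_agents=10, max_iter=30, seed=0):
    """Minimise `fitness` over a box-bounded search space with a basic GWO.

    `bounds` is a list of (low, high) pairs, one per dimension, mirroring the
    search-space table; n_agents and max_iter follow the parameters table.
    """
    rng = np.random.default_rng(seed)
    low, high = np.asarray(bounds, dtype=float).T
    X = rng.uniform(low, high, size=(n_agents, dim))        # initial pack positions

    for t in range(max_iter):
        scores = np.array([fitness(x) for x in X])
        order = np.argsort(scores)
        # Three best agents act as leaders (alpha, beta, delta).
        leaders = [X[order[0]].copy(), X[order[1]].copy(), X[order[2]].copy()]

        a = 2 - 2 * t / max_iter                            # decreases linearly 2 -> 0
        for i in range(n_agents):
            pull = np.zeros(dim)
            for leader in leaders:
                A = 2 * a * rng.random(dim) - a             # exploration/exploitation factor
                C = 2 * rng.random(dim)
                D = np.abs(C * leader - X[i])               # distance to the leader
                pull += leader - A * D
            X[i] = np.clip(pull / 3, low, high)             # average of the three pulls

    scores = np.array([fitness(x) for x in X])
    return X[np.argmin(scores)], float(scores.min())
```

In the paper's setting, each agent position would encode the MGNN design variables listed in the search-space table, and the fitness would be the recognition error of the resulting network.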
Figure 3. Structure of each search agent.
Table of parameters.
| HGA | | FA | | GWO | |
|---|---|---|---|---|---|
| Parameter | Value | Parameter | Value | Parameter | Value |
| Individuals | 10 | Fireflies | 10 | Search agents | 10 |
| Maximum number of generations | 30 | Maximum number of iterations | 30 | Maximum number of iterations | 30 |
Table of values for search space.
| Parameters of MNNs | Minimum | Maximum |
|---|---|---|
| Modules | 1 | 10 |
| Percentage of data for training | 20 | 80 |
| Error goal | 0.000001 | 0.001 |
| Learning algorithm | 1 | 3 |
| Hidden layers | 1 | 10 |
| Neurons per hidden layer | 20 | 400 |
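A search agent can be mapped onto a concrete MGNN configuration using the bounds above. The chromosome-like layout and the MATLAB-style learning-algorithm names below are assumptions for illustration only (the paper indexes the three learning algorithms simply as 1 to 3):

```python
# Bounds taken from the search-space table above.
LEARNING_ALGORITHMS = ("traingd", "traingda", "traingdx")  # assumed mapping for indices 1-3

def decode_agent(agent):
    """Clamp and round a real-valued search agent into MGNN design parameters.

    Assumed layout: [modules, train_pct, error_goal, learning_alg,
    hidden_layers, neurons_1, ..., neurons_10].
    """
    modules = min(10, max(1, round(agent[0])))
    train_pct = min(80.0, max(20.0, agent[1]))             # % of data for training
    error_goal = min(1e-3, max(1e-6, agent[2]))
    algorithm = LEARNING_ALGORITHMS[min(2, max(0, round(agent[3]) - 1))]
    layers = min(10, max(1, round(agent[4])))
    neurons = [min(400, max(20, round(n))) for n in agent[5:5 + layers]]
    return {"modules": modules, "train_pct": train_pct, "error_goal": error_goal,
            "algorithm": algorithm, "hidden_layers": layers, "neurons": neurons}
```

Decoding like this keeps every design variable inside the table's minimum/maximum range regardless of where the optimizer moves the agent.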
Figure 4. Diagram of the proposed method.
Figure 5. Example of selection of data for the training and testing phases.
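The selection step illustrated in Figure 5 can be sketched as a per-person random split driven by the optimised training percentage. This is illustrative only; `split_per_person` and its data layout are hypothetical names, not from the paper:

```python
import random

def split_per_person(images_by_person, train_pct, seed=0):
    """Split each person's images into training/testing sets by percentage."""
    rng = random.Random(seed)
    train, test = {}, {}
    for person, images in images_by_person.items():
        shuffled = list(images)
        rng.shuffle(shuffled)                                # random selection per person
        k = max(1, round(len(shuffled) * train_pct / 100))   # images kept for training
        train[person] = shuffled[:k]
        test[person] = shuffled[k:]
    return train, test
```

Splitting per person (rather than globally) keeps every person represented in both phases, which matters for identification tasks like these.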
Figure 6. Sample of the Ear Recognition Laboratory database from the University of Science & Technology Beijing (USTB).
Figure 7. Sample of the ORL database from the AT&T Laboratories Cambridge.
Figure 8. Sample of the FERET database.
Figure 9. Sample of the iris database.
Figure 10. Sample preprocessing for the databases.
The best 10 results (test #1, ear).
| Trial | Training images | Testing images | Number of hidden layers and number of neurons | Persons per module | Rec. rate | Error |
|---|---|---|---|---|---|---|
| 1 | 80% | 20% | 5 (126, 96, 179, 239, 37) | Module #1 (1 to 12) | 100% | 0 |
| 2 | 69% | 31% | 5 (222, 238, 113, 27, 75) | Module #1 (1 to 5) | 100% | 0 |
| 3 | 66% | 34% | 5 (141, 70, 120, 158, 242) | Module #1 (1 to 34) | 100% | 0 |
| 4 | 74% | 26% | 5 (139, 97, 200, 121, 231) | Module #1 (1 to 6) | 100% | 0 |
| 5 | 63% | 37% | 5 (136, 183, 149, 193, 161) | Module #1 (1 to 68) | 100% | 0 |
Figure 11. Convergence of trial #4.
Figure 12. Alpha, beta, and delta behavior of trial #4.
Figure 13. Obtained errors of recognition (up to 80%, ear).
Comparison of results (test #1, ear).
| Method | Best | Average | Worst |
|---|---|---|---|
| HGA (rec. rate) | 100% | 99.70% | 93.50% |
| HGA (error) | 0 | 0.00303 | 0.0649 |
| FA (rec. rate) | 100% | 99.89% | 98.05% |
| FA (error) | 0 | 0.0011 | 0.0195 |
| Proposed GWO (rec. rate) | 100% | 100% | 100% |
| Proposed GWO (error) | 0 | 0 | 0 |
Figure 14. Average convergence (test #1, ear).
The best 10 results (test #2, ear).
| Trial | Training images | Testing images | Number of hidden layers and number of neurons | Persons per module | Rec. rate | Error |
|---|---|---|---|---|---|---|
| 2 | 43% | 57% | 5 (115, 49, 187, 122, 194) | Module #1 (1 to 9) | 96.75% | 0.0325 |
| 4 | 48% | 52% | 4 (98, 136, 165, 141) | Module #1 (1 to 26) | 96.75% | 0.0325 |
| 7 | 49% | 51% | 5 (201, 84, 169, 113, 131) | Module #1 (1 to 5) | 96.75% | 0.0325 |
| 8 | 39% | 61% | 5 (125, 75, 69, 114, 140) | Module #1 (1 to 11) | 96.75% | 0.0325 |
| 14 | 40% | 60% | 5 (58, 26, 159, 123, 106) | Module #1 (1 to 12) | 96.75% | 0.0325 |
Figure 15. Convergence of trial #2.
Figure 16. Convergence of trial #2.
Figure 17. Obtained errors of recognition (up to 50%, ear).
Comparison of results (test #2, ear).
| Method | Best | Average | Worst |
|---|---|---|---|
| HGA (rec. rate) | 98.05% | 94.82% | 79.65% |
| HGA (error) | 0.01948 | 0.0518 | 0.20346 |
| FA (rec. rate) | 97.40% | 96.82% | 95.45% |
| FA (error) | 0.0260 | 0.0318 | 0.04545 |
| Proposed GWO (rec. rate) | 96.75% | 96.15% | 95.45% |
| Proposed GWO (error) | 0.03247 | 0.03853 | 0.04545 |
Figure 18. Average convergence (test #2, ear).
The results for face database (test #1, ORL).
| Trial | Training images | Testing images | Number of hidden layers and number of neurons | Persons per module | Rec. rate | Error |
|---|---|---|---|---|---|---|
| 1 | 80% | 20% | 5 (109, 109, 69, 74, 210) | Module #1 (1 to 4) | 100% | 0 |
| 2 | 80% | 20% | 5 (52, 188, 138, 154, 71) | Module #1 (1 to 5) | 100% | 0 |
| 3 | 80% | 20% | 5 (158, 67, 80, 49, 124) | Module #1 (1 to 3) | 100% | 0 |
| 4 | 80% | 20% | 5 (39, 55, 21, 84, 210) | Module #1 (1 to 7) | 100% | 0 |
| 5 | 80% | 20% | 5 (75, 156, 197, 128, 233) | Module #1 (1 to 4) | 100% | 0 |
Figure 19. Convergence of trial #5.
Figure 20. Convergence of trial #5.
Figure 21. Obtained recognition rates (test #1, ORL database, comparison 1).
Comparison of results (test #1, ORL).
| Method | Best | Average | Worst |
|---|---|---|---|
| Mendoza et al. | 97.50% | 94.69% | 91.5% |
| Sánchez et al. | 100% | 100% | 100% |
| Sánchez et al. | 100% | 99.27% | 98.61% |
| Proposed GWO | 100% | 100% | 100% |
The results for face database (test #2, ORL).
| Trial | Training images | Testing images | Number of hidden layers and number of neurons | Persons per module | Rec. rate | Error |
|---|---|---|---|---|---|---|
| 1 | 50% | 50% | 5 (139, 149, 64, 49, 69) | Module #1 (1 to 5) | 99% | 0.0100 |
| 2 | 50% | 50% | 5 (141, 99, 172, 88, 81) | Module #1 (1 to 7) | 98.50% | 0.0150 |
| 3 | 50% | 50% | 4 (60, 37, 220, 169) | Module #1 (1 to 2) | 98% | 0.0200 |
| 4 | 50% | 50% | 5 (52, 173, 68, 176, 133) | Module #1 (1 to 3) | 99% | 0.0100 |
| 5 | 50% | 50% | 5 (128, 150, 50, 26, 73) | Module #1 (1 to 2) | 98% | 0.0200 |
Figure 22. Convergence of trial #1.
Figure 23. Convergence of trial #1.
Figure 24. Obtained recognition rates (test #2, ORL database, comparison 2).
Comparison of results (test #2, ORL).
| Method | Best | Average | Worst |
|---|---|---|---|
| Azami et al. | 96.50% | 95.91% | 95.37% |
| Ch'Ng et al. | 96.5% | 94.75% | 94% |
| Sánchez et al. | 99% | 98.30% | 98% |
| Sánchez et al. | 98.43% | 97.59% | 94.55% |
| Proposed GWO | 99% | 98.50% | 98% |
The results for iris database.
| Trial | Training images | Testing images | Number of hidden layers and number of neurons | Persons per module | Rec. rate | Error |
|---|---|---|---|---|---|---|
| 1 | 79% | 21% | 5 (133, 205, 93, 203, 184) | Module #1 (1 to 15) | 99.57% | 0.0043 |
| 2 | 75% | 25% | 5 (97, 66, 149, 117, 144) | Module #1 (1 to 4) | 100% | 0 |
| 6 | 76% | 24% | 4 (73, 210, 138, 49) | Module #1 (1 to 3) | 99.57% | 0.0043 |
| 7 | 78% | 22% | 5 (168, 99, 94, 156, 175) | Module #1 (1 to 4) | 99.57% | 0.0043 |
| 11 | 78% | 22% | 5 (86, 162, 217, 168, 168) | Module #1 (1 to 4) | 100% | 0 |
Figure 25. Convergence of trial #2.
Figure 26. Convergence of trial #2.
Figure 27. Obtained recognition rates (iris database).
Comparison of results (iris).
| Method | Best | Average | Worst |
|---|---|---|---|
| Sánchez and Melin (rec. rate) | 99.68% | 98.68% | 97.40% |
| Sánchez and Melin (error) | 0.0032 | 0.0132 | 0.0260 |
| Sánchez et al. (rec. rate) | 99.13% | 98.22% | 96.59% |
| Sánchez et al. (error) | 0.0087 | 0.0178 | 0.0341 |
| Proposed GWO (rec. rate) | 100% | 99.31% | 98.70% |
| Proposed GWO (error) | 0 | 0.0069 | 0.0130 |
Figure 28. Average convergence (iris).
Figure 29. Average training time.
Figure 30. Average training time.
Databases setup.
| Database | Number of persons | Max. images per person (training) | Max. images per person (testing) | Image size |
|---|---|---|---|---|
| Ear | 77 | 3 | 3 | 132 × 91 |
| ORL | 40 | 9 | 9 | 92 × 112 |
| FERET | 200 | 6 | 6 | 100 × 100 |
| Iris | 77 | 13 | 13 | 21 × 21 |
The summary of results (proposed method).
| Method | Database | Number of images | Best | Average | Worst |
|---|---|---|---|---|---|
| Proposed method | Ear | 3 | 100% | 100% | 100% |
| Proposed method | Ear | 2 | 96.75% | 96.15% | 95.45% |
| Proposed method | ORL | 8 | 100% | 100% | 100% |
| Proposed method | ORL | 5 | 99% | 98.50% | 98% |
| Proposed method | FERET | (up to 80%) | 98% | 92.63% | 88.17% |
| Proposed method | Iris | (up to 80%) | 100% | 99.31% | 98.70% |
Table of comparison of optimized results (ear database).
| Method | Number of images | Best | Average | Worst |
|---|---|---|---|---|
| Sánchez and Melin | 3 | 100% | 96.75% | — |
| Melin et al. | 3 | 100% | 93.82% | 83.11% |
| Sánchez and Melin | 3 | 100% | 99.69% | 93.5% |
| Sánchez et al. | 3 | 100% | 99.89% | 98.05% |
| Proposed method | 3 | 100% | 100% | 100% |
| Sánchez and Melin | 2 | 96.10% | 88.53% | — |
| Sánchez and Melin | 2 | 98.05% | 94.81% | 79.65% |
| Sánchez et al. | 2 | 97.40% | 96.82% | 95.45% |
| Proposed method | 2 | 96.75% | 96.15% | 95.45% |
Table of cross-validation results (ear database).
| Experiment 1 | Experiment 2 | Experiment 3 | Experiment 4 | Average |
|---|---|---|---|---|
| 100% | 100% | 94.81% | 93.51% | 97.07% |
Table of comparison of optimized results (ORL database).
| Method | Images for training | Best | Average | Worst |
|---|---|---|---|---|
| Mendoza et al. | 8 | 97.50% | 94.69% | 91.50% |
| Sánchez et al. | 8 | 100% | 100% | 100% |
| Sánchez et al. | 8 | 100% | 99.27% | 98.61% |
| Proposed method | 8 | 100% | 100% | 100% |
| Azami et al. | 5 | 96.5% | 95.91% | 95.37% |
| Ch'Ng et al. | 5 | 96.5% | 94.75% | 94% |
| Sánchez et al. | 5 | 99% | 98.30% | 98% |
| Sánchez et al. | 5 | 98.43% | 97.59% | 94.55% |
| Proposed method | 5 | 99% | 98.5% | 98% |
Table of cross-validation results (ORL database).
| Experiment 1 | Experiment 2 | Experiment 3 | Experiment 4 | Experiment 5 | Average |
|---|---|---|---|---|---|
| 95.42% | 94.58% | 96.67% | 97.92% | 97.92% | 96.50% |
Table of comparison of optimized results (FERET database).
| Method | Number of persons | Number of images | Recognition rate |
|---|---|---|---|
| Wang et al. | 50 | 7 | 86% |
| Proposed method | 50 | 7 | 98% |
| Wang et al. | 100 | 7 | 79.7% |
| Proposed method | 100 | 7 | 92.33% |
| Wang et al. | 150 | 7 | 79.1% |
| Proposed method | 150 | 7 | 92% |
| Wang et al. | 200 | 7 | 75.7% |
| Proposed method | 200 | 7 | 88.17% |
Table of cross-validation results (FERET database).
| Number of persons | Experiment 1 | Experiment 2 | Experiment 3 | Experiment 4 | Experiment 5 | Average |
|---|---|---|---|---|---|---|
| 50 | 93.33% | 95.33% | 94.00% | 94.67% | 94.67% | 94.40% |
| 100 | 83.67% | 88.33% | 89.00% | 91.33% | 92.00% | 88.87% |
| 150 | 79.78% | 86.44% | 87.78% | 90.22% | 89.33% | 86.71% |
| 200 | 76.17% | 83.00% | 82.83% | 84.50% | 85.83% | 82.47% |
Table of comparison of optimized results (iris database).
| Method | Images for training | Best | Average | Worst |
|---|---|---|---|---|
| Sánchez and Melin | Up to 80% | 99.68% | 98.68% | 97.40% |
| Sánchez et al. | Up to 80% | 99.13% | 98.22% | 96.59% |
| Proposed method | Up to 80% | 100% | 99.31% | 98.70% |
Table of cross-validation results (iris database).
| Experiment 1 | Experiment 2 | Experiment 3 | Experiment 4 | Experiment 5 | Experiment 6 | Average |
|---|---|---|---|---|---|---|
| 98.27% | 99.13% | 98.27% | 96.97% | 97.84% | 96.97% | 97.91% |
Values of ear database (test #1).
| Method | N | Mean | Standard deviation | Standard error of the mean | Estimated difference | t value | P value | Degrees of freedom |
|---|---|---|---|---|---|---|---|---|
| Sánchez and Melin | 30 | 0.0030 | 0.0121 | 0.0022 | 0.003 | 1.38 | 0.1769 | 29 |
| Proposed method | 30 | 0 | 0 | 0 | | | | |
| Sánchez et al. | 30 | 0.00108 | 0.00421 | 0.00077 | 0.001082 | 1.41 | 0.169 | 29 |
| Proposed method | 30 | 0 | 0 | 0 | | | | |
Figure 31. Sample distribution (test #1, ear database).
Values of ORL database (test #1).
| Method | N | Mean | Standard deviation | Standard error of the mean | Estimated difference | t value | P value | Degrees of freedom |
|---|---|---|---|---|---|---|---|---|
| Mendoza et al. | 4 | 94.69 | 2.58 | 1.3 | −5.31 | −4.12 | 0.026 | 3 |
| Proposed method | 4 | 100 | 0 | 0 | | | | |
| Sánchez et al. | 5 | 99.27 | 0.676 | 0.30 | −0.73 | −2.42 | 0.072 | 4 |
| Proposed method | 5 | 100 | 0 | 0 | | | | |
Figure 32. Sample distribution (test #1, ORL database).
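The t values in these significance tables follow from the reported summary statistics. A minimal two-sample t statistic in Welch form is sketched below; the paper does not state the exact test variant used, so treat this as illustrative:

```python
import math

def two_sample_t(mean1, sd1, n1, mean2, sd2, n2):
    """t statistic for comparing two means from summary statistics
    (Welch form: unpooled variances)."""
    se = math.sqrt(sd1 ** 2 / n1 + sd2 ** 2 / n2)   # standard error of the difference
    return (mean1 - mean2) / se
```

For example, for the Mendoza et al. comparison in the ORL test #1 table, `two_sample_t(94.69, 2.58, 4, 100, 0, 4)` gives about −4.12, matching the reported t value.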
Values of FERET database.
| Method | N | Mean | Standard deviation | Standard error of the mean | Estimated difference | t value | P value | Degrees of freedom |
|---|---|---|---|---|---|---|---|---|
| Wang et al. | 4 | 80.13 | 4.29 | 2.1 | −12.50 | −4.24 | 0.00547 | 6 |
| Proposed method | 4 | 92.63 | 4.05 | 2.0 | | | | |
Figure 33. Sample distribution (FERET database).
Values of iris database.
| Method | N | Mean | Standard deviation | Standard error of the mean | Estimated difference | t value | P value | Degrees of freedom |
|---|---|---|---|---|---|---|---|---|
| Sánchez and Melin | 20 | 98.68 | 0.779 | 0.17 | −0.624 | −3.18 | 0.0035 | 29 |
| Proposed method | 20 | 99.30 | 0.407 | 0.091 | | | | |
| Sánchez et al. | 20 | 98.22 | 0.758 | 0.17 | −1.083 | −5.62 | 1.8623 | 38 |
| Proposed method | 20 | 99.30 | 0.407 | 0.091 | | | | |
Figure 34. Sample distribution (iris database).
Values of ear database (test #2).
| Method | N | Mean | Standard deviation | Standard error of the mean | Estimated difference | t value | P value | Degrees of freedom |
|---|---|---|---|---|---|---|---|---|
| Sánchez and Melin | 30 | 0.0518 | 0.0345 | 0.0063 | 0.01328 | 2.09 | 0.045 | 29 |
| Proposed method | 30 | 0.03853 | 0.00449 | 0.00082 | | | | |
| Sánchez et al. | 30 | 0.03182 | 0.00462 | 0.00084 | −0.00671 | −5.70 | 4.1926 | 57 |
| Proposed method | 30 | 0.03853 | 0.00449 | 0.00082 | | | | |
Figure 35. Sample distribution (test #2, ear database).
Values of ORL database (test #2).
| Method | N | Mean | Standard deviation | Standard error of the mean | Estimated difference | t value | P value | Degrees of freedom |
|---|---|---|---|---|---|---|---|---|
| Azami et al. | 5 | 95.91 | 0.409 | 0.18 | −2.590 | −8.96 | 1.9091 | 8 |
| Proposed method | 5 | 98.50 | 0.500 | 0.22 | | | | |
| Ch'Ng et al. | 4 | 94.75 | 1.19 | 0.60 | −3.750 | −5.90 | 0.004 | 3 |
| Proposed method | 5 | 98.50 | 0.500 | 0.22 | | | | |
| Sánchez et al. | 5 | 98.30 | 0.447 | 0.20 | −0.20 | −0.67 | 0.523 | 8 |
| Proposed method | 5 | 98.50 | 0.500 | 0.22 | | | | |
| Sánchez et al. | 5 | 97.59 | 1.71 | 0.76 | −0.94 | −1.15 | 0.314 | 4 |
| Proposed method | 5 | 98.50 | 0.500 | 0.22 | | | | |
Figure 36. Sample distribution (test #2, ORL database).