Ameni Yangui Jammoussi, Sameh Fakhfakh Ghribi, Dorra Sellami Masmoudi.
Abstract
Recently, many classes of objects have become efficiently detectable by means of machine learning techniques. In practice, boosting techniques are among the most widely used machine learning methods, largely because the low false positive rate of the cascade structure offers the possibility of training on different classes of object. Boosting is especially associated with face detection, the most popular sub-problem within object detection. The challenges of an AdaBoost-based face detector include selecting the most relevant features, which serve as weak classifiers, from a very large feature set. In many scenarios, however, selecting features solely by lowering classification error leads to high computational complexity and excessive memory use. In this work, we propose a new method to train an effective detector by discarding redundant weak classifiers while still achieving the pre-determined learning objective. To achieve this, on the one hand, we modify AdaBoost training so that the feature selection process is no longer based on the weak learner's training error, by incorporating a Genetic Algorithm (GA) into the training process. On the other hand, we make use of the Joint Integral Histogram in order to extract more powerful features. Experiments on human faces show that our proposed method requires a smaller number of weak classifiers than the conventional learning algorithm, resulting in higher learning and faster classification rates. Our method thus significantly outperforms state-of-the-art cascade methods in terms of detection rate and false positive rate, and especially in reducing the number of weak classifiers per stage.
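The training modification the abstract describes, selecting each weak classifier with a genetic search rather than by exhaustively minimising the weighted training error, can be sketched roughly as follows. This is an illustrative assumption of how such a GA step could look, not the authors' implementation: the chromosome encoding (feature index, threshold, polarity), the operators, and all parameter values are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def weighted_error(feature_col, threshold, polarity, labels, weights):
    # Decision stump: predict +1 when polarity*feature < polarity*threshold.
    preds = np.where(polarity * feature_col < polarity * threshold, 1, -1)
    return float(np.sum(weights[preds != labels]))

def ga_select_stump(X, labels, weights, pop_size=20, generations=30):
    """Pick one weak classifier (feature, threshold, polarity) with a small
    genetic search instead of scanning every feature for the minimum
    weighted error, as conventional AdaBoost training would."""
    n_feats = X.shape[1]
    # Chromosome: (feature index, threshold, polarity).
    pop = [(int(rng.integers(n_feats)), float(rng.normal()),
            int(rng.choice([-1, 1]))) for _ in range(pop_size)]

    def fitness(c):
        f, t, p = c
        return -weighted_error(X[:, f], t, p, labels, weights)

    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]            # keep the best half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.choice(len(elite), size=2, replace=False)
            # Crossover: mix genes of two elite parents.
            f = elite[a][0] if rng.random() < 0.5 else elite[b][0]
            t = elite[a][1] + float(rng.normal(scale=0.1))  # threshold mutation
            p = elite[b][2]
            # Occasional mutation keeps lost feature/polarity genes reachable.
            if rng.random() < 0.1:
                f = int(rng.integers(n_feats))
            if rng.random() < 0.1:
                p = -p
            children.append((f, t, p))
        pop = elite + children
    return max(pop, key=fitness)
```

In a full detector the selected stump would then be re-weighted exactly as in standard AdaBoost; only the selection step changes.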
Keywords: Adaboost; Face detection; Genetic algorithm; Joint integral histogram
Year: 2014 PMID: 25133086 PMCID: PMC4132457 DOI: 10.1186/2193-1801-3-355
Source DB: PubMed Journal: Springerplus ISSN: 2193-1801
Figure 1 The attentional cascade structure.
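The attentional cascade of Figure 1 can be illustrated with a minimal sketch: each stage is a (score function, threshold) pair, and a window is rejected at the first stage it fails. The stage functions and thresholds below are placeholders, not the trained detector.

```python
def cascade_classify(window, stages):
    """Attentional cascade: a window must pass every stage to be accepted.
    It is rejected at the first failing stage, so most negative windows
    are discarded after only a few cheap evaluations."""
    for stage_score, threshold in stages:
        if stage_score(window) < threshold:
            return False   # early rejection: later stages never run
    return True
```

For example, with two placeholder stages that each read one value of the window, `cascade_classify((0.9, 0.9), stages)` accepts while `(0.1, 0.9)` is rejected by the first stage without ever evaluating the second.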
Figure 2 JIH illustration on a sample face image for two possible combinations: f represents the gray-level pixel values and g represents the LBP image values, and vice versa.
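Figure 2's pairing of a value image f with an index image g (e.g. LBP codes) matches the usual Joint Integral Histogram construction: for each bin of g, the values of f are accumulated in an integral image, so any rectangle's per-bin sums come from four lookups. The sketch below assumes that standard formulation and is not taken from the paper's code.

```python
import numpy as np

def joint_integral_histogram(f, g, n_bins):
    """Build a (h+1, w+1, n_bins) cumulative array: cell (y, x, b) holds the
    sum of f over pixels above-left of (y, x) whose g-value falls in bin b."""
    h, w = f.shape
    jih = np.zeros((h + 1, w + 1, n_bins), dtype=np.float64)
    for y in range(h):
        for x in range(w):
            # Standard 2D prefix-sum recurrence, applied per bin.
            jih[y + 1, x + 1] = jih[y + 1, x] + jih[y, x + 1] - jih[y, x]
            jih[y + 1, x + 1, g[y, x]] += f[y, x]
    return jih

def region_histogram(jih, y0, x0, y1, x1):
    # Per-bin sums of f over rows y0..y1-1, cols x0..x1-1 (four lookups).
    return jih[y1, x1] - jih[y0, x1] - jih[y1, x0] + jih[y0, x0]
```

Swapping the roles of f and g, as the figure's caption suggests, just means calling `joint_integral_histogram(g, f, 256)` instead.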
Figure 3 The optimization-based training process.
Figure 4 Several sample face training images from the CMU PIE database.
Comparison of the DR and FPR between JIH and conventional Haar-like features
| Number of features | Haar-like DR | Haar-like FPR | JIH DR | JIH FPR |
|---|---|---|---|---|
| 10 | 99.17% | 30.4% | 99.58% | 0% |
| 50 | 99.17% | 1.4% | 99.9% | 0% |
Comparison of the performance results between different possible alternatives
| NB | DR | FPR | DR | FPR |
|---|---|---|---|---|
| 10 | 99.37% | 64.8% | 99.37% | 41.6% |
| 20 | 99.17% | 20.4% | 99.17% | 16.8% |
| 50 | 99.17% | 3% | 99.17% | 1.8% |
Figure 5 Comparison of the number of selected features between AdaBoost and our proposed training procedure.
Figure 6 The goal performances are obtained with a small number of generations (42 generations; the final value at which the algorithm converges is not shown).
Figure 7 The goal performances are obtained with a large number of generations (more than 450).
Figure 8 The obtained detection rate through generations for = 10 and = 40.
Figure 9 Example of detection results on the BioID database.