Zahra Sadeghi, Babak Nadjar Araabi, Majid Nili Ahmadabadi.
Abstract
It has been argued that concepts can be perceived at three main levels of abstraction. Generally, in a recognition system, object categories can be viewed at three levels of a taxonomic hierarchy, known as the superordinate, basic, and subordinate levels. For instance, "horse" is a member of the subordinate level, which belongs to the basic level of "animal" and the superordinate level of "natural objects." Our purpose in this study is to investigate visual features at each taxonomic level. We first present a recognition tree that is more general, in terms of inclusiveness, with respect to the visual representation of objects. We then focus on visual feature definition, that is, how objects from the same conceptual category can be visually represented at each taxonomic level. For the first level, we define global features based on frequency patterns to illustrate the visual distinctions between artificial and natural objects. In contrast, our approach for the second level is based on shape descriptors defined through a moment-based representation. Finally, we show how conceptual knowledge can be utilized in visual feature definition to enhance the recognition of subordinate categories.
Year: 2015 PMID: 26185494 PMCID: PMC4491560 DOI: 10.1155/2015/905421
Source DB: PubMed Journal: Comput Intell Neurosci
Levels of abstraction.
| Level of taxonomy | Example |
|---|---|
| The superordinate level | Animal |
| The basic level | Dog |
| The subordinate level | Retriever |
Figure 1: Taxonomic structure of recognition used in this paper. A and P refer to the subcategories of animal and plant, respectively.
Object categories in taxonomic structure.
| Dataset | Superordinate level | Basic level | Subordinate level |
|---|---|---|---|
| Dataset 1 | Natural | Animal | Flamingo, … |
| Dataset 1 | Natural | Plant | Sunflower, … |
| Dataset 1 | Artificial | | Obj1, obj3, … |
| Dataset 2 | Natural | Animal | Bird, … |
| Dataset 2 | Natural | Plant | Bonsai, … |
| Dataset 2 | Artificial | | Balloon, … |
| Dataset 3 | Natural | Animal | Bat, … |
| Dataset 3 | Natural | Plant | Apple, … |
Figure 2: Samples of animal and plant subcategories.
Figure 3: Frequency features for all data. From top to bottom, the vertical axes represent the three dimensions defined in (4) to (6).
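This record does not reproduce equations (4) to (6), so the paper's three frequency dimensions are unknown here. As a hedged stand-in, one common way to build a small global frequency descriptor is to measure spectral energy in three radial frequency bands of the image's 2-D spectrum; the naive DFT, the equal three-way band split, and the energy normalisation below are all illustrative assumptions, not the paper's definitions:

```python
import math

def dft2_magnitude(img):
    """Naive 2-D DFT magnitude of a small grayscale image (list of rows)."""
    h, w = len(img), len(img[0])
    mag = [[0.0] * w for _ in range(h)]
    for u in range(h):
        for v in range(w):
            re = im = 0.0
            for y in range(h):
                for x in range(w):
                    ang = -2.0 * math.pi * (u * y / h + v * x / w)
                    re += img[y][x] * math.cos(ang)
                    im += img[y][x] * math.sin(ang)
            mag[u][v] = math.hypot(re, im)
    return mag

def frequency_features(img):
    """3-D global descriptor: fraction of spectral energy in
    low / mid / high radial frequency bands (an assumed stand-in)."""
    mag = dft2_magnitude(img)
    h, w = len(mag), len(mag[0])
    cy, cx = h / 2.0, w / 2.0
    rmax = math.hypot(cy, cx)
    bands = [0.0, 0.0, 0.0]
    for u in range(h):
        for v in range(w):
            # map the index to a signed frequency (zero frequency at r = 0)
            du = ((u + h // 2) % h) - cy
            dv = ((v + w // 2) % w) - cx
            r = math.hypot(du, dv) / rmax   # normalised radius in [0, 1]
            bands[min(2, int(r * 3))] += mag[u][v] ** 2
    total = sum(bands) or 1.0
    return [b / total for b in bands]
```

A constant image concentrates all energy in the low band, while a fine checkerboard pushes half of it into the high band, which is the kind of contrast such global features exploit.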
Clustering evaluation results.
| Features | Dataset 1 | | | | Dataset 2 | | | |
|---|---|---|---|---|---|---|---|---|
| | Precision | Accuracy | Recall | | Precision | Accuracy | Recall | |
| Frequency features | | | | | | | | |
| Gabor feature | 71.80 | 72.34 | 62.56 | 72.32 | 62.60 | 62.30 | 62.06 | 62.61 |
| C2 features | 75.91 | 84.33 | 74.07 | 76.01 | 51.27 | 51.54 | 51.27 | 51.01 |
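For reference, the three metrics named in the table above follow directly from a binary confusion matrix; the sketch below treats "natural" as the positive class (that choice, and the function name, are assumptions for illustration):

```python
def binary_metrics(y_true, y_pred):
    """Precision, accuracy, and recall for binary labels (1 = positive class)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0  # of predicted positives, how many are right
    recall = tp / (tp + fn) if tp + fn else 0.0     # of true positives, how many are found
    accuracy = (tp + tn) / len(y_true)              # overall fraction correct
    return precision, accuracy, recall
```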
Figure 7: Scatter plot of all data represented by frequency features. Artificial and natural images are shown as green and red circles, respectively.
Figure 8: Precision-recall curve. The results are obtained by applying different threshold values to the result of fuzzy clustering.
Figure 9: Fuzzy membership grades. Each bar shows the membership degree of each data point in the natural fuzzy cluster.
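Figures 8 and 9 rest on fuzzy clustering: each sample receives a graded membership in every cluster, and thresholding the grades at different values traces out the precision-recall curve. A minimal fuzzy c-means sketch for 1-D features is given below; the range-based initialisation, the fuzzifier m = 2, and the fixed iteration count are assumptions, not details from the paper:

```python
def fuzzy_cmeans(points, c=2, m=2.0, iters=50):
    """Minimal 1-D fuzzy c-means. Returns (centres, memberships), where
    memberships[i][k] is the grade of point i in cluster k (rows sum to 1)."""
    # spread the initial centres evenly across the data range
    lo, hi = min(points), max(points)
    centres = [lo + (hi - lo) * k / (c - 1) for k in range(c)]
    u = [[1.0 / c] * c for _ in points]
    for _ in range(iters):
        # membership update: inverse-distance ratios raised to 2/(m-1)
        for i, x in enumerate(points):
            d = [abs(x - v) + 1e-12 for v in centres]
            for k in range(c):
                u[i][k] = 1.0 / sum((d[k] / d[j]) ** (2.0 / (m - 1.0))
                                    for j in range(c))
        # centre update: membership-weighted mean of the points
        for k in range(c):
            w = [u[i][k] ** m for i in range(len(points))]
            centres[k] = sum(wk * x for wk, x in zip(w, points)) / sum(w)
    return centres, u
```

On two well-separated groups the grades end up close to 0 or 1, matching the near-saturated bars of Figure 9; ambiguous samples sit nearer 0.5, which is where the threshold choice of Figure 8 matters.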
Figure 4: Samples of binary images of objects.
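The moment-based shape representation itself is not spelled out in this record. One standard descriptor over binary silhouettes like those in Figure 4 is the set of normalised central moments; the sketch below uses the usual scale-invariant normalisation exponent, but the paper's exact descriptor may differ:

```python
def central_moments(img, order=2):
    """Normalised central moments of a binary image (list of 0/1 rows)."""
    # foreground pixel coordinates
    pts = [(x, y) for y, row in enumerate(img) for x, v in enumerate(row) if v]
    m00 = float(len(pts))                      # area of the silhouette
    xbar = sum(x for x, _ in pts) / m00        # centroid, makes the
    ybar = sum(y for _, y in pts) / m00        # moments translation-invariant
    feats = {}
    for p in range(order + 1):
        for q in range(order + 1):
            if 2 <= p + q <= order:
                mu = sum((x - xbar) ** p * (y - ybar) ** q for x, y in pts)
                # division by m00^(1 + (p+q)/2) gives scale invariance
                feats[(p, q)] = mu / m00 ** (1 + (p + q) / 2.0)
    return feats
```

For a symmetric blob the cross moment (1, 1) vanishes and the two second-order moments agree, so the descriptor mainly reacts to elongation and asymmetry of the silhouette.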
Comparison results of classification on basic conceptual categories. Results are averaged over 10 iterations. Time complexity is averaged over all training samples.
| Features | Dataset 1 | | | | Dataset 3 | | | | #Feature vector dimensions |
|---|---|---|---|---|---|---|---|---|---|
| | Animal class accuracy | Plant class accuracy | Total accuracy (over all test samples) | Avg. processing time per sample | Animal class accuracy | Plant class accuracy | Total accuracy (over all test samples) | Avg. processing time per sample | |
| C2 | 86.55 | 84.33 | 85.27 | 3.56 | 93.50 | 94.50 | | 7.23 | 200 |
| HOG | 82.37 | 80.33 | 81.13 | | 85.66 | 94.66 | 90.16 | | 128 |
| Moment-based method | | | | 0.1465 | 95.16 | | | 0.2902 | |
Figure 5: Eigen matrices associated with (a) the flat space, (b) the conceptual animal subspace, and (c) the conceptual plant subspace.
Figure 6: Categorization accuracy (nt: number of training samples). The total number of eigenvectors is equal to the total number of training samples.
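Figures 5 and 6 point to per-class ("conceptual") eigen-decompositions as the way conceptual knowledge enters subordinate recognition. A hedged sketch of one such construction is given below: each class gets a mean plus its leading principal axis (found by power iteration), and a test vector is assigned to the class whose subspace reconstructs it with the smallest residual. The single-axis simplification and all function names are illustrative assumptions:

```python
def leading_eigvec(cov, iters=200):
    """Power iteration for the dominant eigenvector of a small symmetric matrix."""
    n = len(cov)
    v = [1.0 / (i + 1) for i in range(n)]  # asymmetric start vector
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5 or 1.0
        v = [x / norm for x in w]
    return v

def class_subspace(samples):
    """Mean and leading principal axis of one conceptual class."""
    n, d = len(samples), len(samples[0])
    mean = [sum(s[j] for s in samples) / n for j in range(d)]
    cov = [[sum((s[i] - mean[i]) * (s[j] - mean[j]) for s in samples) / n
            for j in range(d)] for i in range(d)]
    return mean, leading_eigvec(cov)

def residual(x, mean, axis):
    """Distance from x to the 1-D affine subspace mean + span{axis}."""
    centred = [xi - mi for xi, mi in zip(x, mean)]
    t = sum(c * a for c, a in zip(centred, axis))          # projection length
    return sum((c - t * a) ** 2 for c, a in zip(centred, axis)) ** 0.5
```

Fitting one subspace per subordinate category and picking the smallest residual is the nearest-subspace classifier this sketch assumes; using more eigenvectors per class, as Figure 6's axis suggests, generalises the same idea.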