| Literature DB >> 29304512 |
Muhammad Ahmad, Stanislav Protasov, Adil Mehmood Khan, Rasheed Hussain, Asad Masood Khattak, Wajahat Ali Khan.
Abstract
Hyperspectral image classification with a limited number of training samples without loss of accuracy is desirable, as collecting such data is often expensive and time-consuming. However, classifiers trained with limited samples usually end up with a large generalization error. To overcome the said problem, we propose a fuzziness-based active learning framework (FALF), in which we implement the idea of selecting optimal training samples to enhance generalization performance for two different kinds of classifiers, discriminative and generative (e.g. SVM and KNN). The optimal samples are selected by first estimating the boundary of each class and then calculating the fuzziness-based distance between each sample and the estimated class boundaries. Those samples that are at smaller distances from the boundaries and have higher fuzziness are chosen as target candidates for the training set. Through detailed experimentation on three publically available datasets, we showed that when trained with the proposed sample selection framework, both classifiers achieved higher classification accuracy and lower processing time with the small amount of training data as opposed to the case where the training samples were selected randomly. Our experiments demonstrate the effectiveness of our proposed method, which equates favorably with the state-of-the-art methods.Entities:
Year: 2018 PMID: 29304512 PMCID: PMC5756090 DOI: 10.1371/journal.pone.0188996
Source DB: PubMed Journal: PLoS One ISSN: 1932-6203 Impact factor: 3.240
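The selection strategy described in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' exact implementation: the membership matrix `U`, the entropy-style fuzziness measure, and the selection fraction are assumptions for demonstration only.

```python
import numpy as np

def fuzziness(U, eps=1e-12):
    """Per-sample fuzziness of a membership matrix U (n_samples x n_classes).

    Uses the entropy-style measure
        E(x) = -(1/C) * sum_c [ u_c*log(u_c) + (1 - u_c)*log(1 - u_c) ],
    which is 0 for crisp memberships (u_c in {0, 1}) and maximal at u_c = 0.5,
    i.e., for samples nearest the class boundary.
    """
    U = np.clip(U, eps, 1 - eps)
    C = U.shape[1]
    return -(U * np.log(U) + (1 - U) * np.log(1 - U)).sum(axis=1) / C

def select_training_samples(U, fraction=0.25):
    """Pick the fuzziest samples (closest to the class boundaries) for training."""
    f = fuzziness(U)
    k = max(1, int(fraction * len(f)))
    return np.argsort(f)[::-1][:k]  # indices of the k fuzziest samples

# Toy memberships: two confident samples and one ambiguous one.
U = np.array([[0.95, 0.05],
              [0.50, 0.50],
              [0.10, 0.90]])
idx = select_training_samples(U, fraction=1/3)
print(idx)  # the ambiguous sample (index 1) is selected
```

In the paper's setting the memberships would come from a fuzzy classifier's output over unlabeled candidates; here they are hard-coded to show the selection behavior.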
Fig 1. Sample distance from boundary.
Fig 2. Overall classification accuracy of PSVM, SVM, PFKNN, and FKNN on the Indian Pines, Pavia University, and Pavia Centre datasets.
Fig 3. Kappa coefficient of PSVM, SVM, PFKNN, and FKNN on the Indian Pines, Pavia University, and Pavia Centre datasets.
Fig 4. Computational time of PSVM, SVM, PFKNN, and FKNN on the Indian Pines, Pavia University, and Pavia Centre datasets.
Fig 5. Classification maps of Pavia University (PU) for different numbers of training samples (5%, 10%, 15%, 20%, and 25%) used to train FKNN, SVM, PFKNN, and PSVM.
Fig 6. Classification maps of Indian Pines (IP) for different numbers of training samples (5%, 10%, 15%, 20%, and 25%) used to train FKNN, SVM, PFKNN, and PSVM.
Fig 7. Classification maps of Pavia Centre (PC) for different numbers of training samples (5%, 10%, 15%, 20%, and 25%) used to train PFKNN and PSVM.
Indian Pines.
| Classifier | PPV | FDR | FOR | LRN |
|---|---|---|---|---|
| | 0.47158 | 0.52841 | 0.02697 | 0.37514 |
| | 0.74105 | 0.25894 | 0.01023 | 0.10667 |
| | 0.65470 | 0.34529 | 0.02046 | 0.30210 |
| | 0.97889 | 0.02110 | 0.00184 | 0.03586 |
| | 0.52308 | 0.47691 | 0.02380 | 0.40514 |
| | 0.83876 | 0.16123 | 0.00283 | 0.05639 |
| | 0.72716 | 0.27283 | 0.01560 | 0.22258 |
| | 0.98987 | 0.01012 | 0.00079 | 0.01493 |
| | 0.55085 | 0.44914 | 0.02244 | 0.36847 |
| | 0.85093 | 0.14906 | 0.00190 | 0.03474 |
| | 0.75154 | 0.24845 | 0.01389 | 0.22821 |
| | 0.99135 | 0.00864 | 0.00067 | 0.01559 |
| | 0.56465 | 0.43534 | 0.02129 | 0.24515 |
| | 0.98529 | 0.01470 | 0.00120 | 0.01934 |
| | 0.79040 | 0.20959 | 0.01209 | 0.19190 |
| | 0.99195 | 0.00804 | 0.00057 | 0.01339 |
| | 0.59093 | 0.40906 | 0.02075 | 0.26579 |
| | 0.98753 | 0.01246 | 0.00095 | 0.01565 |
| | 0.80667 | 0.19332 | 0.01131 | 0.17088 |
| | 0.99345 | 0.00654 | 0.00040 | 0.00821 |
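The columns of these tables can be derived from per-class binary confusion counts. A minimal sketch, using the usual definitions PPV = TP/(TP+FP), FDR = 1 − PPV, and FOR = FN/(FN+TN); reading LRN as the negative likelihood ratio FNR/TNR is an assumption about the paper's abbreviation, and the counts below are illustrative only:

```python
def table_metrics(tp, fp, fn, tn):
    """Derive the table's columns from binary confusion counts.

    PPV = TP / (TP + FP)           positive predictive value (precision)
    FDR = FP / (TP + FP) = 1 - PPV false discovery rate
    FOR = FN / (FN + TN)           false omission rate
    LRN = FNR / TNR                negative likelihood ratio (assumed meaning)
    """
    ppv = tp / (tp + fp)
    fdr = fp / (tp + fp)
    for_ = fn / (fn + tn)
    fnr = fn / (fn + tp)  # miss rate
    tnr = tn / (tn + fp)  # specificity
    lrn = fnr / tnr
    return ppv, fdr, for_, lrn

# Illustrative counts for one class: 90 hits, 10 false alarms, 5 misses.
print(table_metrics(tp=90, fp=10, fn=5, tn=895))
```

Lower FDR, FOR, and LRN all indicate a better classifier, which matches the trend toward PSVM's rows in the tables.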
Pavia University.
| Classifier | PPV | FDR | FOR | LRN |
|---|---|---|---|---|
| | 0.80161 | 0.19838 | 0.02625 | 0.16163 |
| | 0.81328 | 0.18671 | 0.01236 | 0.14526 |
| | 0.87181 | 0.12818 | 0.01276 | 0.11241 |
| | 0.93518 | 0.06481 | 0.00320 | 0.04772 |
| | 0.83173 | 0.16826 | 0.02161 | 0.14281 |
| | 0.88934 | 0.11065 | 0.00642 | 0.08669 |
| | 0.89917 | 0.10082 | 0.01050 | 0.09875 |
| | 0.96010 | 0.03989 | 0.00216 | 0.02670 |
| | 0.82923 | 0.17076 | 0.02237 | 0.12797 |
| | 0.90929 | 0.09070 | 0.00512 | 0.05990 |
| | 0.90132 | 0.09867 | 0.00992 | 0.09650 |
| | 0.96912 | 0.03087 | 0.00149 | 0.02035 |
| | 0.83511 | 0.16488 | 0.02157 | 0.12634 |
| | 0.93043 | 0.06956 | 0.00396 | 0.03985 |
| | 0.90577 | 0.09422 | 0.00919 | 0.09067 |
| | 0.97383 | 0.02616 | 0.00109 | 0.01718 |
| | 0.84384 | 0.15615 | 0.02047 | 0.11798 |
| | 0.93864 | 0.06135 | 0.00349 | 0.03499 |
| | 0.91339 | 0.08660 | 0.00843 | 0.08203 |
| | 0.97341 | 0.02658 | 0.00098 | 0.01803 |
Pavia Centre.
| Classifier | PPV | FDR | FOR | LRN |
|---|---|---|---|---|
| | 0.92469 | 0.07530 | 0.00297 | 0.08148 |
| | 0.94539 | 0.05460 | 0.00187 | 0.05048 |
| | 0.93624 | 0.06375 | 0.00261 | 0.07247 |
| | 0.96135 | 0.03864 | 0.00138 | 0.03626 |
| | 0.94072 | 0.05927 | 0.00237 | 0.06618 |
| | 0.96455 | 0.03544 | 0.00128 | 0.03479 |
| | 0.94380 | 0.05619 | 0.00225 | 0.06276 |
| | 0.96640 | 0.03359 | 0.00120 | 0.03250 |
| | 0.94339 | 0.05660 | 0.00216 | 0.05917 |
| | 0.96960 | 0.03039 | 0.00113 | 0.03117 |
Fig 8. Per-class sensitivity of PFKNN and PSVM on the Indian Pines, Pavia University, and Pavia Centre datasets.
Fig 9. Per-class specificity of PFKNN and PSVM on the Indian Pines, Pavia University, and Pavia Centre datasets.
Indian Pines dataset.
| Technique | Overall | Kappa (κ) |
|---|---|---|
| SF | 78.14% | 75.17% |
| SS | 78.78% | 71.51% |
| SFS | 82.77% | 80.55% |
| MLL | ||
| MLL-Seg | ||
| MLR-RS | 75.01% | 71.49% |
| MLR-MI | 72.14% | 68.27% |
| MLR-BT | 75.75% | 72.27% |
| MLR-MBT | 75.73% | 72.22% |
| OS | 81.68% | 79.25% |
| RS | 81.54% | 79.89% |
| MPM | 85.42% | 83.31% |
| LBP | ||
| RS | 85.33% | 83.26% |
| MBT | ||
| BT | ||
| MI | 87.02% | 85.23% |
| LORSAL | 82.60% | 80.14% |
| FKNN | 67.62% | 60.57% |
| SVM | 78.51% | 75.44% |
| PFKNN | ||
| PSVM | ||
1. Adseg-AddFeat (SF),
2. Adseg-AddSamp (SS),
3. Adseg-AddFeat + AddSamp (SFS),
4. Multilevel Logistic (MLL),
5. Multilevel Logistic over Segmentation Maps (MLL-Seg),
6. Multinomial Logistic Regression for Random Selection (MLR-RS),
7. Multinomial Logistic Regression for Mutual Information (MLR-MI),
8. Multinomial Logistic Regression for Breaking Ties (MLR-BT),
9. Multinomial Logistic Regression for Modified Breaking Ties (MLR-MBT),
10. Over Segmentation Maps (OS),
11. Redefined Segmentation Maps (RS),
12. Maximum Posteriori Marginal (MPM),
13. Maximum Posteriori Marginal based Loopy Belief Propagation (LBP),
14. Maximum Posteriori Marginal and Loopy Belief Propagation based Random Selection (RS),
15. Maximum Posteriori Marginal and Loopy Belief Propagation based Modified Breaking Ties (MBT),
16. Maximum Posteriori Marginal and Loopy Belief Propagation based Breaking Ties (BT),
17. Maximum Posteriori Marginal and Loopy Belief Propagation based Mutual Information (MI),
18. Logistic Regression via Variable Splitting and Augmented Lagrangian Algorithm (LORSAL),
19. Random Selection (FKNN),
20. Random Selection (SVM),
21. Hardly predicted (PFKNN), and
22. Hardly predicted (PSVM).
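The overall accuracy and kappa columns in the comparison tables follow the standard definitions from a confusion matrix: overall accuracy is the observed agreement, and Cohen's kappa corrects it for chance agreement. A minimal sketch with an illustrative (not paper-derived) confusion matrix:

```python
import numpy as np

def overall_accuracy_and_kappa(cm):
    """Overall accuracy and Cohen's kappa from a confusion matrix,
    where cm[i, j] = number of samples of true class i predicted as class j."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n  # observed agreement = overall accuracy
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2  # chance agreement
    return po, (po - pe) / (1 - pe)

# Illustrative 3-class confusion matrix.
cm = [[50, 2, 3],
      [5, 40, 5],
      [2, 3, 45]]
oa, kappa = overall_accuracy_and_kappa(cm)
print(f"OA = {oa:.2%}, kappa = {kappa:.2%}")
```

Kappa is always at or below overall accuracy, which is why the kappa column trails the overall column throughout the tables.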
Pavia University dataset.
| Technique | Overall | Kappa (κ) |
|---|---|---|
| SF | 90.71% | 88.05% |
| SS | 86.58% | 82.73% |
| SFS | ||
| MLL | 85.57% | 81.80% |
| MLL-Seg | 85.78% | 82.05% |
| MLR-RS | 86.61% | 82.49% |
| MLR-MI | 85.88% | 81.50% |
| MLR-BT | 85.63% | 81.21% |
| MLR-MBT | 85.24% | 80.70% |
| OS | ||
| RS | ||
| MPM | 85.78% | 82.05% |
| LBP | | |
| RS | ||
| MBT | ||
| BT | ||
| MI | ||
| LORSAL | ||
| FKNN | 86.26% | 81.38% |
| SVM | ||
| PFKNN | ||
| PSVM | ||
(Abbreviations as defined in the legend of the Indian Pines table above.)