| Literature DB >> 35811635 |
Abstract
In remote sensing data processing, land cover classification on decimeter-level data is a well-studied yet challenging problem. Most existing works use orthographic photographs (orthophotos) together with their accompanying digital surface models (DSMs). Urban land cover classification plays a significant role in remote sensing, supporting applications such as environmental protection, sustainable development, and resource management and planning. Much of the novelty in this area lies in extracting features from high-resolution satellite images for use in the classification process. However, it is well known in the machine learning literature that some extracted features are irrelevant to classification and have a negative effect, or none at all, on its accuracy. In this work, a genetic algorithm-based feature selection approach is used to enhance the performance of urban land cover classification. Neural network (NN) and random forest (RF) classifiers were used to evaluate the proposed approach on a recent urban land cover dataset with nine classes. Experimental results show that the proposed approach achieved better performance with the RF classifier using only 27% of the features. The random forest achieved the highest accuracy of 84.27%; it is concluded that RF is an appropriate algorithm for land cover classification.
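As a concrete illustration of the wrapper idea in the abstract, each candidate solution can be encoded as a binary mask over the features, with fitness given by a classifier's accuracy on the masked subset. The snippet below is a minimal sketch; the `RELEVANT` set and `subset_fitness` scoring are hypothetical stand-ins for the paper's actual WEKA-based evaluation, shown only to make the encoding concrete.

```python
import random

N_FEATURES = 147  # size of the urban land cover feature set

# Hypothetical stand-in for classifier accuracy: the real fitness would train
# and score an RF or NN on the columns the mask keeps. Here a fixed "relevant"
# subset is rewarded and every irrelevant column kept is penalized.
RELEVANT = set(range(0, N_FEATURES, 4))

def subset_fitness(mask):
    """Score a binary feature mask (higher is better)."""
    selected = {i for i, bit in enumerate(mask) if bit}
    return len(selected & RELEVANT) - 0.5 * len(selected - RELEVANT)

random.seed(7)
mask = [random.randint(0, 1) for _ in range(N_FEATURES)]  # one chromosome
all_ones = [1] * N_FEATURES                               # "keep everything"
```

Keeping every feature scores worse than a mask concentrated on the relevant subset, which is exactly the behavior that motivates feature selection.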
Year: 2022 PMID: 35811635 PMCID: PMC9262539 DOI: 10.1155/2022/5190193
Source DB: PubMed Journal: Appl Bionics Biomech ISSN: 1176-2322 Impact factor: 1.664
Figure 1. The proposed methodology.
Figure 2. Binary encoding of the feature selection problem.
Figure 3. Generic genetic algorithm.
Figure 4. Neural network classifier.
Figure 5. Random forest classifier [37].
WEKA genetic search parameters.
| Parameter | Value |
|---|---|
| Population size | 20 |
| Crossover probability | 0.6 |
| Mutation probability | 0.033 |
| Max. generations | 20 |
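The parameters above can be plugged into a generic genetic algorithm loop. The sketch below is a minimal, self-contained version with tournament selection, one-point crossover, bit-flip mutation, and elitism; the OneMax-style `fitness` is a toy stand-in for classifier accuracy, and the selection/elitism details are illustrative assumptions rather than WEKA's exact `GeneticSearch` internals.

```python
import random

random.seed(42)

N_FEATURES = 20          # small chromosome for illustration (paper: 147)
POP_SIZE = 20            # population size (table value)
P_CROSSOVER = 0.6        # crossover probability (table value)
P_MUTATION = 0.033       # per-gene mutation probability (table value)
MAX_GENERATIONS = 20     # max. generations (table value)

def fitness(chrom):
    """Toy fitness (count of selected bits) standing in for accuracy."""
    return sum(chrom)

def tournament(pop, k=2):
    """Pick the fitter of k randomly drawn individuals."""
    return max(random.sample(pop, k), key=fitness)

def crossover(a, b):
    """One-point crossover, applied with probability P_CROSSOVER."""
    if random.random() < P_CROSSOVER:
        cut = random.randrange(1, len(a))
        return a[:cut] + b[cut:], b[:cut] + a[cut:]
    return a[:], b[:]

def mutate(chrom):
    """Flip each gene independently with probability P_MUTATION."""
    return [1 - g if random.random() < P_MUTATION else g for g in chrom]

population = [[random.randint(0, 1) for _ in range(N_FEATURES)]
              for _ in range(POP_SIZE)]
initial_best = max(fitness(c) for c in population)

for _ in range(MAX_GENERATIONS):
    elite = max(population, key=fitness)      # elitism: carry best forward
    offspring = [elite]
    while len(offspring) < POP_SIZE:
        c1, c2 = crossover(tournament(population), tournament(population))
        offspring += [mutate(c1), mutate(c2)]
    population = offspring[:POP_SIZE]         # elite at index 0 survives trim

final_best = max(fitness(c) for c in population)
```

Because the unmutated elite is copied into every generation, the best fitness never decreases across generations.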
Profiles of the selected features for different subset evaluation methods.
| Methods | No. of selected features | Reduced feature sets |
|---|---|---|
| CFS | 40 | 2, 3, 6, 8, 10, 11, 16, 20, 22, 30, 31, 33, 34, 36, 41, 47, 50, 51, 54, 57, 59, 62, 65, 68, 70, 72, 73, 76, 83, 84, 86, 89, 93, 94, 110, 118, 134, 136, 146, 147 |
| Wrapper (J48 classifier) | 63 | 2, 3, 11, 12, 15, 16, 20, 21, 22, 23, 24, 27, 29, 31, 32, 35, 39, 40, 41, 43, 44, 46, 50, 51, 53, 54, 55, 58, 60, 67, 69, 71, 77, 89, 92, 96, 97, 98, 99, 100, 105, 109, 110, 112, 113, 115, 116, 117, 118, 119, 121, 123, 126, 128, 132, 133, 136, 137, 138, 142, 143, 145, 147 |
| Wrapper (zero-R classifier) | 13 | 6, 25, 36, 57, 72, 86, 89, 110, 112, 118, 136, 146, 147 |
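Once a reduced set has been chosen, classification operates only on those columns. A small sketch of applying one of the published index sets to a sample, assuming the indices are 1-based positions in the 147-feature vector (the dummy `sample` below is illustrative, not real dataset values):

```python
# Feature indices selected by the wrapper with the zero-R classifier (1-based,
# taken from the table above).
ZERO_R_SET = [6, 25, 36, 57, 72, 86, 89, 110, 112, 118, 136, 146, 147]

def reduce_row(row, selected):
    """Keep only the selected (1-based) feature positions of one sample."""
    return [row[i - 1] for i in selected]

sample = list(range(1, 148))          # dummy sample: value i at position i
reduced = reduce_row(sample, ZERO_R_SET)
```

Training time drops accordingly, since each classifier now sees 13 columns instead of 147.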
Figure 6. Time to build classification models for different numbers of features using NN and RF classifiers.
Classification accuracy for NN and RF classifiers and different feature sets.
| Dataset | NN classifier accuracy | RF classifier accuracy |
|---|---|---|
| Original (147 features) | 83.63% | 85.40% |
| CFS (40 features) | 83.43% | 86.98% |
| Wrapper, J48 classifier (63 features) | 83.43% | 85.01% |
| Wrapper, zero-R classifier (13 features) | 71.01% | 77.91% |
Precision, recall, and F1-score for the original and the best reduced dataset using RF classifier.
| Class | Precision (original) | Precision (CFS) | Recall (original) | Recall (CFS) | F1-score (original) | F1-score (CFS) |
|---|---|---|---|---|---|---|
| Concrete | 0.875 | 0.878 | 0.828 | 0.849 | 0.851 | 0.863 |
| Shadow | 0.875 | 0.930 | 0.933 | 0.889 | 0.903 | 0.909 |
| Tree | 0.820 | 0.880 | 0.920 | 0.910 | 0.868 | 0.895 |
| Asphalt | 0.909 | 0.820 | 0.889 | 0.911 | 0.899 | 0.863 |
| Building | 0.810 | 0.825 | 0.876 | 0.876 | 0.842 | 0.850 |
| Grass | 0.892 | 0.900 | 0.795 | 0.867 | 0.841 | 0.883 |
| Pool | 1.000 | 1.000 | 0.714 | 0.786 | 0.833 | 0.880 |
| Car | 0.818 | 0.895 | 0.857 | 0.810 | 0.837 | 0.850 |
| Soil | 0.813 | 0.789 | 0.650 | 0.750 | 0.717 | 0.769 |
| W. avg. | 0.857 | 0.872 | 0.854 | 0.870 | 0.829 | 0.870 |
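The F1-scores in the table are the harmonic mean of precision and recall; for example, the Concrete row on the original dataset gives F1 = 2·0.875·0.828 / (0.875 + 0.828) ≈ 0.851. A one-function sketch of that computation:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Per-class values from the table above (original dataset).
concrete_f1 = f1_score(0.875, 0.828)   # table reports 0.851
shadow_f1 = f1_score(0.875, 0.933)     # table reports 0.903
```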