Mariana Belgiu¹, Lucian Drăguţ².
Abstract
Although multiresolution segmentation (MRS) is a powerful technique for dealing with very high resolution imagery, some of the image objects that it generates do not match the geometries of the target objects, which reduces the classification accuracy. MRS can, however, be guided to produce results that approach the desired object geometry using either supervised or unsupervised approaches. Although some studies have suggested that a supervised approach is preferable, there has been no comparative evaluation of these two approaches. Therefore, in this study, we have compared supervised and unsupervised approaches to MRS. One supervised and two unsupervised segmentation methods were tested on three areas using QuickBird and WorldView-2 satellite imagery. The results were assessed using both segmentation evaluation methods and an accuracy assessment of the resulting building classifications. Thus, differences in the geometries of the image objects and in the potential to achieve satisfactory thematic accuracies were evaluated. The two approaches yielded remarkably similar classification results, with overall accuracies ranging from 82% to 86%. The performance of one of the unsupervised methods was unexpectedly similar to that of the supervised method; they identified almost identical scale parameters as being optimal for segmenting buildings, resulting in very similar geometries for the resulting image objects. The second unsupervised method produced very different image objects from the supervised method, but their classification accuracies were still very similar. The latter result was unexpected because, contrary to previously published findings, it suggests a high degree of independence between the segmentation results and classification accuracy. The results of this study have two important implications. 
The first is that object-based image analysis can be automated without sacrificing classification accuracy. The second is that the previously accepted idea that classification depends on segmentation is challenged by our unexpected results, casting doubt on the value of pursuing 'optimal segmentation'. Our results suggest instead that, as long as under-segmentation remains at acceptable levels, imperfections in segmentation can be tolerated and a high level of classification accuracy can still be achieved.
Keywords: Buildings; OBIA; OpenStreetMap; Random forest classifier; Supervised segmentation; Unsupervised segmentation
Year: 2014 PMID: 25284960 PMCID: PMC4183749 DOI: 10.1016/j.isprsjprs.2014.07.002
Source DB: PubMed Journal: ISPRS J Photogramm Remote Sens ISSN: 0924-2716 Impact factor: 8.979
Summary of the three test areas and characteristics of the corresponding satellite imagery.
| Test area | Imagery | Spatial resolution (m) | Location | Dimensions (pixels) | Band composition |
|---|---|---|---|---|---|
| A | QuickBird | 0.6 | Salzburg city – downtown area | 3300 × 3300 | Blue, green, red, NIR |
| B | WorldView-2 | 0.5 | Salzburg city – industrial area | 3426 × 3211 | Coastal blue, blue, green, yellow, red, red-edge, NIR1, NIR2 |
| C | WorldView-2 | 0.5 | Salzburg city – downtown area | 4282 × 3875 | As above |
Fig. 1 Location of the test areas in Salzburg, Austria. Images are displayed as a true color composition (RGB). (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)
Metrics used for the evaluation of segmentation.
| Metric | Formula | Explanation | Authors |
|---|---|---|---|
| Over-segmentation (OSeg) | – | Range [0, 1]; OSeg = 0 → perfect segmentation | – |
| Under-segmentation (USeg) | – | Range [0, 1]; USeg = 0 → perfect segmentation | – |
| Root mean square (Dij) | – | Range [0, 1]; Dij = 0 → perfect match | – |
| Area fit index (AFI) | – | AFI = 0.0 → perfect overlap | – |
| Quality rate (Qr) | – | Range [0, 1]; Qr = 1 → perfect match | – |
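The formulas in the table above did not survive extraction. For reference, the forms most commonly used for these metrics in the segmentation-evaluation literature (e.g., Clinton et al., 2010, for OSeg, USeg and D; Lucieer and Stein for AFI) are given below; these are the standard variants, not recovered from the source, so the exact definitions the paper used may differ. Here $X_i$ is a reference object, $Y_j$ its corresponding segment, $Y_{max}$ the largest segment overlapping $X_i$, and $|\cdot|$ denotes area:

$$\mathrm{OSeg} = 1 - \frac{|X_i \cap Y_j|}{|X_i|}, \qquad \mathrm{USeg} = 1 - \frac{|X_i \cap Y_j|}{|Y_j|}, \qquad D_{ij} = \sqrt{\frac{\mathrm{OSeg}^2 + \mathrm{USeg}^2}{2}}$$

$$\mathrm{AFI} = \frac{|X_i| - |Y_{max}|}{|X_i|}, \qquad Q_r = \frac{|X_i \cap Y_j|}{|X_i \cup Y_j|}$$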
Summary of the reference data: number of training samples used to train the RF classifier and number of validation samples used to assess the classification accuracy for the "buildings" class (BC) and "other" classes (OC).
| Test area | Imagery | Training data (BC) | Training data (OC) | Validation data (BC) | Validation data (OC) |
|---|---|---|---|---|---|
| A | QuickBird | 128 | 164 | 85 | 85 |
| B | WorldView-2 | 107 | 104 | 85 | 85 |
| C | WorldView-2 | 130 | 134 | 85 | 85 |
Overview of optimal scale parameters (SPs) estimated using the SAA and ESP2 tools.
| Method | A: small buildings | A: large buildings | B: small buildings | B: large buildings | C: small buildings | C: large buildings |
|---|---|---|---|---|---|---|
| SAA | 110 | 491 | 150 | 341 | 170 | 490 |
| ESP2 | 133 | 491 | 152 | 400 | 186 | 501 |
Number of image objects obtained for the three test areas using each of the three approaches.
| Method | Test area A | Test area B | Test area C |
|---|---|---|---|
| SAA | 9083 | 5501 | 7060 |
| ESP2 | 6549 | 5398 | 6030 |
| SOP | 3918 | 3848 | 6491 |
Segmentation evaluation metrics. Building objects generated with the SAA tool were used as reference data for evaluating the building objects generated by the ESP2 and SOP tools. Detailed explanations of the segmentation evaluation metrics are provided in Table 2.
| Comparison | No. ref | AFI | Dij | Missing rate | No. of misses | OSeg | Overlap (m²) | Qr | USeg |
|---|---|---|---|---|---|---|---|---|---|
| Area A SAA vs. ESP2 | 4115 | 0.13 | 0.10 | 0.016 | 689 | 0.14 | 58,580,625 | 0.84 | 0.01 |
| Area A SAA vs. SOP | 4115 | 0.64 | 0.46 | 0.63 | 2600 | 0.66 | 23,324,875 | 0.33 | 0.05 |
| Area B SAA vs. ESP2 | 3142 | 0.02 | 0.01 | 0.01 | 40 | 0.02 | 83,696,550 | 0.97 | 0.0004 |
| Area B SAA vs. SOP | 3142 | 0.52 | 0.39 | 0.53 | 1693 | 0.54 | 38,570,675 | 0.43 | 0.062 |
| Area C SAA vs. ESP2 | 3976 | 0.09 | 0.07 | 0.06 | 239 | 0.10 | 137,576,800 | 0.88 | 0.007 |
| Area C SAA vs. SOP | 3976 | 0.53 | 0.39 | 0.42 | 1681 | 0.55 | 69,163,825 | 0.44 | 0.039 |
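As an illustration of how per-object-pair metrics of this kind can be computed, the following is a minimal sketch using the commonly cited Clinton et al. formulations. The area values are made up for the example; this is not the paper's implementation or data:

```python
import math

def segmentation_metrics(ref_area, seg_area, overlap_area):
    """Per-object-pair segmentation goodness metrics.

    ref_area     -- area of the reference object X_i
    seg_area     -- area of the corresponding segment Y_j
    overlap_area -- area of their intersection X_i ∩ Y_j
    """
    oseg = 1 - overlap_area / ref_area      # 0 = perfect segmentation
    useg = 1 - overlap_area / seg_area      # 0 = perfect segmentation
    d = math.sqrt((oseg ** 2 + useg ** 2) / 2)   # combined RMS distance
    union_area = ref_area + seg_area - overlap_area
    qr = overlap_area / union_area          # 1 = perfect match
    return {"OSeg": oseg, "USeg": useg, "Dij": d, "Qr": qr}

# Hypothetical building: 100 m² reference, 90 m² segment, 85 m² overlap
m = segmentation_metrics(100.0, 90.0, 85.0)
print({k: round(v, 3) for k, v in m.items()})
```

In practice the three input areas would come from intersecting reference and segment polygons in a GIS library; the metrics themselves reduce to the three areas, as shown.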
Fig. 3 Building classification results for test area A.
Fig. 4 Building classification results for test area B.
Fig. 5 Building classification results for test area C.
Overall Accuracies (OA) and Kappa coefficients (Kappa) yielded by the SAA, ESP2 and SOP methods for the three evaluated test areas.
| Metric | A: SAA | A: ESP2 | A: SOP | B: SAA | B: ESP2 | B: SOP | C: SAA | C: ESP2 | C: SOP |
|---|---|---|---|---|---|---|---|---|---|
| OA (%) | 83.5 | 83.5 | 84.1 | 84.1 | 85.2 | 82.3 | 84.1 | 86.4 | 86.4 |
| Kappa | 0.67 | 0.67 | 0.68 | 0.68 | 0.70 | 0.64 | 0.68 | 0.72 | 0.72 |
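Overall accuracy and the Kappa coefficient are both derived from the classification confusion matrix. A minimal sketch follows; the 2 × 2 matrix is hypothetical (chosen only to land in a similar accuracy range, not taken from the paper), and `accuracy_metrics` is an illustrative helper:

```python
def accuracy_metrics(cm):
    """Overall accuracy and Cohen's kappa from a square confusion matrix,
    where cm[i][j] = number of reference-class-i samples assigned to class j."""
    k = len(cm)
    n = sum(sum(row) for row in cm)
    diag = sum(cm[i][i] for i in range(k))
    oa = diag / n                                   # observed agreement
    # Expected chance agreement from row/column marginals
    pe = sum(sum(cm[i]) * sum(row[i] for row in cm) for i in range(k)) / n ** 2
    kappa = (oa - pe) / (1 - pe)
    return oa, kappa

# Hypothetical matrix: buildings (BC) vs. other (OC), 85 validation samples each
cm = [[75, 10],   # reference BC
      [14, 71]]   # reference OC
oa, kappa = accuracy_metrics(cm)
print(round(oa, 3), round(kappa, 3))   # → 0.859 0.718
```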
Fig. 2 Segmentation results for a subset of test area B. (A) SAA; (B) ESP2; (C) SOP (true color composition). (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)