Emilio Guirado1,2, Javier Blanco-Sacristán3, Emilio Rodríguez-Caballero4,5, Siham Tabik6, Domingo Alcaraz-Segura7,8, Jaime Martínez-Valderrama1, Javier Cabello2,9.
Abstract
Vegetation in drylands generally appears scattered. Its structure, composition, and spatial patterns are key controls of biotic interactions and of water and nutrient cycles. Applying segmentation methods to very high-resolution images for monitoring changes in vegetation cover can provide relevant information for dryland conservation ecology. For this reason, improving segmentation methods and understanding the effect of spatial resolution on segmentation results are key to improving dryland vegetation monitoring. We explored and analyzed the accuracy of Object-Based Image Analysis (OBIA), Mask Region-based Convolutional Neural Networks (Mask R-CNN), and the fusion of both methods in the segmentation of scattered vegetation in a dryland ecosystem. As a case study, we mapped Ziziphus lotus, the dominant shrub of a habitat of conservation priority in one of the driest areas of Europe. Our results show for the first time that fusing the outputs of OBIA and Mask R-CNN increases the accuracy of scattered-shrub segmentation by up to 25% compared with either method alone. Hence, by fusing OBIA and Mask R-CNN on very high-resolution images, the improved segmentation accuracy of vegetation mapping would lead to more precise and sensitive monitoring of changes in biodiversity and ecosystem services in drylands.
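The fusion idea can be illustrated with a minimal sketch: combine two binary segmentation masks pixel-wise and score each result against ground truth with intersection-over-union (IoU). The union rule and the toy masks below are assumptions for illustration only; the abstract does not specify the authors' actual fusion procedure.

```python
import numpy as np

def iou(pred, truth):
    """Intersection-over-union between two binary masks."""
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union if union else 1.0

def fuse_masks(mask_obia, mask_rcnn):
    """Fuse two binary masks by pixel-wise union.
    (One plausible fusion rule, assumed for illustration.)"""
    return np.logical_or(mask_obia, mask_rcnn)

# Toy example: each method detects only part of a shrub.
truth = np.array([[0, 1, 1], [0, 1, 1], [0, 0, 0]], dtype=bool)
obia  = np.array([[0, 1, 0], [0, 1, 0], [0, 0, 0]], dtype=bool)  # misses right half
rcnn  = np.array([[0, 0, 1], [0, 0, 1], [0, 0, 0]], dtype=bool)  # misses left half
fused = fuse_masks(obia, rcnn)
print(iou(obia, truth), iou(rcnn, truth), iou(fused, truth))  # → 0.5 0.5 1.0
```

In this toy case the fused mask recovers the full shrub footprint that each method only partially segments, mirroring the complementarity the abstract reports between OBIA and Mask R-CNN.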
Keywords: deep-learning; fusion; mask R-CNN; object-based; optical sensors; scattered vegetation; very high-resolution
Year: 2021 PMID: 33466513 PMCID: PMC7796453 DOI: 10.3390/s21010320
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576