
Robust Building Extraction for High Spatial Resolution Remote Sensing Images with Self-Attention Network.

Dengji Zhou1,2, Guizhou Wang1, Guojin He1, Tengfei Long1, Ranyu Yin1,2, Zhaoming Zhang1, Sibao Chen3, Bin Luo3.   

Abstract

Building extraction from high spatial resolution remote sensing images is a hot topic in remote sensing applications and computer vision. This paper presents a supervised semantic segmentation model named Pyramid Self-Attention Network (PISANet). Its structure is simple, consisting of only two parts: the backbone of the network, which learns the local features of buildings (short-distance context around each pixel) from the image; and the pyramid self-attention module, which captures the global features (long-distance context with other pixels in the image) and the comprehensive features (color, texture, geometric, and high-level semantic features) of buildings. The network is an end-to-end approach. In the training stage, the input is a remote sensing image with its corresponding label, and the output is a probability map (the probability that each pixel is or is not a building). In the prediction stage, the input is a remote sensing image, and the output is the building extraction result. The network structure is kept simple so that it is easy to implement. The proposed PISANet was tested on two datasets. The overall accuracy reached 94.50% and 96.15%, the intersection-over-union reached 77.45% and 87.97%, and the F1 score reached 87.27% and 93.55%, respectively. In experiments on different datasets, PISANet achieved high overall accuracy, a low error rate, and improved integrity of individual buildings.
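The long-distance context aggregation that the abstract attributes to the self-attention module can be illustrated with a minimal sketch. This is not the authors' PISANet implementation: the learned query/key/value projections and the pyramid pooling of keys/values are omitted here (identity projections are assumed purely for illustration), leaving only the core mechanism by which every pixel attends to every other pixel in the image.

```python
import numpy as np

def self_attention(x):
    """Minimal (non-learned) self-attention over flattened pixel features.

    x: (N, C) array of N pixel features with C channels. In the paper the
    query/key/value projections are learned; identity projections are used
    here only to show the global (long-distance) aggregation step.
    """
    q, k, v = x, x, x                            # identity projections (illustrative)
    scores = q @ k.T / np.sqrt(x.shape[1])       # pairwise similarity, shape (N, N)
    scores -= scores.max(axis=1, keepdims=True)  # subtract row max for numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)      # softmax: each row is a distribution over all pixels
    return attn @ v                              # each output pixel mixes context from the whole image

# A 4x4 single-channel "image" flattened to 16 pixels.
feat = np.arange(16, dtype=np.float64).reshape(16, 1)
out = self_attention(feat)
print(out.shape)  # (16, 1)
```

Because each attention row is a convex combination over all pixels, every output feature blends information from the entire image, which is the "global feature" behavior the abstract describes; a pyramid variant would additionally compute the keys and values at several downsampled scales to reduce the N×N cost.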

Keywords:  building extraction; deep learning; high resolution image; semantic segmentation

Year:  2020        PMID: 33348752     DOI: 10.3390/s20247241

Source DB:  PubMed          Journal:  Sensors (Basel)        ISSN: 1424-8220            Impact factor:   3.576


  2 in total

1.  AGs-Unet: Building Extraction Model for High Resolution Remote Sensing Images Based on Attention Gates U Network.

Authors:  Mingyang Yu; Xiaoxian Chen; Wenzhuo Zhang; Yaohui Liu
Journal:  Sensors (Basel)       Date:  2022-04-11       Impact factor: 3.847

2.  Extraction of blue roofs using BRSAM and the newly created spectral index derived from WorldView-2/3 imagery.

Authors:  Huaipeng Liu; Xiaoqing Zuo
Journal:  Heliyon       Date:  2022-08-28
