| Literature DB >> 36105636 |
Xu Song1,2,3, Hongyu Zhou1, Guoying Liu4, Brian Sheng-Xian Teo2.
Abstract
Cropland extraction from remote sensing images is an essential part of precise digital agriculture services. This paper proposed an SSGNet network of multiscale fused extraction of cropland based on the attention mechanism to address issues with complex cropland feature types in remote sensing images that resulted in blurred boundaries and low accuracy in plot partitioning. The proposed network contains different modules, such as spatial gradient guidance and dilated semantic fusion. It employs the image gradient attention guidance module to fully extract cropland plot features. This causes the feature to be transferred from the encoding layer to the decoding layer, creating layers full of key features within the cropland and making the extracted cropland information more accurate. In addition, this study also solves the problem caused by a large amount of spatial feature information, which losses easily during the downsampling process of continuous convolution in the coding layer. Aiming to solve this issue, we put forward a model for consensus fusion of multiscale spatial features to fuse each-layer feature of the coding layer through dilated convolution with different dilated ratios. This approach was proposed to make the segmentation results more comprehensive and complete. The lab findings showed that the Precision, Recall, MIoU, and F1 score of the multiscale fusion segmentation SSGNet network based on the attention mechanism had achieved 93.46%, 90.91%, 85.54%, and 92.73%, respectively. Its segmentation effect on cropland was better than other semantic segmentation networks and can effectively promote cropland semantic extraction.Entities:
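The multiscale dilated fusion described in the abstract can be sketched in NumPy. This is an illustrative sketch only, not the authors' implementation: the 3×3 kernel, edge padding, the hypothetical dilation rates (1, 2, 3), and averaging as the fusion operator are all assumptions made for the example.

```python
import numpy as np

def dilated_conv2d(x, kernel, dilation=1):
    """Valid-mode 2-D convolution of a single-channel map with a dilated kernel.
    The effective kernel span is (k - 1) * dilation + 1 in each direction."""
    kh, kw = kernel.shape
    eh = (kh - 1) * dilation + 1  # effective kernel height
    ew = (kw - 1) * dilation + 1  # effective kernel width
    H, W = x.shape
    out = np.zeros((H - eh + 1, W - ew + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Sample the input with stride = dilation inside the receptive field
            patch = x[i:i + eh:dilation, j:j + ew:dilation]
            out[i, j] = np.sum(patch * kernel)
    return out

def multiscale_fuse(x, kernel, rates=(1, 2, 3)):
    """Run the same kernel at several dilation rates ('same' padding so the
    outputs align) and fuse the resulting feature maps by averaging.
    The rates and the averaging fusion are illustrative assumptions."""
    maps = []
    for r in rates:
        pad = (kernel.shape[0] - 1) * r // 2  # 'same' padding for odd kernels
        xp = np.pad(x, pad, mode="edge")
        maps.append(dilated_conv2d(xp, kernel, r))
    return np.mean(maps, axis=0)
```

Because each dilation rate sees a different receptive field over the same feature map, averaging the aligned outputs combines fine and coarse spatial context, which is the intuition behind fusing coding-layer features at multiple dilation rates.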
Year: 2022 PMID: 36105636 PMCID: PMC9467744 DOI: 10.1155/2022/2418850
Source DB: PubMed Journal: Comput Intell Neurosci
Figure 1: UNet network structure.
Figure 2: SSGNet network structure.
Figure 3: Structure chart of the attention mechanism.
Figure 4: Model training process.
Figure 5: Extraction results of each network model.
Comparison of evaluation indices for the network structures.
| Experimental methods | Recall (%) | Precision (%) | F1 score (%) | MIoU (%) |
|---|---|---|---|---|
| UNet | 89.85 | 88.73 | 89.26 | 80.94 |
| ENet | 90.19 | 91.58 | 90.85 | 83.51 |
| EDFANet | 87.63 | 89.90 | 88.67 | 80.07 |
| HRNet | 88.63 | 90.39 | 89.45 | 81.28 |
| MMUUNet | 91.50 | 90.01 | 90.71 | 83.26 |
| SSGNet | **90.91** | **93.46** | **92.73** | **85.54** |
The bold values indicate that all four evaluation indices of the network model proposed in this paper are higher than those of the other models, showing that the network has a good segmentation effect.
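The four indices in the table can be computed from pixel-level confusion counts. The sketch below assumes binary masks (cropland vs. background) and reports the foreground-class IoU; MIoU in the paper is the mean of the per-class IoUs, so this is a simplified single-class illustration, not the authors' evaluation code.

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """Precision, Recall, F1, and foreground IoU from two binary masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    tp = np.sum(pred & gt)    # cropland pixels correctly predicted
    fp = np.sum(pred & ~gt)   # background pixels predicted as cropland
    fn = np.sum(~pred & gt)   # cropland pixels missed
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    iou = tp / (tp + fp + fn)
    return precision, recall, f1, iou
```

For example, a prediction that shares one true-positive pixel with the ground truth while adding one false positive and missing one pixel yields Precision = Recall = F1 = 0.5 and IoU = 1/3.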