Huasheng Huang, Jizhong Deng, Yubin Lan, Aqing Yang, Xiaoling Deng, Sheng Wen, Huihui Zhang, Yali Zhang.
Abstract
Chemical control is necessary to control weed infestation and to ensure rice yield. However, excessive use of herbicides has caused serious agronomic and environmental problems. Site-specific weed management (SSWM) recommends an appropriate dose of herbicide according to the weed coverage, which may reduce herbicide use while enhancing its chemical effects. In the context of SSWM, a weed cover map and a prescription map must be generated to carry out accurate spraying. In this paper, high-resolution unmanned aerial vehicle (UAV) imagery was captured over a rice field. Different workflows were evaluated to generate the weed cover map for the whole field. A fully convolutional network (FCN) was applied for pixel-level classification. Theoretical analysis and practical evaluation were carried out to seek architecture improvements and a performance boost. A chessboard segmentation process was used to build the grid framework of the prescription map. The experimental results showed that the overall accuracy and mean intersection over union (mean IU) for weed mapping using FCN-4s were 0.9196 and 0.8473, respectively, and the total time (including data collection and data processing) required to generate the weed cover map for the entire field (50 × 60 m) was less than half an hour. Different weed thresholds (0.00–0.25, with an interval of 0.05) were used for prescription map generation. High accuracies (above 0.94) were observed for all of the threshold values, and the corresponding herbicide saving ranged from 58.3% to 70.8%. These experimental results demonstrate that the method used in this work has the potential to produce accurate weed cover maps and prescription maps in SSWM applications.
Keywords: FCN; UAV; prescription map; semantic labeling; weed mapping
Year: 2018 PMID: 30275366 PMCID: PMC6209949 DOI: 10.3390/s18103299
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
Figure 1. Photograph of the studied rice field.
Specification for the dataset.
| Name | Flight Date | Number of Patches | Description |
|---|---|---|---|
| D02-1 | 2nd October 2017 | 182 | Divided from the ortho-mosaic imagery |
| D10-1 | 10th October 2017 | 182 | Divided from the ortho-mosaic imagery |
| D02-2 | 2nd October 2017 | 648 | Divided from the collected imagery |
| D10-2 | 10th October 2017 | 600 | Divided from the collected imagery |
Figure 2. Three image–ground truth (GT) label pairs in the dataset: (a) images in the dataset; (b) corresponding GT labels.
Figure 3. Two workflows to produce the weed cover map for the whole field: (a) the mosaicking–labeling workflow; (b) the labeling–mosaicking workflow.
Figure 4. Illustration of the architecture of fully convolutional networks (FCN): (a) architecture of the classical FCN-8s; (b) architecture of the modified FCN-4s.
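The difference between FCN-8s and the modified FCN-4s is how many skip connections are fused before the final upsampling. The sketch below is illustrative only and is not the paper's implementation: it assumes per-class score maps are already available at strides 32, 16, 8, and 4, and it uses nearest-neighbour upsampling in place of learned deconvolution layers.

```python
import numpy as np

def upsample(score, factor):
    """Nearest-neighbour upsampling of an (H, W, C) score map.
    (A stand-in for the learned deconvolution used in FCN.)"""
    return np.kron(score, np.ones((factor, factor, 1)))

def fcn4s_fuse(score_s32, score_s16, score_s8, score_s4):
    """Fuse coarse-to-fine per-class score maps, FCN-style.

    The inputs are score maps taken at strides 32, 16, 8, and 4;
    the extra stride-4 skip is what distinguishes the modified
    FCN-4s from the classical FCN-8s.
    """
    fused = upsample(score_s32, 2) + score_s16   # now at stride 16
    fused = upsample(fused, 2) + score_s8        # now at stride 8
    fused = upsample(fused, 2) + score_s4        # now at stride 4
    return upsample(fused, 4)                    # back to full resolution

# Toy example: 3 classes (others / rice / weeds) on a 64 x 64 input.
H = W = 64
rng = np.random.default_rng(0)
scores = [rng.normal(size=(H // s, W // s, 3)) for s in (32, 16, 8, 4)]
out = fcn4s_fuse(*scores)
label_map = out.argmax(axis=-1)   # pixel-level class prediction
print(label_map.shape)            # (64, 64)
```

Dropping the `score_s4` term and returning `upsample(fused, 8)` after the stride-8 fusion would give the FCN-8s variant of the same scheme.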
Experimental results of the different workflows. The speed was measured as the total time required to generate the weed cover map for the whole field, including data collection and data processing. Mean IU—mean intersection over union.
| Workflow | Overall Accuracy | Mean IU | Speed |
|---|---|---|---|
| Mosaicking–labeling | 0.9096 | 0.8303 | 24.8 min |
| Labeling–mosaicking | 0.9074 | 0.8264 | 32.5 min |
Experimental results of the different semantic labeling approaches. Speed-1 was measured as the inference time for a single image (1000 × 1000 pixels), and speed-2 was measured as the total time required to generate the weed cover map for the whole field, including data collection and data processing. FCN—fully convolutional networks.
| Method | Overall Accuracy | Mean IU | Speed-1 | Speed-2 |
|---|---|---|---|---|
| FCN-8s | 0.9096 | 0.8303 | 0.413 s | 24.8 min |
| Deeplab | 0.9191 | 0.8460 | 5.279 s | 39.6 min |
| FCN-4s | 0.9196 | 0.8473 | 0.356 s | 24.7 min |
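The two metrics reported above can be computed directly from a pixel-level confusion matrix. A minimal sketch, using a hypothetical 3-class (others / rice / weeds) count matrix rather than the paper's data:

```python
import numpy as np

def overall_accuracy(cm):
    """cm[i, j] = number of pixels of GT class i predicted as class j."""
    return np.trace(cm) / cm.sum()

def mean_iu(cm):
    """Mean intersection over union across classes:
    IU_c = TP_c / (GT_c + Pred_c - TP_c)."""
    tp = np.diag(cm).astype(float)
    gt = cm.sum(axis=1)      # pixels per GT class
    pred = cm.sum(axis=0)    # pixels per predicted class
    return np.mean(tp / (gt + pred - tp))

# Hypothetical pixel counts (rows: GT; columns: predicted).
cm = np.array([[900,  40,  20],
               [ 30, 850,  60],
               [ 70,  25, 905]])
print(overall_accuracy(cm))  # ≈ 0.9155
print(mean_iu(cm))           # ≈ 0.8443
```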
Confusion matrices of the different semantic labeling approaches. GT—ground truth.
| Method | GT \ Predicted | Others | Rice | Weeds |
|---|---|---|---|---|
| FCN-8s | others | | 0.042 | 0.018 |
| | rice | 0.037 | | 0.069 |
| | weeds | 0.078 | 0.027 | |
| Deeplab | others | | 0.044 | 0.034 |
| | rice | 0.023 | | 0.052 |
| | weeds | 0.056 | 0.036 | |
| FCN-4s | others | | 0.030 | 0.031 |
| | rice | 0.037 | | 0.049 |
| | weeds | 0.055 | 0.039 | |
Figure 5. Weed cover maps output by different approaches. (a) Ortho-mosaicked imagery. (b) Corresponding GT labels. The areas outside the studied plot were masked out (in black) and ignored in the training and evaluation. (c) Output by FCN-8s. (d) Output by Deeplab. (e) Output by FCN-4s.
Figure 6. The accuracy curve with different weed thresholds.
Herbicide saving with different weed thresholds.
| Threshold | Treatment Area | Herbicide Saving |
|---|---|---|
| 0.00 | 41.7% | 58.3% |
| 0.05 | 35.9% | 64.1% |
| 0.10 | 33.6% | 66.4% |
| 0.15 | 31.9% | 68.1% |
| 0.20 | 30.4% | 69.6% |
| 0.25 | 29.2% | 70.8% |
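The prescription-map step (a chessboard grid over the weed cover map, sprayed only where weed coverage exceeds the threshold) can be sketched as follows. The grid size, toy mask, and threshold below are hypothetical illustrations, not values from the paper.

```python
import numpy as np

def prescription_map(weed_mask, grid=10, threshold=0.10):
    """Build a grid (chessboard) prescription map from a binary weed mask.

    weed_mask : 2-D array, 1 where a pixel is labeled weed, else 0.
    grid      : side length of each grid cell in pixels (hypothetical).
    threshold : spray a cell only if its weed coverage exceeds this ratio.
    Returns the boolean spray map and the herbicide-saving ratio.
    """
    h, w = weed_mask.shape
    # Crop to a whole number of cells, then average per cell.
    cells = weed_mask[:h // grid * grid, :w // grid * grid]
    cells = cells.reshape(h // grid, grid, w // grid, grid).mean(axis=(1, 3))
    spray = cells > threshold
    saving = 1.0 - spray.mean()   # fraction of cells left unsprayed
    return spray, saving

# Toy field: weeds concentrated in one corner of a 100 x 100 mask.
mask = np.zeros((100, 100))
mask[:30, :30] = 1
spray, saving = prescription_map(mask, grid=10, threshold=0.10)
print(spray.sum(), round(saving, 3))   # 9 cells sprayed, saving 0.91
```

Raising the threshold leaves more lightly infested cells unsprayed, which is why the herbicide saving in the table above grows from 58.3% at a threshold of 0.00 to 70.8% at 0.25.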
Figure 7. Prescription maps generated with different weed thresholds. (a–c) Prescription maps generated from the GT label using weed thresholds of 0.0, 0.1, and 0.2. (d–f) Prescription maps generated from the output weed cover map using thresholds of 0.0, 0.1, and 0.2. Reference coordinate system: WGS84 datum.