| Literature DB >> 32737334 |
Katsunori Mizuno1, Kei Terayama2,3,4, Seiichiro Hagino5, Shigeru Tabeta5, Shingo Sakamoto6, Toshihiro Ogawa6, Kenichi Sugimoto6, Hironobu Fukami7.
Abstract
Over the last three decades, a large portion of coral cover has been lost around the globe. This significant decline necessitates rapid assessment of coral reef health to enable more effective management. In this paper, we propose an efficient method for coral cover estimation and demonstrate its viability. A large-scale 3-D structure model, with a resolution of 0.01 m along the x, y, and z axes, was successfully generated by means of a towed optical camera array system (Speedy Sea Scanner). The survey efficiency attained was 12,146 m2/h. In addition, we propose a segmentation method built on the U-Net architecture and estimate coral coverage from a large-scale 2-D image. The U-Net-based segmentation method showed higher accuracy than pixelwise CNN modeling, and its computational cost is much lower than that of the pixelwise CNN-based approach. We believe that an array of these survey tools can contribute to the rapid assessment of coral reefs.
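Once a segmentation mask has been produced, the coral cover estimate amounts to counting coral-labeled pixels. A minimal sketch in pure Python; the `toy_mask` below is a hypothetical stand-in for a U-Net prediction on the orthophoto, not data from the study:

```python
# Estimate percent coral cover from a binary segmentation mask
# (1 = coral, 0 = background). The mask here is a toy example;
# the paper derives such masks from U-Net predictions.

def percent_cover(mask):
    """Fraction of pixels labeled coral, as a percentage."""
    flat = [px for row in mask for px in row]
    return 100.0 * sum(flat) / len(flat)

# Toy 4x4 mask: 6 of 16 pixels are coral -> 37.5 % cover.
toy_mask = [
    [1, 1, 0, 0],
    [1, 1, 0, 0],
    [0, 0, 1, 1],
    [0, 0, 0, 0],
]
print(percent_cover(toy_mask))  # 37.5
```

Applied per grid cell of the orthophoto, the same count yields a cover map like the one shown in Figure 8.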
Year: 2020 PMID: 32737334 PMCID: PMC7395762 DOI: 10.1038/s41598-020-69400-5
Source DB: PubMed Journal: Sci Rep ISSN: 2045-2322 Impact factor: 4.379
Figure 1. Speedy Sea Scanner (SSS). Six cameras reside on the towed body. The attitude is maintained by the tailplane.
Figure 2. 3-D structure model: the top is a whole view and the bottom an enlarged view of the inside of the red rectangle (above).
Figure 3. The 2-D image (orthophoto) at various scales. The 2-D image is overlaid on the hill-shaded topography generated from MBES data. The survey was conducted in the northern coastal area of Kumejima.
Figure 4. Combined DEM. DEM_SSS (inside the black border) is overlaid on DEM_MBES. The right-hand images comprise an enlarged view of the same location (red rectangle).
Figure 5. Left: distribution map of the elevation difference. Right: histogram of the elevation difference at the pixel level.
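The per-pixel elevation difference underlying Figure 5 is a cell-wise subtraction of the two co-registered DEM grids. A sketch in pure Python; the 3 × 3 arrays below are hypothetical stand-ins for DEM_SSS and DEM_MBES, with elevations in meters:

```python
# Per-pixel elevation difference between two co-registered DEMs.
# dem_sss and dem_mbes are hypothetical 3x3 grids, not survey data.

def elevation_difference(dem_a, dem_b):
    """Cell-wise difference dem_a - dem_b for equally sized grids."""
    return [[a - b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(dem_a, dem_b)]

dem_sss  = [[-10.02, -10.05, -10.01],
            [-10.10, -10.12, -10.08],
            [-10.20, -10.18, -10.15]]
dem_mbes = [[-10.00, -10.04, -10.03],
            [-10.11, -10.10, -10.09],
            [-10.19, -10.20, -10.14]]

diff = elevation_difference(dem_sss, dem_mbes)
# The resulting grid feeds a distribution map and a pixel-level
# histogram like the two panels of Figure 5.
```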
Figure 6. Prediction examples by U-Net and pixelwise CNN. The images in the leftmost column are original images; the second column comprises images processed by color labeling; the third and fourth columns are prediction results by U-Net with CC and DA and pixelwise CNN (window = 64 × 64 pixels), respectively; the white areas in the rightmost column show the manually-labeled coral areas.
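The cost gap between the two approaches follows from how many network evaluations each needs per image: a pixelwise CNN classifies one pixel per forward pass of its window, whereas U-Net labels the whole image in a single pass. A rough count for a 512 × 512 image (a stride-1 sliding window for the pixelwise CNN is an assumption, not a setting stated here):

```python
# Forward passes needed to label every pixel of a 512 x 512 image.
# A stride-1 sliding window for the pixelwise CNN is an assumed setting.
height, width = 512, 512

pixelwise_passes = height * width   # one window evaluation per pixel
unet_passes = 1                     # one full-image pass

print(pixelwise_passes)  # 262144
print(unet_passes)       # 1
```

This several-orders-of-magnitude gap in evaluations is consistent with the large difference in prediction time reported in Figure 7b.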
Performance of U-Net and pixelwise CNN based on five-fold cross-validation.
| Model | Accuracy | Recall | Precision | F-measure |
|---|---|---|---|---|
| U-Net w/o CC and w/o DA | 0.901 | 0.710 | 0.785 | 0.740 |
| U-Net w/o CC and w DA | 0.908 | 0.748 | 0.791 | 0.763 |
| U-Net w CC and w/o DA | 0.902 | 0.718 | 0.783 | 0.743 |
| U-Net w CC and w DA | 0.910 | 0.767 | 0.788 | 0.772 |
| Pixelwise CNN (input size: 32 × 32) | 0.872 | 0.586 | 0.742 | 0.644 |
| Pixelwise CNN (input size: 48 × 48) | 0.877 | 0.614 | 0.745 | 0.666 |
| Pixelwise CNN (input size: 64 × 64) | 0.880 | 0.656 | 0.750 | 0.688 |
| Pixelwise CNN (input size: 98 × 98) | 0.886 | 0.752 | 0.711 | 0.724 |
| Pixelwise CNN (input size: 128 × 128) | 0.880 | 0.733 | 0.714 | 0.719 |
| Pixelwise CNN (input size: 160 × 160) | 0.891 | 0.739 | 0.739 | 0.729 |
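The metrics in the table follow the standard confusion-matrix definitions. A sketch of the computation; the pixel counts below are hypothetical, not values from the study:

```python
# Accuracy, recall, precision, and F-measure from confusion-matrix counts.
# tp, fp, fn, tn are hypothetical pixel counts, not values from the study.
tp, fp, fn, tn = 700, 190, 210, 900

accuracy  = (tp + tn) / (tp + fp + fn + tn)   # fraction of pixels labeled correctly
recall    = tp / (tp + fn)                    # coral pixels actually recovered
precision = tp / (tp + fp)                    # predicted coral that is truly coral
f_measure = 2 * precision * recall / (precision + recall)

print(round(accuracy, 3), round(recall, 3),
      round(precision, 3), round(f_measure, 3))  # 0.8 0.769 0.787 0.778
```

Note that recall and precision trade off as the window size of the pixelwise CNN grows, which is why F-measure, their harmonic mean, is the more informative single summary in the table.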
Figure 7. The relationship between prediction performance and prediction time: (a) the dotted lines correspond to the accuracy (blue) and F-measure (orange) of U-Net with CC and DA, and the solid blue and orange lines show the accuracy and F-measure of pixelwise CNN with different window sizes; (b) prediction time per image (512 × 512 pixels) using U-Net and pixelwise CNNs, where the dashed line indicates the prediction time by U-Net, which was 0.057 s.
Figure 8. Distribution map of coral cover prediction. The color gradation shows the percent coral cover.