Zhe Chen, Zhen Zhang, Yang Bu, Fengzhao Dai, Tanghuai Fan, Huibin Wang.
Abstract
Underwater optical environments are strongly affected by multiple light sources, such as artificial light, sky light, and ambient scattered light. The latter two hinder underwater object segmentation, since they obscure objects of interest and distort image information, whereas artificial light can aid segmentation. Artificial light is usually aimed at the object of interest, so the region of a target object can be identified initially if the collimated region of the artificial light is recognized. Based on this idea, we propose an optical feature extraction, calculation, and decision method that identifies the collimated region of artificial light as a candidate object region. A second phase then employs a level set method to segment the objects of interest within the candidate region. This two-phase structure largely removes background noise and highlights the outlines of underwater objects. We evaluate the method on diverse underwater datasets and show that it outperforms previous methods.
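The two-phase pipeline described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names (`candidate_region`, `chan_vese_like`, `segment`) are hypothetical, the artificially lit region is approximated by a simple brightness percentile, and a curvature-free Chan-Vese-style region update stands in for the paper's full level set method.

```python
import numpy as np

def candidate_region(img, pct=90):
    """Phase 1 (sketch): treat the brightest pixels as the artificially
    lit area and return the bounding box of that candidate region."""
    mask = img >= np.percentile(img, pct)
    ys, xs = np.nonzero(mask)
    return ys.min(), ys.max() + 1, xs.min(), xs.max() + 1

def chan_vese_like(img, n_iter=50):
    """Phase 2 (sketch): a curvature-free Chan-Vese-style update.
    Each pixel joins whichever region (inside/outside) has the closer
    mean intensity; this stands in for a full level set evolution."""
    phi = img > img.mean()                       # initial partition
    for _ in range(n_iter):
        c_in = img[phi].mean() if phi.any() else 0.0
        c_out = img[~phi].mean() if (~phi).any() else 0.0
        new_phi = (img - c_in) ** 2 < (img - c_out) ** 2
        if np.array_equal(new_phi, phi):         # converged
            break
        phi = new_phi
    return phi

def segment(img):
    """Two-phase pipeline: locate the candidate region from the
    artificial-light cue, then segment only inside it, which keeps
    background clutter out of the final mask."""
    y0, y1, x0, x1 = candidate_region(img)
    seg = np.zeros(img.shape, dtype=bool)
    seg[y0:y1, x0:x1] = chan_vese_like(img[y0:y1, x0:x1])
    return seg
```

Restricting phase 2 to the candidate box is what gives the method its noise suppression: pixels outside the lit region are never considered, so scattered background light cannot leak into the object mask.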
Keywords: artificial light guidance; level-set-based object segmentation; optical features; underwater object segmentation
Year: 2018 PMID: 29329245 PMCID: PMC5795476 DOI: 10.3390/s18010196
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
Figure 1. Framework of the proposed underwater object segmentation method.
Figure 2. Underwater optical features. (a) Original underwater image; (b) global intensity contrast; (c) channel variation; (d) intensity-position; (e) red channel contrast.
Figure 3. Underwater object candidate regions. (a) Artificial light recognition; (b) candidate object region segmentation; (c) our object segmentation. The original images are shown in the first column of Figure 2.
Figure 4. Phases of underwater object segmentation and comparison of results. (a) Original underwater image; (b) artificial light recognition; (c) candidate object region segmentation; (d) our object segmentation; (e) object segmentation without artificial light guidance.
Figure 5. Result comparison. (a) Underwater image; (b) ground truth; (c) artificial light recognition; (d) our method; (e) HFT; (f) BGGMM; (g) FRGMM; (h) Kernel_GraphCuts; (i) ROISEG.
Average performance comparison of HFT, BGGMM, FRGMM, Kernel_GraphCuts, ROISEG, and our method on diverse underwater image data.
| Method | Pr | TPR | | FS | Sim | FPR | PWC |
|---|---|---|---|---|---|---|---|
| HFT + OTSU | 0.5628 | 0.5426 | 0.7912 | 0.5858 | 0.4352 | 0.0331 | 3.5898 |
| BGGMM | 0.3075 | 0.3076 | 0.7480 | 0.2987 | 0.2071 | 0.2329 | 23.3065 |
| FRGMM | 0.3708 | 0.3387 | 0.8475 | 0.4109 | 0.3150 | 0.1253 | 12.5406 |
| Kernel_GraphCuts | 0.1178 | 0.2345 | 0.7198 | 0.1782 | 0.1156 | 0.2259 | 23.9979 |
| ROISEG | 0.1042 | 0.3509 | 0.2804 | 0.0906 | 0.0656 | 0.1219 | 11.8211 |
| Our method | 0.7164 | 0.6327 | 0.7968 | 0.5162 | 0.4479 | 0.0355 | 7.1233 |
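The table's metrics can be computed from a per-image confusion matrix. The sketch below assumes the standard definitions of Pr (precision), TPR (recall), FS (F-score), Sim (Jaccard similarity), FPR, and PWC (percentage of wrong classifications); the paper's exact formulas may differ, and `segmentation_metrics` is an illustrative name, not code from the paper.

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """Standard confusion-matrix metrics for a binary segmentation
    mask `pred` against a ground-truth mask `gt`."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.sum(pred & gt)       # object pixels correctly labeled
    fp = np.sum(pred & ~gt)      # background labeled as object
    fn = np.sum(~pred & gt)      # object labeled as background
    tn = np.sum(~pred & ~gt)     # background correctly labeled
    pr = tp / (tp + fp)                            # precision
    tpr = tp / (tp + fn)                           # recall / TPR
    fpr = fp / (fp + tn)                           # false positive rate
    fs = 2 * pr * tpr / (pr + tpr)                 # F-score
    sim = tp / (tp + fp + fn)                      # Jaccard similarity
    pwc = 100.0 * (fp + fn) / (tp + fp + tn + fn)  # % wrong classification
    return dict(Pr=pr, TPR=tpr, FS=fs, Sim=sim, FPR=fpr, PWC=pwc)
```

Note that when the background dominates the image, as it does underwater, PWC is driven almost entirely by FPR, which is consistent with the near-100×FPR values of PWC in most rows of the table.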