David Rivas-Villar ¹˒², José Rouco ¹˒², Manuel G. Penedo ¹˒², Rafael Carballeira ³, Jorge Novo ¹˒².
Abstract
Water safety and quality can be compromised by the proliferation of toxin-producing phytoplankton species, requiring continuous monitoring of water sources. This analysis involves the identification and counting of these species, a task that requires broad experience and knowledge. The automation of these tasks is highly desirable, as it would release the experts from tedious work, eliminate subjective factors, and improve repeatability. Thus, in this preliminary work, we propose to advance towards an automatic methodology for phytoplankton analysis in digital images of water samples acquired using regular microscopes. In particular, we propose a novel and fully automatic method to detect and segment the phytoplankton specimens present in these images using classical computer vision algorithms. The proposed method is able to correctly detect sparse colonies as single phytoplankton candidates, thanks to a novel fusion algorithm, and to differentiate phytoplankton specimens from other objects in the microscope samples (such as minerals, bubbles, or detritus) using a machine-learning-based approach that exploits texture and colour features. Our preliminary experiments demonstrate that the proposed method provides satisfactory and accurate results.
Keywords: bag of visual words; colony merging; gabor filters; microscope images; phytoplankton detection
Year: 2020 PMID: 33238566 PMCID: PMC7700267 DOI: 10.3390/s20226704
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
Figure 1. Main steps of the proposed methodology.
Figure 2. Example of foreground-background separation. (a) Original image. (b) Binary map separating the foreground from the background.
Figure 3. Example of colony merging results over particular specimens (not to scale). (a) Volvox aureus detected in separated parts. (b) Same Volvox aureus after the merging algorithm. (c) Eudorina elegans detected in separated parts. (d) Same Eudorina elegans after the merging algorithm. (e) Microcystis flos-aquae detected in separated parts. (f) Same Microcystis flos-aquae after the merging algorithm.
Figure 4. Examples of Delaunay triangulations over images containing different sparse species or colonies. (a) Volvox aureus. (b) Eudorina elegans. (c) Microcystis flos-aquae.
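This extract does not detail how the binary foreground map of Figure 2 is produced; a common classical choice for separating darker specimens from the bright microscope field is Otsu thresholding. The sketch below, in pure NumPy, is an illustrative assumption, not the authors' implementation (function names are hypothetical).

```python
import numpy as np

def otsu_threshold(gray):
    """Return the Otsu threshold for a uint8 grayscale image.

    Otsu's method picks the intensity cut that maximises the
    between-class variance of the histogram's two classes.
    """
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    levels = np.arange(256)
    omega = np.cumsum(prob)            # class-0 probability mass up to t
    mu = np.cumsum(prob * levels)      # class-0 weighted intensity sum
    mu_total = mu[-1]
    # Between-class variance for every candidate threshold t.
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_total * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b[~np.isfinite(sigma_b)] = 0.0
    return int(np.argmax(sigma_b))

def separate_foreground(gray):
    """Binary map: True where the pixel is at or below the Otsu cut
    (specimens are typically darker than the illuminated background)."""
    return gray <= otsu_threshold(gray)
```

On a typical microscope frame this yields a map like Figure 2b, which can then be fed to connected-component analysis to obtain candidate detections.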
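Figure 4 illustrates the idea behind the colony fusion step: a Delaunay triangulation is built over the detected fragments, and fragments joined by short triangulation edges can be grouped into a single candidate. The sketch below follows that idea over fragment centroids; the edge-length threshold and the union-find grouping are illustrative assumptions, not the paper's exact fusion algorithm.

```python
import numpy as np
from scipy.spatial import Delaunay

def merge_colonies(centroids, max_edge):
    """Group detection centroids into colonies.

    Builds a Delaunay triangulation over the centroids and unions any
    two points connected by a triangulation edge no longer than
    `max_edge`. Returns a list of index groups, one per merged colony.
    """
    pts = np.asarray(centroids, dtype=float)
    parent = list(range(len(pts)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    tri = Delaunay(pts)
    for simplex in tri.simplices:          # each simplex is a triangle (i, j, k)
        for a, b in ((0, 1), (1, 2), (0, 2)):
            i, j = simplex[a], simplex[b]
            if np.linalg.norm(pts[i] - pts[j]) <= max_edge:
                union(int(i), int(j))

    groups = {}
    for i in range(len(pts)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())
```

Because the Delaunay graph only connects spatial neighbours, two well-separated specimens are never linked even when each is fragmented into many parts.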
Best Result for Each of the Tested Classifiers with Texture Features.
| Classifier | Precision at 90% Recall | Precision at 95% Recall |
|---|---|---|
| RF | 77.2% | 65.4% |
| SVM | 72.5% | 68.7% |
| kNN | 70.4% | 61.6% |
| GMM | 69.5% | 58.1% |
| BT | 71.7% | 59.9% |
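The keywords name Gabor filters as the texture descriptor behind these results. A minimal sketch of a Gabor filter bank with per-orientation response statistics is shown below; the kernel size, wavelength, and number of orientations are illustrative choices, not the paper's settings.

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(size, wavelength, theta, sigma):
    """Real-valued Gabor kernel: a cosine carrier at orientation
    `theta`, windowed by an isotropic Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + y_t**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * x_t / wavelength)
    return envelope * carrier

def gabor_texture_features(gray, orientations=4, wavelength=8.0, sigma=4.0):
    """Concatenate the mean and standard deviation of the filter
    response at each orientation into one texture feature vector."""
    feats = []
    for k in range(orientations):
        theta = np.pi * k / orientations
        kern = gabor_kernel(21, wavelength, theta, sigma)
        resp = convolve2d(gray.astype(float), kern, mode="same")
        feats.extend([resp.mean(), resp.std()])
    return np.array(feats)
```

A vector like this, computed per candidate region, is the kind of input the RF, SVM, kNN, GMM, and BT classifiers in the table above would consume.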
Best Result for Each of the Tested Classifiers with Colour Features.
| Classifier | Precision at 90% Recall | Precision at 95% Recall |
|---|---|---|
| RF | 75.4% | 60.5% |
| SVM | 61.9% | 46.3% |
| kNN | 62.6% | 46.4% |
| GMM | 56.5% | 47.1% |
| BT | 70.9% | 62.0% |
Best Result for Each of the Tested Classifiers with Both Colour and Texture Features Combined.
| Classifier | Precision at 90% Recall | Precision at 95% Recall |
|---|---|---|
| RF | 69.5% | 65.9% |
| SVM | 24.2% | 23.9% |
| kNN | 69.3% | 49.1% |
| GMM | 31.8% | 28.1% |
| BT | 42.9% | 44.8% |
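The tables report precision at fixed recall operating points (90% and 95%), i.e. the best precision a classifier attains at any score threshold whose recall meets the target. A pure-NumPy sketch of that metric (the function name is illustrative):

```python
import numpy as np

def precision_at_recall(y_true, scores, min_recall):
    """Best precision achievable at any score threshold whose
    recall is at least `min_recall`.

    Thresholds are swept from the highest score downward, so each
    step predicts exactly one more sample as positive.
    """
    y_true = np.asarray(y_true, dtype=bool)
    order = np.argsort(-np.asarray(scores, dtype=float))
    tp = np.cumsum(y_true[order])               # true positives so far
    predicted = np.arange(1, len(order) + 1)    # positives predicted so far
    precision = tp / predicted
    recall = tp / y_true.sum()
    feasible = recall >= min_recall
    return precision[feasible].max()
```

Sweeping `min_recall` over [0, 1] and plotting the paired (recall, precision) values produces precision-recall curves like those in Figures 5 to 7.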
Figure 5. Precision-recall curve for the best performing classifiers using only texture features.
Figure 6. Precision-recall curve for the best performing classifiers using only colour features.
Figure 7. Precision-recall curve for the best performing classifiers using both colour and texture features.
Figure 8. Final results of the work: true positives in green, true negatives in blue, false positives in violet, and false negatives in red.