Herbert Kruitbosch, Yasmin Mzayek, Sara Omlor, Paolo Guerra, Andreas Milias-Argeitis.
Abstract
MOTIVATION: Single-cell time-lapse microscopy is a ubiquitous tool for studying the dynamics of complex cellular processes. While imaging can be automated to generate very large volumes of data, the processing of the resulting movies to extract high-quality single-cell information remains a challenging task. The development of software tools that automatically identify and track cells is essential for realizing the full potential of time-lapse microscopy data. Convolutional neural networks (CNNs) are ideally suited for such applications, but require great amounts of manually annotated data for training, a time-consuming and tedious process.
Year: 2021 PMID: 34893817 PMCID: PMC8825468 DOI: 10.1093/bioinformatics/btab835
Source DB: PubMed Journal: Bioinformatics ISSN: 1367-4803 Impact factor: 6.937
Fig. 1. Actual versus synthesized microscopy images. (A; left) Brightfield image of budding yeast cells grown inside a microfluidic device with pillar-like rectangular structures (Lee ). Cells get trapped and grow underneath these structures, while growth medium flows continuously through the device. Small cells, as well as cells that get dislodged from underneath the pillars, are washed away. (Right) Brightfield image of yeast cells growing inside a microfluidic device without microstructures. This image was taken from test set 3 (TS3) of the YIT. In both panels, image contrast was adjusted to improve appearance in this figure. (B) Two synthetically generated images from the dataset used to train our Mask R-CNN. Each image contained 100 cell-like objects placed at random positions, with overlap allowed. The image background also contained the pillar-like structures of our microfluidic device. Cell-like objects were placed over the whole image area, to help the Mask R-CNN identify cells over the whole image and not only at specific locations. On each synthetic image, the pixels belonging to each cell-like object were annotated and used to train the Mask R-CNN for instance segmentation.
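The synthetic-data idea described in the caption can be sketched as follows. This is a minimal illustration only, assuming NumPy and simple elliptical "cell-like" objects; the function name `synthesize_image` and the size/intensity parameters are hypothetical choices, not the authors' actual generation pipeline (which also renders the pillar-like background structures).

```python
import numpy as np

def synthesize_image(n_objects=100, size=256, rng=None):
    """Sketch of one synthetic training sample: place n_objects elliptical
    cell-like objects at random positions (overlap allowed) and record a
    binary mask per object, as needed for instance segmentation training."""
    rng = np.random.default_rng(rng)
    yy, xx = np.mgrid[0:size, 0:size]
    image = np.zeros((size, size), dtype=float)
    masks = []
    for _ in range(n_objects):
        cy, cx = rng.uniform(0, size, 2)      # center anywhere on the image
        ry, rx = rng.uniform(4, 12, 2)        # semi-axes in pixels (assumed range)
        mask = ((yy - cy) / ry) ** 2 + ((xx - cx) / rx) ** 2 <= 1.0
        image[mask] = rng.uniform(0.5, 1.0)   # flat intensity per object
        masks.append(mask)
    return image, masks
```

Because every image comes with exact per-object masks by construction, no manual annotation is needed, which is the central point of the paper.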
Segmentation performance metrics for Mask R-CNN, YeaZ and YeastNet2 evaluated on the brightfield test sets of the YIT
Note: Highlighted cells denote the tool with the highest performance for each test set.
Tracking performance metrics for Mask R-CNN, YeaZ and YeastNet2 evaluated on the brightfield test sets of the YIT
Note: Highlighted cells denote the tool with the highest performance for each test set.
Fig. 2. Segmentation of yeast cells in different imaging setups. (A) Large, dense colony growing under a nutrient-infused agarose pad in our imaging setup. Cells in the center of the colony have been pushed vertically and are largely out of focus. Despite the large amount of crowding, our Mask R-CNN was able to accurately detect the majority of cells, even though it was not trained on such dense images. Objects detected by the neural network are marked with magenta outlines; cells that were not detected carry no outline. (B) Cells growing inside the microfluidic device used in Uhlendorf . (C) Cells growing inside the microfluidic device used in our group. In such sparse cell configurations, our Mask R-CNN is able to detect a wide range of cell sizes, from large, aged mother cells to young growing buds. (D) Inset showing close-up views of cell boundaries detected by the Mask R-CNN. In all panels, contrast was adjusted to improve appearance in this figure; no contrast adjustments were made to the images that were provided to the Mask R-CNN.
Average IoU (%) of true positive instances in the annotated brightfield images of wild-type cells from the YeaZ dataset
| Test set | Mask R-CNN | YeaZ |
|---|---|---|
| wtF2BF | 81.6 | 85.8 |
| wtF3BF | 82.0 | 85.3 |
| wtF4BF | 84.6 | 87.2 |
| wtF5BF | 84.5 | 88.0 |
| wtF6BF | 81.2 | 84.2 |
| wtF7BF | 81.7 | 81.8 |
| wtF8BF | 78.7 | 82.9 |
| wtF9BF | 75.7 | 80.2 |
| wtF10BF | 79.8 | 85.3 |
| wtF11BF | 84.5 | 85.8 |
| wtF12BF | 78.9 | 84.2 |
| wtF13BF | 84.0 | 86.0 |
| wtF14BF | 83.4 | 86.4 |
| wtF15BF | 83.8 | 86.0 |
Note: The YeaZ dataset contains images obtained at six different exposure levels. For all the tests performed here, the lowest exposure level was used. The two models were run using the same threshold values as in the YIT tests.
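For reference, the metric reported in the table above — average IoU over true-positive instances — can be illustrated with a short sketch. The matching convention and the threshold of 0.5 below are common defaults, assumed here for illustration rather than taken from the paper; the function names `iou` and `mean_tp_iou` are hypothetical.

```python
import numpy as np

def iou(pred, gt):
    """Intersection over union of two boolean masks."""
    pred, gt = np.asarray(pred, bool), np.asarray(gt, bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 0.0
    return float(np.logical_and(pred, gt).sum() / union)

def mean_tp_iou(pred_masks, gt_masks, thresh=0.5):
    """Average IoU over true-positive instances: each ground-truth mask is
    matched to its best-overlapping prediction, and the match counts as a
    true positive when its IoU exceeds `thresh` (an assumed convention)."""
    scores = []
    for gt in gt_masks:
        best = max((iou(p, gt) for p in pred_masks), default=0.0)
        if best > thresh:
            scores.append(best)
    return float(np.mean(scores)) if scores else 0.0
```

Averaging only over true positives separates segmentation quality (boundary accuracy of matched cells) from detection quality (how many cells were found at all), which the detection/tracking tables report separately.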