Yousef Al-Kofahi, Alla Zaltsman, Robert Graves, Will Marshall, Mirabela Rusu.
Abstract
BACKGROUND: Automatic and reliable characterization of cells in cell cultures is key to several applications, such as cancer research and drug discovery. Given the recent advances in light microscopy and the need for accurate, high-throughput analysis of cells, automated algorithms have been developed for segmenting and analyzing the cells in microscopy images. Nevertheless, accurate, generic and robust whole-cell segmentation remains a persistent need for precisely quantifying cells' morphological properties, phenotypes and sub-cellular dynamics.
Keywords: 2-D cells segmentation; Deep learning; Microscopy images; Watershed segmentation
Year: 2018 PMID: 30285608 PMCID: PMC6171227 DOI: 10.1186/s12859-018-2375-z
Source DB: PubMed Journal: BMC Bioinformatics ISSN: 1471-2105 Impact factor: 3.169
Fig. 1 Various channel markers allow the visualization of cells: a, b dsRed; c TexasRed; d Cy5. Cell appearance varies widely with the marker used and the magnification
Fig. 2 Overview of the 2-D cell segmentation algorithm. Labeled images are used as training set for deep learning. The unseen images are passed through the inference engine to create the probability maps for the nuclear seeds and cytoplasm. Multiple steps are required for the nuclear seed prediction and the cell segmentation
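The seed-then-flood step described above can be sketched as a seeded watershed over the two predicted probability maps. This is a minimal illustration, not the authors' code: the threshold values `seed_thresh` and `cell_thresh` are illustrative, and SciPy's IFT watershed stands in for whatever exact post-processing the paper uses.

```python
import numpy as np
from scipy import ndimage as ndi

def seeded_watershed(cyto_prob, seed_prob, seed_thresh=0.5, cell_thresh=0.2):
    """Turn cytoplasm and nuclear-seed probability maps (floats in [0, 1])
    into a labeled cell image. Thresholds are illustrative placeholders."""
    # Label each connected blob in the thresholded seed map as one marker.
    seeds, n_seeds = ndi.label(seed_prob > seed_thresh)
    markers = seeds.astype(np.int32)
    # Negative markers denote background, so the flood cannot leak out.
    markers[cyto_prob < cell_thresh] = -1
    # Flood an inverted-probability cost image: cheap inside cells,
    # expensive at boundaries and background.
    cost = np.round((1.0 - cyto_prob) * 255).astype(np.uint8)
    labels = ndi.watershed_ift(cost, markers)
    labels[labels < 0] = 0  # map background back to 0
    return labels, n_seeds

# Toy input: two bright cell blobs, each with its own seed.
cyto = np.zeros((20, 20)); cyto[2:9, 2:9] = 0.9; cyto[11:18, 11:18] = 0.9
seed = np.zeros((20, 20)); seed[4:7, 4:7] = 0.9; seed[13:16, 13:16] = 0.9
labels, n_seeds = seeded_watershed(cyto, seed)
# each blob is flooded from its own seed and receives a distinct label
```

The background marker (−1) is what keeps each flooded region confined to the thresholded cell foreground.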
The U-Net architecture used
| L# | Type | Size | Output | L# | Type | Size | Output |
|---|---|---|---|---|---|---|---|
| 1 | Input | | 1,160,160 | 17 | Concatenate | | 256,20,20 |
| 2 | Convolution | 32 filters | 32,160,160 | 18 | Dropout | 50% | 256,20,20 |
| 3 | Max pool | 2 stride 2x2 | 32,80,80 | 19 | Convolution | 128 filters | 128,20,20 |
| 4 | Convolution | 64 filters | 64,80,80 | 20 | Deconvolution | 2 stride, 128x2x2 | 128,40,40 |
| 5 | Max pool | 2 stride 2x2 | 64,40,40 | 21 | Convolution | 128 filters | 128,40,40 |
| 6 | Convolution | 128 filters | 128,40,40 | 22 | Concatenate | | 192,40,40 |
| 7 | Max pool | 2 stride 2x2 | 128,20,20 | 23 | Dropout | 50% | 192,40,40 |
| 8 | Convolution | 128 filters | 128,20,20 | 24 | Convolution | 128 filters | 128,40,40 |
| 9 | Max pool | 2 stride 2x2 | 128,10,10 | 25 | Deconvolution | 2 stride, 128x2x2 | 128,80,80 |
| 10 | Convolution | 256 filters | 256,10,10 | 26 | Convolution | 128 filters | 128,80,80 |
| 11 | Max pool | 2 stride 2x2 | 256,5,5 | 27 | Concatenate | | 160,80,80 |
| 12 | Dropout | 50% | 256,5,5 | 28 | Concatenate | | 160,80,80 |
| 13 | Deconvolution | 2 stride, 256x2x2 | 256,10,10 | 29 | Convolution | 64 filters | 64,80,80 |
| 14 | Convolution | 128 filters | 128,10,10 | 30 | Deconvolution | 2 stride, 128x2x2 | 64,160,160 |
| 15 | Deconvolution | 2 stride, 128x2x2 | 128,20,20 | 31 | Convolution | 64 filters | 64,160,160 |
| 16 | Convolution | 128 filters | 128,20,20 | 32 | Output | | 3,160,160 |
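The size bookkeeping in the table can be traced in a few lines of Python. This is a hypothetical shape trace, not the authors' code, under the assumptions that convolutions are 'same'-padded (spatial size preserved), max-pools are 2x2 with stride 2, deconvolutions upsample by 2, and the skip connections concatenate the pooled encoder outputs (which reproduces the 256-, 192- and 160-channel concatenations listed).

```python
def conv(shape, filters):
    """'Same'-padded convolution: channels change, spatial size kept."""
    _, h, w = shape
    return (filters, h, w)

def pool(shape):
    """2x2 max-pool with stride 2: spatial size halved."""
    c, h, w = shape
    return (c, h // 2, w // 2)

def deconv(shape, filters):
    """Stride-2 deconvolution (transposed conv): spatial size doubled."""
    _, h, w = shape
    return (filters, 2 * h, 2 * w)

def concat(a, b):
    """Channel-wise concatenation of a skip connection."""
    assert a[1:] == b[1:], "spatial sizes must match"
    return (a[0] + b[0], a[1], a[2])

# Encoder (layers 1-11); dropout layers preserve shape and are omitted.
x = (1, 160, 160)                       # layer 1: input
e1 = conv(x, 32);   p1 = pool(e1)       # layers 2-3   -> (32, 80, 80)
e2 = conv(p1, 64);  p2 = pool(e2)       # layers 4-5   -> (64, 40, 40)
e3 = conv(p2, 128); p3 = pool(e3)       # layers 6-7   -> (128, 20, 20)
e4 = conv(p3, 128); p4 = pool(e4)       # layers 8-9   -> (128, 10, 10)
e5 = conv(p4, 256); p5 = pool(e5)       # layers 10-11 -> (256, 5, 5)

# Decoder (layers 13-32), concatenating the pooled encoder outputs.
d = conv(deconv(p5, 256), 128)          # layers 13-14 -> (128, 10, 10)
d = conv(deconv(d, 128), 128)           # layers 15-16 -> (128, 20, 20)
c17 = concat(d, p3)                     # layer 17     -> (256, 20, 20)
d = conv(c17, 128)                      # layer 19     -> (128, 20, 20)
d = conv(deconv(d, 128), 128)           # layers 20-21 -> (128, 40, 40)
c22 = concat(d, p2)                     # layer 22     -> (192, 40, 40)
d = conv(c22, 128)                      # layer 24     -> (128, 40, 40)
d = conv(deconv(d, 128), 128)           # layers 25-26 -> (128, 80, 80)
c27 = concat(d, p1)                     # layer 27     -> (160, 80, 80)
d = conv(c27, 64)                       # layer 29     -> (64, 80, 80)
d = conv(deconv(d, 64), 64)             # layers 30-31 -> (64, 160, 160)
out = conv(d, 3)                        # layer 32     -> (3, 160, 160)
```

Every intermediate shape agrees with the Output column, ending at the three-class 3,160,160 output map.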
Fig. 3 Prediction and segmentation step-by-step outcome. a Input image. b Nuclei (yellow-red) and cells (blue-cyan) prediction map. c Segmented nuclei (seeds). d Segmented cells
Summary of datasets used for the training and testing of the deep learning framework
| Data set | Experiment 1 | Experiment 2 | Experiment 3 | Image no. | Marker channel |
|---|---|---|---|---|---|
| 1 | Training | Training, Testing | Training | 22 | Green-dsRed, Red-Cy5 |
| 2 | Training | Training, Testing | Training | 12 | Green-dsRed |
| 3 | Training | Training, Testing | Training | 24 | Red-Cy5 |
| 4 | Training | Training, Testing | Training | 30 | TexasRed-TexasRed |
| 5 | 10 Training, 10 Testing | Training, Testing | Training | 20 | Green-dsRed, Red-Cy5 |
| 6 | | | Testing | 15 | TexasRed-TexasRed |
Each image has a resolution of 2048 x 2048 pixels
Fig. 4 Examples of segmentation results from Experiment 1. a-c Different stains and cell cultures. Right column: segmentation results using our deep learning-based approach. Left column: semi-automated ground truth segmentation. Bottom row shows close-ups of the area in the white box. Different cell contours are shown in different colors
Fig. 5 a Experiment 1: histogram of the cell-level quality scores for a total of 1666 segmented cells; the overall (average) quality score is ∼0.87. b Experiment 2: receiver operating characteristic (ROC) curve for a 10-fold cross-validation of the proposed approach
Experiment 1: Image level segmentation comparisons
| Image ID | Deep learning to two-channel similarity | Deep learning to ground truth similarity | Two-channel to ground truth similarity |
|---|---|---|---|
| 1 | 0.88 | 0.90 | 0.94 |
| 2 | 0.86 | 0.85 | 0.94 |
| 3 | 0.89 | 0.91 | 0.94 |
| 4 | 0.92 | 0.91 | 0.91 |
| 5 | 0.88 | 0.90 | 0.93 |
| 6 | 0.83 | 0.84 | 0.94 |
| 7 | 0.76 | 0.80 | 0.87 |
| 8 | 0.72 | 0.72 | 0.96 |
| 9 | 0.83 | 0.86 | 0.94 |
| 10 | 0.89 | 0.90 | 0.95 |
| Avg. | 0.85 | 0.86 | 0.93 |
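The pairwise similarities in the table are overlap scores between two segmentations of the same image. As a hypothetical stand-in for the paper's similarity metric (whose exact definition may differ), a Dice-style overlap can be computed as follows:

```python
import numpy as np

def similarity(a, b):
    """Dice-style overlap of two binary masks: 2|A ∩ B| / (|A| + |B|).
    A stand-in for the paper's similarity score, not its exact metric."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Two 10x10 masks of 60 pixels each that overlap on 40 pixels:
m1 = np.zeros((10, 10), bool); m1[0:6] = True   # rows 0-5
m2 = np.zeros((10, 10), bool); m2[2:8] = True   # rows 2-7
# similarity(m1, m2) == 2 * 40 / (60 + 60) == 2/3
```

Identical masks score 1.0 and disjoint masks score 0.0, matching the 0-to-1 range of the scores above.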
Experiment 2: Summary of the segmentation similarity metric SM (accuracy) for the 10-fold cross-validation
| Data set | Number of detected cells | Segmentation SM (accuracy) |
|---|---|---|
| 1,5 | 6378 | 0.86 ±0.14 |
| 2 | 2162 | 0.62 ±0.09 |
| 3 | 2735 | 0.91 ±0.12 |
| 4 | 7961 | 0.85 ±0.17 |
| Total | 19236 | Avg. 0.84 ±0.14 |
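As a quick sanity check on the totals row, assuming the overall figure is the cell-count-weighted mean of the per-dataset scores:

```python
# Per-dataset cell counts and SM scores from the table above.
counts = [6378, 2162, 2735, 7961]
scores = [0.86, 0.62, 0.91, 0.85]

total = sum(counts)
# Cell-count-weighted mean of the per-dataset scores.
pooled = sum(n * s for n, s in zip(counts, scores)) / total
# total == 19236; pooled ≈ 0.836, consistent with the reported 0.84
```

The lower score of dataset 2 is pulled up by the larger, better-segmented datasets 1, 3 and 4.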
Experiment 3: Image level segmentation comparisons
| Image ID | Deep learning to two-channel similarity | Image ID | Deep learning to two-channel similarity |
|---|---|---|---|
| 1 | 0.88 | 9 | 0.84 |
| 2 | 0.83 | 10 | 0.80 |
| 3 | 0.82 | 11 | 0.86 |
| 4 | 0.81 | 12 | 0.85 |
| 5 | 0.84 | 13 | 0.85 |
| 6 | 0.76 | 14 | 0.88 |
| 7 | 0.85 | 15 | 0.79 |
| 8 | 0.87 | Avg. | 0.84 |
Fig. 6 Segmentation examples from Experiment 3. Right column: segmentation results using our deep learning-based approach. Left column: semi-automated ground truth segmentation. Bottom row shows close-ups of the area in the white box. Different cell contours are shown in different colors