| Literature DB >> 28798336 |
Sajith Kecheril Sadanandan, Petter Ranefall, Sylvie Le Guyader, Carolina Wählby.
Abstract
Deep Convolutional Neural Networks (DCNN) have recently emerged as superior for many image segmentation tasks. The DCNN performance is however heavily dependent on the availability of large amounts of problem-specific training samples. Here we show that DCNNs trained on ground truth created automatically using fluorescently labeled cells, perform similar to manual annotations.Entities:
Mesh:
Year: 2017 PMID: 28798336 PMCID: PMC5552800 DOI: 10.1038/s41598-017-07599-6
Source DB: PubMed Journal: Sci Rep ISSN: 2045-2322 Impact factor: 4.379
Figure 1Data, training and testing pipelines for cell segmentation. (a) A z-stack of bright-field time-lapse microscopy images captured without any staining (time = t 1 to t ). (b) A z-stack of bright-field images along with fluorescent images of nuclear and cytoplasmic stains captured after staining (time = t ). The green box represents the CellProfiler pipeline automatically generating illumination corrected images (data) and the segmented images (labels) for DCNN training. The data and the label are subjected to data augmentation to create the final dataset for training the DCNN. See Supplementary Fig. 2 for the detailed DCNN architecture. (c) The complete cell segmentation pipeline in CellProfiler (green box) that receives z-stack bright-field input images and outputs segmented cell images.
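The augmentation step in the training pipeline must transform the data image and its label mask identically, or the segmentation labels drift out of register with the pixels. A minimal sketch of such paired augmentation using flips and 90° rotations (the function name and the specific transforms are illustrative, not taken from the paper's pipeline):

```python
import numpy as np

def augment_pair(image, label, rng):
    """Apply the same random flip/rotation to an image and its label mask,
    keeping the segmentation labels aligned with the pixels."""
    k = int(rng.integers(0, 4))      # number of 90-degree rotations
    image, label = np.rot90(image, k), np.rot90(label, k)
    if rng.random() < 0.5:           # random left-right flip
        image, label = np.fliplr(image), np.fliplr(label)
    if rng.random() < 0.5:           # random up-down flip
        image, label = np.flipud(image), np.flipud(label)
    return image.copy(), label.copy()

rng = np.random.default_rng(0)
img = np.arange(16, dtype=float).reshape(4, 4)
msk = (img > 7).astype(np.uint8)                 # toy "segmentation" label
aug_img, aug_msk = augment_pair(img, msk, rng)
# the mask still marks exactly the pixels whose value exceeds 7
assert np.array_equal(aug_msk, (aug_img > 7).astype(np.uint8))
```

Because both arrays pass through the same transform with the same random draws, any property defined pixel-wise between data and label is preserved.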
Figure 2. Segmentation evaluation pipelines. (a) In experiment 1, the previously unseen bright-field channel of the test image was fed to the CellProfiler segmentation pipeline containing the trained DCNN. The same bright-field channel was manually annotated to create manual ground truth (‘mangt’) for comparison. The parallel fluorescent channels corresponding to the bright-field channel were fed to the CellProfiler training pipeline to create automatic ground truth for evaluation (‘cpgt’). Note that the ground truth generated for evaluation was not used for training. The three plots show the percentage of cells segmented with an F-score greater than or equal to the value on the horizontal axis. ‘w1w1_cpgt’ corresponds to the result obtained when the DCNN was trained on eight sites of well 1 and tested on the ninth site of well 1 with CellProfiler-created ground truth (blue line plot), while ‘w1w1_mangt’ corresponds to the comparison with manual ground truth (magenta line plot). The plot shows that 88% of the cells were segmented with an F-score ≥ 0.6 when compared with the automatic ground truth, and 85% with an F-score ≥ 0.6 when compared with the manual ground truth. The second plot shows the result when the DCNN was trained on well 1 and tested on well 2: the magenta plot shows the comparison with the single manually annotated image site, while the blue box plot shows the comparison with all nine image sites in well 2. Similarly, the third plot corresponds to the result when the DCNN was trained on well 1 and tested on well 3. (b) In experiment 2, a four-channel input image and the nuclei channel were used to create the ground truth for training the DCNN. The trained DCNN was used to segment the four-channel test image. The segmentation results from the CellProfiler ground-truth pipeline and from the DCNN segmentation pipeline are overlaid on the nuclei image.
The quantitative evaluation shows the percentage of cells with an F-score greater than or equal to each threshold, plotted against the corresponding F-score values. The box plot summarizes the comparison across 58 images; the median shows that around 81% of cells were segmented with an F-score ≥ 0.6.
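The F-score curves above can be reproduced from two ingredients: a per-cell F-score (Dice/F1 overlap between a predicted cell mask and its matched ground-truth mask) and the fraction of cells scoring at or above each threshold. A minimal sketch, assuming binary masks for already-matched cell pairs (the function names are illustrative, not from the paper's code):

```python
import numpy as np

def f_score(pred_mask, gt_mask):
    """F1/Dice score between two binary masks: 2*TP / (|pred| + |gt|)."""
    pred = np.asarray(pred_mask, dtype=bool)
    gt = np.asarray(gt_mask, dtype=bool)
    tp = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * tp / denom if denom else 0.0

def percent_above(scores, threshold):
    """Percentage of per-cell scores >= threshold (one point on the curve)."""
    scores = np.asarray(scores, dtype=float)
    return 100.0 * (scores >= threshold).mean()

# toy example with four matched cells
per_cell = [0.90, 0.55, 0.70, 0.61]
print(percent_above(per_cell, 0.6))  # prints 75.0
```

Sweeping `threshold` over a grid of F-score values and plotting `percent_above` at each point yields curves of the kind shown in the figure.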