Andong Wang1, Qi Zhang1, Yang Han1, Sean Megason2, Sahand Hormoz2, Kishore R Mosaliganti2, Jacqueline C K Lam3, Victor O K Li4.
Abstract
Cell segmentation plays a crucial role in understanding, diagnosing, and treating diseases. Despite the recent success of deep learning-based cell segmentation methods, it remains challenging to accurately segment densely packed cells in 3D cell membrane images. Existing approaches also require fine-tuning multiple manually selected hyperparameters on the new datasets. We develop a deep learning-based 3D cell segmentation pipeline, 3DCellSeg, to address these challenges. Compared to the existing methods, our approach carries the following novelties: (1) a robust two-stage pipeline, requiring only one hyperparameter; (2) a light-weight deep convolutional neural network (3DCellSegNet) to efficiently output voxel-wise masks; (3) a custom loss function (3DCellSeg Loss) to tackle the clumped cell problem; and (4) an efficient touching area-based clustering algorithm (TASCAN) to separate 3D cells from the foreground masks. Cell segmentation experiments conducted on four different cell datasets show that 3DCellSeg outperforms the baseline models on the ATAS (plant), HMS (animal), and LRP (plant) datasets with an overall accuracy of 95.6%, 76.4%, and 74.7%, respectively, while achieving an accuracy comparable to the baselines on the Ovules (plant) dataset with an overall accuracy of 82.2%. Ablation studies show that the individual improvements in accuracy is attributable to 3DCellSegNet, 3DCellSeg Loss, and TASCAN, with the 3DCellSeg demonstrating robustness across different datasets and cell shapes. Our results suggest that 3DCellSeg can serve a powerful biomedical and clinical tool, such as histo-pathological image analysis, for cancer diagnosis and grading.Entities:
Year: 2022 PMID: 35013443 PMCID: PMC8748745 DOI: 10.1038/s41598-021-04048-3
Source DB: PubMed Journal: Sci Rep ISSN: 2045-2322 Impact factor: 4.379
An overview of the existing deep learning models for 2D/3D cell segmentation.
| Segmentation type | Strategy | Deep learning model for 2D cell segmentation | Deep learning model for 3D cell segmentation | Major drawback |
|---|---|---|---|---|
| Semantic segmentation | – | U-Net | 3D U-Net | Fails to distinguish different cell instances |
| Instance segmentation | Contour-aware approach | DCAN | U-Net + CRF | Performance is highly dependent on manually selected parameters during post-processing; prone to fuse cells that are tightly adhered |
| Instance segmentation | Object-detection-based | RetinaNet | Retina-UNet | Suffers from a severe imbalance between the number of positive and negative anchor boxes; may fail to discern objects that are poorly approximated by bounding boxes |
| Instance segmentation | Other strategies | GAN | StarDist 3D | Less accurate than the two mainstream strategies above; many of these models are based on specific assumptions; the training process of GAN networks is highly complex, especially on 3D datasets |
Figure 1. 3DCellSeg: A two-stage light-weight, fast, and robust pipeline for 3D cell segmentation. [Note: There are two stages in the pipeline. The first stage is semantic segmentation, where the input is a 3D cell membrane image and the output consists of three masks indicating whether each voxel is cell foreground, membrane, or background. The second stage is instance segmentation performed on the basis of these three masks. The cellular images and segmentation results were generated by Python Matplotlib (https://matplotlib.org) using the HMS dataset].
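The first stage described above assigns each voxel one of three classes. A minimal sketch of how the three masks could be recovered from a network's per-class score volume is shown below; the function name `split_masks` and the class ordering (background, membrane, foreground) are my own assumptions for illustration, not from the paper.

```python
import numpy as np

def split_masks(logits):
    """Split stage-1 per-voxel class scores into three boolean masks.

    logits: array of shape (3, D, H, W), one score volume per class.
    Class ordering (background=0, membrane=1, foreground=2) is assumed.
    """
    cls = np.argmax(logits, axis=0)  # per-voxel winning class
    background = cls == 0
    membrane = cls == 1
    foreground = cls == 2
    return foreground, membrane, background
```

Because the masks come from a single argmax, they are mutually exclusive and jointly cover the whole volume, which is what the second (instance) stage relies on.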
Key novelties of 3DCellSeg.
| Aspect | Novelty |
|---|---|
| Network | Based on the characteristics of cell membrane images, a light-weight network, 3DCellSegNet, is designed to yield fast inference speed while achieving accuracy comparable or superior to existing cutting-edge approaches |
| Loss function | A new loss function, 3DCellSeg Loss, is proposed to tackle the clumped cell problem |
| Post-processing | Inspired by DBSCAN (Density-based Spatial Clustering of Applications with Noise), a touching area-based clustering algorithm (TASCAN) is proposed to separate 3D cells from the foreground masks |
| Model usability | 3DCellSeg pipeline is robust, easy to fine-tune, and outperforms existing cutting-edge methods across different experimental datasets |
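The paper does not spell out TASCAN's steps in this excerpt, so the following is only a speculative sketch of a touching area-based clustering step: label foreground fragments, measure the contact area between fragments facing each other across a one-voxel membrane gap, and merge pairs whose contact area passes a single threshold (plausibly the pipeline's "only one hyperparameter"). The function name `tascan` and all details are my assumptions.

```python
import numpy as np
from scipy import ndimage

def tascan(foreground, min_touch_area):
    """Illustrative touching-area-based clustering (not the paper's exact TASCAN).

    foreground: 3D boolean array of predicted cell-interior voxels.
    min_touch_area: minimum contact area (in voxel pairs) for two
    fragments to be merged into one cell.
    """
    labels, n = ndimage.label(foreground)  # 6-connected fragments

    # Count "touching area": pairs of fragment voxels facing each other
    # across a one-voxel gap (the membrane) along any axis.
    touch = {}
    for axis in range(3):
        size = labels.shape[axis]
        near = np.take(labels, np.arange(size - 2), axis=axis)
        far = np.take(labels, np.arange(2, size), axis=axis)
        mask = (near > 0) & (far > 0) & (near != far)
        for p, q in zip(near[mask].ravel(), far[mask].ravel()):
            key = (min(p, q), max(p, q))
            touch[key] = touch.get(key, 0) + 1

    # Union-find: merge fragments whose touching area passes the threshold.
    parent = list(range(n + 1))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for (p, q), area in touch.items():
        if area >= min_touch_area:
            parent[find(p)] = find(q)

    # Relabel so merged fragments share one cell id.
    mapping = np.array([find(i) for i in range(n + 1)])
    return mapping[labels]
```

A single area threshold like this is one way a pipeline could get by with one hyperparameter, in contrast to watershed-style post-processing that needs several.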
Figure 2. Model comparison and representative slices. [Note: (a) shows the accuracies of different cell segmentation models on the HMS dataset. 3DCellSeg achieves the second-best accuracy in ARE, VOIsplit, and VOImerge, and the best accuracy in Avg JI, JI > 70%, and JI > 50% (the plots for DSC-related metrics are highly similar to the JI-related ones). (b) and (c) show representative slices segmented by different models. ACME tends to under-segment (see the dark green region, where different cells are mis-classified as one cell) while U-Net + SWS tends to over-segment (see the over-segmented small cells in the central region). PanopticFCN, Mask R-CNN FPN, and Mask R-CNN C4 are accurate on the HMS dataset but severely under-segment on the ATAS dataset. The cellular images in (b) and (c) were generated by Python Matplotlib (https://matplotlib.org) using the HMS and ATAS[49] datasets].
Figure 3. 3DCellSeg performance on the ATAS, LRP, and Ovules datasets. [Note: Different cell instances were randomly assigned different colors. The LRP dataset images are annotated: the yellow circle shows where 3DCellSeg has made a mistake, and the green circle shows that 3DCellSeg can segment cells that were not labelled in the ground truth. The cellular images were generated by Python Matplotlib (https://matplotlib.org) using the ATAS[49], LRP[50], and Ovules[51] datasets].
Comparison of model performance on the HMS, ATAS, LRP, and Ovules datasets.
[Note: For the HMS dataset, U-Net + SWS, U-Net + GASP, U-Net + MultiCut, and U-Net + MutexWS were retrained with default hyperparameters and compared with our 3DCellSeg. For the ATAS dataset, U-Net + SWS, which was originally developed, trained, and fine-tuned on ATAS, was compared with 3DCellSeg. For the LRP and Ovules datasets, U-Net + GASP, U-Net + MultiCut, and U-Net + MutexWS, which were originally built, trained, and fine-tuned on LRP and Ovules, were compared with 3DCellSeg. Object-detection-based instance segmentation methods (PanopticFCN, Mask R-CNN FPN, and Mask R-CNN C4) trained on 2D slices of the HMS, ATAS, LRP, and Ovules datasets were also taken as baselines for model comparison. ARE, VOIsplit, VOImerge, JI-related, and DSC-related metrics were calculated in 3D space for 3DCellSeg, ACME, U-Net + SWS, U-Net + GASP, U-Net + MultiCut, and U-Net + MutexWS, and on 2D slices for PanopticFCN, Mask R-CNN FPN, and Mask R-CNN C4].
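The metrics above include Avg JI and threshold counts such as "JI > 70%". A hedged sketch of how such per-cell Jaccard metrics are commonly computed follows; the matching convention (each ground-truth cell paired with the predicted cell it overlaps most) and the name `ji_metrics` are my assumptions, and the paper may define the matching differently.

```python
import numpy as np

def ji_metrics(pred, gt, thresholds=(0.5, 0.7)):
    """Per-cell Jaccard index (JI) metrics for instance label volumes.

    pred, gt: integer label volumes, 0 = background, each positive
    label one cell instance. Each ground-truth cell is matched to the
    predicted cell with which it overlaps most (assumed convention).
    Returns (average JI, {threshold: fraction of cells with JI > threshold}).
    """
    scores = []
    for g in np.unique(gt):
        if g == 0:
            continue
        gmask = gt == g
        overlap = pred[gmask]
        overlap = overlap[overlap > 0]
        if overlap.size == 0:          # cell entirely missed
            scores.append(0.0)
            continue
        p = np.bincount(overlap).argmax()  # best-overlapping predicted cell
        pmask = pred == p
        inter = np.logical_and(gmask, pmask).sum()
        union = np.logical_or(gmask, pmask).sum()
        scores.append(inter / union)
    scores = np.asarray(scores)
    return scores.mean(), {t: (scores > t).mean() for t in thresholds}
```

The Dice similarity coefficient (DSC) variants of these metrics substitute 2·intersection / (|pred| + |gt|) for the Jaccard ratio.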
Ablation studies on 3DCellSeg Loss, 3DCellSegNet, TASCAN, and transfer learning.
Figure 4. The structure of 3DCellSegNet. [Note: The extra voxels on the edges of the feature maps are removed after each deconvolution operation, to ensure that the size of each up-sampled feature map is identical to that of the corresponding down-sampled feature map].
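The cropping mentioned in the Figure 4 note arises because a transposed convolution can produce a map slightly larger than the encoder map it must be concatenated with (for a stride-2, kernel-2 deconvolution, output length = (n − 1)·2 + 2, which exceeds an odd-sized encoder map by one voxel). A minimal sketch is below; trimming from the trailing edges and the names `deconv_output_size`/`crop_to_match` are my assumptions, since the paper only states that extra edge voxels are removed.

```python
import numpy as np

def deconv_output_size(n_in, kernel=2, stride=2):
    # Transposed-convolution output length with no padding:
    # (n_in - 1) * stride + kernel
    return (n_in - 1) * stride + kernel

def crop_to_match(upsampled, reference):
    """Trim extra voxels from the trailing edges of an up-sampled feature
    map so its shape matches the corresponding encoder (skip) map."""
    return upsampled[tuple(slice(0, s) for s in reference.shape)]
```

For example, an encoder map of size 9 is down-sampled to 5; up-sampling 5 yields 10, so one edge voxel per axis must be cropped before the skip connection can be concatenated.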
Figure 5. Addressing the clumped cell problem using 3DCellSeg Loss. [Note: (a) shows how different values of the loss parameter affect the replacement term. (b) A 2D slice of a simulation illustrating the difference between Dice Loss and 3DCellSeg Loss. The simulation results in (b) were generated by Python Matplotlib (https://matplotlib.org)].
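For reference, the baseline that Figure 5 compares against is the standard soft Dice loss, sketched below. The exact term that 3DCellSeg Loss replaces, and its parameter, are given in the paper and not reproduced here; `dice_loss` is my own illustrative implementation.

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss over a 3D probability map.

    pred: predicted foreground probabilities in [0, 1].
    target: binary ground-truth mask.
    eps guards against division by zero on empty masks.
    """
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)
```

The loss is 0 for a perfect prediction and approaches 1 when prediction and target are disjoint; a loss modified to penalize membrane-voxel errors more heavily is one plausible way to discourage fusing clumped cells.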
Figure 6. TASCAN algorithm for cell clustering.