
Nuclei segmentation of HE stained histopathological images based on feature global delivery connection network.

Peng Shi1,2, Jing Zhong3, Liyan Lin4, Lin Lin5, Huachang Li1,2, Chongshu Wu1,2.   

Abstract

The analysis of pathological images, such as cell counting and nuclear morphological measurement, is an essential part of clinical histopathology research. Owing to the diversity of uncertain cell boundaries after staining, automated nuclei segmentation of Hematoxylin-Eosin (HE) stained pathological images remains challenging. Although machine learning based segmentation strategies outperform most classic image processing methods, the majority of them still require manual labeling, which restricts further improvements in efficiency and accuracy. Aiming at the requirements of stable and efficient high-throughput pathological image analysis, an automated Feature Global Delivery Connection Network (FGDC-net) is proposed for nuclei segmentation of HE stained images. First, training sample patches and their corresponding asymmetric labels are automatically generated based on a Full Mixup strategy from RGB to HSV color space. Second, the FGDC module is designed by removing the skip connections between encoder and decoder commonly used in UNet-based segmentation networks and instead adding connections between adjacent layers; it learns the relationships between channels in each layer and passes information selectively, thereby achieving feature selection. Finally, a dynamic training strategy based on a mixed loss with flexible epochs is used to increase the generalization capability of the model. The proposed improvements were verified by ablation experiments on multiple open databases and our own clinical meningioma dataset. Experimental results on multiple datasets showed that FGDC-net can effectively improve the segmentation performance on HE stained pathological images without manual intervention, and provides valuable references for clinical pathological analysis.


Year:  2022        PMID: 36107930      PMCID: PMC9477331          DOI: 10.1371/journal.pone.0273682

Source DB:  PubMed          Journal:  PLoS One        ISSN: 1932-6203            Impact factor:   3.752


Introduction

As one of the gold standards in the clinic, analysis of stained tissue section images plays an important role in histopathological diagnosis [1]. Hematoxylin-Eosin (HE) staining is the most commonly used technique for pathological paraffin sections, especially in the analysis of tumor tissue microscopic images [1, 2], in which the nucleus is stained blue-purple by alkaline hematoxylin, while the cytoplasm is stained red by acidic eosin. As the characteristic changes of normal cells after carcinogenesis are mostly reflected in the nuclei, statistics on the number, size, morphology and other indicators of nuclei can be used for cancer grading [3], which is critical for formulating treatment plans for patients [4]. In most pathological image processing workflows, cell segmentation and quantification are necessary steps to obtain precise cell structures and distributions, and eventually to quantify statistical results for final identification [5]. However, HE stained images often show no obvious color differences or clear borders between the different parts of numerous cells, which complicates manual or automated pathological image analysis in practice [6]. Meanwhile, the colors of HE stained images may differ across acquisition conditions such as time, light and contrast [7], and different diseases, organs, positions and stages also present quite different cell morphologies in acquired pathological images [8]. Therefore, efficiently constructing a common and representative feature space of cell morphologies becomes the main concern of automated HE stained image segmentation. As shown in Fig 1, HE-stained pathological images vary greatly in organs and imaging quality, which calls for a general automatic segmentation strategy in practice.
Many previous cell segmentation methods were based on traditional image processing, such as threshold determination [10], contour evolution models [11] and seed point marking [12], which locate cell boundaries by iterating pre-set features until certain termination conditions are met. With the development of machine learning, classic learning-based methods such as K-means clustering [13], fuzzy C-means clustering [14] and the Support Vector Machine (SVM) [15] have been applied to image segmentation, in which pixels or small patches are classified into different categories. Due to the diversity of nuclear morphology, the inhomogeneity of staining and the variability of dye quality, machine learning based methods always rely on well-designed local feature sets; they struggle to include highly representative features and lack neighborhood receptive fields. Meanwhile, manual labeling is essential in supervised learning methods, which cannot meet clinical efficiency requirements, and for unsupervised learning, the stability of automatic pixelwise sample selection remains a challenging training step because of the diversities discussed above.
Fig 1

Sample HE stained images from different organs in MoNuSeg [9] dataset, in which high quality stained images are shown in the first row, and the second row contains low quality samples.

With their high precision and automatic extraction of deep features, Convolutional Neural Networks (CNNs) are widely used in current image segmentation, including pathological analysis. Based on the main structure of a CNN, the Fully Convolutional Network (FCN) [16] uses deconvolution layers to upsample the convolutional feature maps and restore them to the size of the input image, predicting the category of each pixel to complete the segmentation. As an improvement over FCN, UNet [17] directly passes the high-resolution classification features produced during convolution to the upsampling path as supplements, which greatly improves resolution in the image restoration stage. To enhance the feature expression ability for pathological images, researchers have proposed multiple approaches, including residual modules, multi-scale feature extraction modules, attention mechanisms and multi-model combinations. Li et al. [18] added a cascade residual fusion module to the decoder stage of UNet to improve detection performance during decoding. Zeng et al. [19] proposed RIC-UNet, which adds optimizations such as a residual module, multi-scale perception and an attention mechanism to segment nuclei more accurately. Pan et al. [20] proposed AS-UNet with atrous depthwise separable convolution, in which atrous convolution modules are combined in cascade and in parallel; it can extract and combine multi-scale features and better perceives larger or smaller nuclei. Wan et al. [21] used an improved atrous spatial pyramid pooling UNet (ASPP-UNet) to capture multi-scale nuclear features and obtain their context information without reducing the spatial resolution of feature maps. Saha et al. [22] also added spatial pyramid pooling and trapezoidal long short-term memory modules into the UNet network to obtain Her2Net, which retains more encoder information in the decoder.
Some researchers have used multi-branching and multi-model stacking to improve nuclear segmentation based on the UNet structure. Navid et al. [23] proposed a spatial awareness network (SpaNet) to capture spatial information in a multi-scale manner, with double-headed and single-headed structures designed to predict nucleus pixels and their centroids. Zhao et al. [24] decomposed HE stained images and constructed a Triple UNet with an RGB branch, an HE branch and a segmentation branch; the features extracted from the RGB and HE branches are fused into the segmentation branch to learn better representations. Kang et al. [25] designed a two-stage learning framework by stacking two UNets and added nuclear boundary prediction to turn the original binary segmentation task into a two-step task, in which the first step estimates the nucleus and its rough boundary, and the second step outputs the final fine segmentation result. In addition, Pan et al. [26] adopted a sparse reconstruction method when building the training set to initially remove the background and highlight the nuclear regions. Considering that nuclei are stained blue-purple, the cytoplasm is stained red, and the unstained Extra Cellular Space (ECS) appears white, the difficulty of pathological image segmentation lies not in recognizing the boundary between the cell body and the ECS, but in recognizing the boundary between the nucleus and the cytoplasm. Therefore, segmentation model design should fully consider the recognition of nucleus boundaries. Moreover, convolution struggles to capture global features of the input image because of its limited receptive field, and it cannot capture the relationships between feature channels of the same layer due to its singleness and locality.
Therefore, a proper extension of receptive fields and information transfer between adjacent layers need to be considered when designing network architectures. In this paper, we propose a fully automated processing pipeline to analyze HE stained images without human intervention. The main contributions of our unsupervised approach include a Full Mixup training sample generation strategy based on asymmetric labels and the HSV color transform, a dynamic training workflow, and a newly designed FGDC network architecture with properly extended receptive fields and information transfer between adjacent layers, which identifies the boundaries between nucleus and cytoplasm accurately and effectively. The rest of the paper is organized as follows. Section 2 first describes the HE stained images from separate sources, and then the training sample and label generation, module design, and dynamic training of FGDC-net. Section 3 presents comparative experimental results showing the improvements of the proposed methods, covering both quantitative and qualitative evaluations on multiple datasets. Finally, conclusions and further improvements are discussed in the last section.

Materials and methods

Public and own clinical datasets

To deal with the diversities of HE stained images more efficiently and robustly, three datasets were used in the training and testing of the proposed framework: two open multi-organ datasets, Kumar [8] and MoNuSeg [9], and HE stained images of meningioma from our own clinical research [27].

Kumar and MoNuSeg datasets

Two datasets were publicly released for testing algorithms that accurately segment nuclei: the Kumar dataset consists of 30 HE stained pathological images acquired from 18 different hospitals, and the MoNuSeg dataset was downloaded from the Medical Image Computing and Computer Assisted Intervention (MICCAI) 2018 Multi-Organ Pathological Image Nuclear Segmentation Challenge. The distribution over organs and the training/testing division of the two datasets are shown in Table 1.
Table 1

Sample images from multiple datasets for image segmentation.

|         |       | Breast | Liver | Kidney | Prostate | Bladder | Colon | Stomach | Brain | Lung | Meningioma High | Meningioma Low |
|---------|-------|--------|-------|--------|----------|---------|-------|---------|-------|------|-----------------|----------------|
| Kumar   | Train | 4      | 4     | 4      | 4        | 0       | 0     | 0       | --    | --   | --              | --             |
|         | Test  | 2      | 2     | 2      | 2        | 2       | 2     | 2       | --    | --   | --              | --             |
| MoNuSeg | Train | 6      | 6     | 6      | 6        | 2       | 2     | 2       | 0     | 0    | --              | --             |
|         | Test  | 2      | 0     | 3      | 2        | 2       | 1     | 0       | 2     | 2    | --              | --             |
| Ours    | Train | --     | --    | --     | --       | --      | --    | --      | --    | --   | 20              | 20             |
|         | Test  | --     | --    | --     | --       | --      | --    | --      | --    | --   | 10              | 10             |

Own clinical meningiomas dataset

Meningioma carries the second highest brain cancer risk; its overall incidence increased by 4.6% annually during 2004–2009 and has remained stable since then [28]. The World Health Organization (WHO) provides a three-grade scale: (I) benign, (II) atypical, and (III) anaplastic or malignant meningioma, in which grades II and III are collectively known as high-grade meningioma [29]. The tissue samples collected in the experiments were histological sections of high-grade and low-grade meningiomas, all from clinical cases of Fujian Medical University Union Hospital (Ethical approval No. 2019KJTYL024). In the staining and image acquisition stage, the macroscopic tissue sections were cut to obtain microscopic tissue sections, and HE staining was performed using standard histological methods. To ensure the comprehensiveness and diversity of the original images, images from multiple clinical cases were included in the training process. First, 60 cases of meningioma were blindly chosen at the patient level, 30 high-grade and 30 low-grade. Second, one original image was randomly selected from each case, giving 60 original images in our own dataset. Then, 10 images from each grade were randomly selected as the test set, for a total of 20 test images. Finally, the remaining 40 images, half high-grade and half low-grade, formed the training set. The ratio between the training and test sets is 2:1, and the image quantity distribution of the final dataset is also shown in Table 1.

Golden standards generation of labeling

In practice, one 1536×2048 meningioma pathological image contains on average about 800 nuclei that need to be labeled, so a total of about 16,000 nuclei had to be labeled in 20 training images. In clinical medical analysis, the MaZda [30] software (V4.6) is often used to label nuclei inside Regions of Interest (ROIs) in pathological images. After labeling, the results were sent to a pathologist, who evaluated the labeling quality and flagged unreasonable labels for correction. This correction-feedback process continued until the rate of mislabeled nuclear pixels in each image was less than 5%, an empirical value suggested by our pathologists that makes the labeling results dependable as gold standards for the subsequent training and validation.

Automatic generation of training sample patches and labels

Binary label maps are essential for training image segmentation networks; small patches containing nuclei are randomly selected from the original images to generate their corresponding pseudo labels. In our previous research, we proposed a reliable unsupervised training sample generation method based on K-means clustering results, and a plain Full Mixup strategy that enhances the training sets by adding up two patches and their label maps [27].

Construction of patches and corresponding pseudo labels

Nevertheless, because the image patches of the training samples are too small to provide a sufficient local receptive field, it is difficult to determine the pixel categories when a selected image patch falls entirely within one type of cell structure. Simply increasing the patch size would also greatly reduce the number of selectable samples. To solve this, the receptive field of a training sample patch is enlarged by padding around its original extent, while the size of the label map remains unchanged. As shown in Fig 2, eligible label patches are first selected, marked as green boxes in the binary pseudo label map generated by clustering the original image pixels, where the three colors in the clustering map represent different tissue structures: nuclei, cytoplasm and ECS. Then the expanded image blocks after padding are captured as training samples, marked as red boxes in the original image. The padding is set based on the label patch size of N×N, so there is a certain size difference between a training sample and its label patch. Since the training process is guided by the labels, the asymmetric model of FGDC-net focuses its attention on the central region of the sample corresponding to its label, which has an effect similar to an attention mechanism.
Fig 2

Construction of sample patches and corresponding pseudo labels for training.
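The patch construction above can be sketched in a few lines. This is a minimal illustration, not the authors' code: a padded sample is taken as 2N×2N around each N×N label patch (consistent with the "four times larger" training patch described later), and the `fg_ratio` selection threshold is an illustrative assumption, since the paper does not state its eligibility criterion numerically.

```python
import numpy as np

def extract_padded_samples(image, pseudo_label, n=48, stride=48, fg_ratio=0.1):
    """Select N x N label patches with enough nucleus pixels, and capture the
    2N x 2N padded image region centred on each patch as the training sample.
    `fg_ratio` is an assumed selection threshold, not a value from the paper."""
    h, w = pseudo_label.shape
    pad = n // 2                                  # N/2 padding on each side
    padded = np.pad(image, ((pad, pad), (pad, pad), (0, 0)), mode="reflect")
    samples, labels = [], []
    for y in range(0, h - n + 1, stride):
        for x in range(0, w - n + 1, stride):
            lab = pseudo_label[y:y + n, x:x + n]
            if lab.mean() < fg_ratio:             # skip patches with few nuclei
                continue
            # the 2N x 2N sample in padded coordinates, centred on the label
            samples.append(padded[y:y + 2 * n, x:x + 2 * n])
            labels.append(lab)
    return samples, labels
```

Each returned sample is four times the area of its label map, reproducing the asymmetry between the red (sample) and green (label) boxes of Fig 2.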

Mixup of image patches in HSV color spaces

Since the training sets we construct are composed of the best-stained nuclei from different images, the lack of lightly stained nuclei results in weak generalization when dealing with datasets from different sources. In our previous work, a plain Full Mixup strategy addressed this by merging images with different weights and directly superimposing their label maps. However, when the images come from hospitals with different staining conditions, large image variations usually cause big differences between the mixed images and normal pathological images. For example, in Fig 3 the images in columns one and two come from different sources, and the mixed image has very light blue nuclei in the zoomed ROI of the mixed patch in column three, which may cause missed detections in the subsequent segmentation. Common normalization cannot solve this problem, as shown in the right-most histograms of blue nucleus pixels, whose mixing results are quite similar to the originals. The HSV color space [31] is organized according to the intuitive features of color: Hue, Saturation and Value. Unlike the three channels of the RGB color space, HSV uses only Hue to control the variety of colors, so the range of color distribution changes after mixing is much smaller in HSV space than in RGB space. After normalization, the RGB images are first converted into HSV space, as illustrated in the third row of Fig 3, and the blended HSV image is then transformed back to RGB, as the zoomed image in the same row shows. As the blue-color distribution histograms of nuclei illustrate, the blue nucleus pixels are more concentrated in the HSV-mixed image, suggesting that HSV-based Full Mixup generates high-quality image patches for training the network. Comparisons between the different mixing strategies are shown in Fig 3.
Fig 3

Comparison between mixing based on RGB and HSV space.
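A minimal sketch of the HSV-based Full Mixup, using matplotlib's color-space converters. The equal-weight blend and logical-OR superposition of binary labels follow the description above; blending the hue channel linearly (ignoring hue wraparound) is a simplifying assumption of this sketch.

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

def full_mixup_hsv(img_a, img_b, lam=0.5):
    """Full Mixup in HSV space: convert both RGB patches (floats in [0, 1])
    to HSV, blend them with weight `lam`, then convert back to RGB."""
    hsv = lam * rgb_to_hsv(img_a) + (1.0 - lam) * rgb_to_hsv(img_b)
    return hsv_to_rgb(hsv)

def mixup_labels(lab_a, lab_b):
    """Binary label maps are directly superimposed (logical OR)."""
    return np.maximum(lab_a, lab_b)
```

Mixing in RGB instead would simply replace the two conversions with a direct weighted sum of the RGB arrays; the HSV route keeps the blended hue closer to the originals, which is the point of Fig 3.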

Structure of feature global Delivery Connection Network

After the generation of training samples and labels, the segmentation network structure is also improved with the idea of an attention mechanism. As shown in Fig 4, an asymmetric segmentation network named FGDC-net is proposed, matched to the size of the automatically captured image sample patches of the training set. As discussed above, a training patch is four times larger than its corresponding label map. Therefore, after passing through FGDC-net, the output segmentation result covers only the central region of the input image patch, which has the same size as the corresponding pseudo label. The input (amber) and output (green) layers of FGDC-net thus differ in size, which increases the local receptive field of the central region to be segmented. In addition, FGDC-net abandons the skip connections between encoder and decoder commonly used in UNet structures and uses FGDC modules instead, which learn the relationships between feature channels at each layer and pass information selectively, illustrated as information-flow arrows in Fig 4.
Fig 4

Workflow of image segmentation through FGDC-net.

FGDC module

The convolution operation struggles to obtain global features of the image because of its limited receptive field, and it cannot capture the relationships between feature channels of the same layer due to its singleness and locality. To compensate for these disadvantages of the convolution operation and increase the connections between adjacent layers, the FGDC module is designed to use sigmoid-like gating to assign weights to the intra-layer feature channels of each layer and thereby screen features; Fig 5 illustrates the implementation details.
Fig 5

Information transmission flows in FGDC modules.

In Fig 5, three continuous FGDC modules in a fragment of the network are marked as dark and light blue blocks. Assuming F^l is the input feature map of size H×W×C and S^(l−1) is the output of the FGDC module in the (l−1)-th layer, the average pooling (AP) and the input information i^l of the l-th layer are calculated as

AP_c^l = (1 / (H × W)) Σ_{x=1}^{H} Σ_{y=1}^{W} F_c^l(x, y),
AP^l = [AP_1^l, AP_2^l, …, AP_C^l],
i^l = σ(K_d * AP^l + b),

where F_c^l and AP_c^l are the feature map and the average pooling of the c-th channel in the l-th layer respectively, so AP^l is the total average pooling of the whole l-th layer; * denotes convolution, K is a 1×1 convolution kernel, d is the number of kernels, b is the bias, and σ is a sigmoid activation function. The gating based on S^(l−1) transmitted from the (l−1)-th to the l-th layer is

g^l = σ(K_d * S^(l−1) + b).

Note that d = 2C in encoding (with the corresponding value in decoding), and the output S^l of the l-th layer is 1 × 1 × C in size, which is the product of g^l and i^l after gating:

S^l = g^l ⊙ i^l.

As the information transmission flows show, the FGDC module first performs gated mapping on the output S^(l−1) of the upper layer to obtain weights whose dimension matches the current layer while screening the features. Second, the feature map F^l of the current layer is integrated and mapped to obtain the inter-layer information i^l. Then the upper-layer and current-layer information are integrated as the input to the ResBlock module; this input contains the feature weights S^l of the l-th layer and is used as the control information for the feature results obtained by convolution in this layer.
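The FGDC computation can be sketched directly from its definitions. Since the 1×1 convolutions act on 1×1×C vectors, they reduce to matrix-vector products here; the weight shapes (`K_i`, `K_g`, biases) are illustrative assumptions of this sketch, not the paper's trained parameters.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgdc_module(F, S_prev, K_i, b_i, K_g, b_g):
    """Sketch of one FGDC step, following the paper's notation.
    F      : H x W x C feature map of the l-th layer
    S_prev : vector passed from the (l-1)-th FGDC module
    K_i/K_g: (d x C) and (d x C_prev) weights of the two 1x1 convolutions."""
    ap = F.mean(axis=(0, 1))            # channel-wise average pooling, length C
    i = sigmoid(K_i @ ap + b_i)         # current-layer information i^l
    g = sigmoid(K_g @ S_prev + b_g)     # gate g^l from the previous layer's S
    return g * i                        # gated output S^l (elementwise product)
```

The gate `g` rescales each channel of the pooled layer information, which is the feature-screening role the text assigns to the sigmoid-like gating.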

ResBlock module

As shown in Fig 6, the ResBlock module [32] integrates the output information S of the FGDC module with the feature map F extracted from the l-th layer. The residual calculation of Eq (5) is then conducted, in which R represents the residual result and O is the output of the ResBlock module.
Fig 6

Details of ResBlock module.

It should be noted that K_1 and K_2 are different convolution kernels of size 3×3, which are used in the two different convolution processes in Eq (5).
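One plausible reading of the ResBlock can be sketched as follows; the exact placement of the gating by S is an assumption (the paper only states that S and F are integrated), and the two 3×3 kernels correspond to K_1 and K_2.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def conv3x3(x, k):
    """'Same' 3x3 convolution of a single-channel map via a sliding window."""
    xp = np.pad(x, 1)
    win = sliding_window_view(xp, (3, 3))
    return np.einsum("ijkl,kl->ij", win, k)

def resblock(F, S, K1, K2):
    """Sketch of the ResBlock: two 3x3 convolutions with a ReLU in between
    form the residual R, which is modulated by the FGDC output S and added
    back to F.  The modulation-by-S placement is an assumption."""
    R = conv3x3(np.maximum(conv3x3(F, K1), 0.0), K2)
    return F + S * R
```

With zero kernels the block degenerates to the identity skip connection, which is the defining property of a residual module.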

Optimization of training strategy

Besides the above improvements in building the training set and the network structure, the training strategy of the proposed network is optimized in two main aspects: mixed loss functions and dynamic training with flexible epochs.

Combined loss functions

To be suitable for the smaller intact or partial nuclei in the training image patches, the Binary Cross Entropy (BCE) loss and the Dice loss are combined as the optimization objective:

L_mix = L_BCE + L_Dice,
L_BCE = −(1/N) Σ_{n=1}^{N} Σ_l [ y log x + (1 − y) log(1 − x) ],
L_Dice = 1 − 2 Σ x y / ( Σ x + Σ y ),

where N is the total batch size and n indexes the n-th batch in the training phase, l indexes a training sample in each batch, y is its corresponding label, and x is the predicted probability of the category it belongs to, within the range between 0 and 1 for non-nuclei or nuclei areas.
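As a sketch, the combined objective can be implemented directly from the definitions; the equal 1:1 weighting of the two terms and the smoothing constant `eps` are assumptions of this sketch.

```python
import numpy as np

def bce_dice_loss(x, y, eps=1e-7):
    """Mixed training loss: Binary Cross Entropy plus Dice loss.
    x : predicted nucleus probabilities in (0, 1); y : binary labels.
    The 1:1 weighting and the `eps` smoothing term are assumptions."""
    x = np.clip(x, eps, 1.0 - eps)                      # numerical stability
    bce = -np.mean(y * np.log(x) + (1.0 - y) * np.log(1.0 - x))
    dice = 1.0 - (2.0 * np.sum(x * y) + eps) / (np.sum(x) + np.sum(y) + eps)
    return bce + dice
```

The Dice term rewards overlap regardless of nucleus size, which is why it helps with the smaller intact or partial nuclei mentioned above, while BCE supervises each pixel independently.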

Dynamic training

To further enhance the generalization ability of the model, a dynamic training strategy with flexible epochs was proposed in our previous research, in which the algorithm dynamically modifies the probability of Full Mixup using feedback from both the Jaccard Similarity (JS) and Dice Coefficient (DC) indexes on the validation set, and the number of epochs is determined by the increase of the Full Mixup probability. During dynamic training, if the model predicts unmixed images better than mixed images, the proportion of mixed images in the training set is increased by raising the mixing probability, and vice versa. Then, owing to the randomness of the mixed images, the Full Mixup probability gradually increases toward a threshold, at which point the pre-defined training epochs are interrupted; this lets the model adaptively allocate the number of mixed samples according to the validation set indexes. The probability-based adjustment learns the features of both mixed and unmixed image patches, and makes the model more flexible to the diversity of input pathological images acquired under various conditions.
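The feedback loop above can be sketched as follows. The adjustment step `delta`, the stopping threshold `p_max`, and the way the JS and DC scores are combined inside `evaluate` are assumptions of this sketch; `model_step` and `evaluate` stand for the actual training and validation routines.

```python
def dynamic_training(model_step, evaluate, p_mix=0.5, p_max=0.9,
                     delta=0.05, max_epochs=400):
    """Sketch of the dynamic training strategy: after each epoch, compare
    validation scores (e.g. JS + DC) on unmixed vs mixed images and adjust
    the Full Mixup probability; stop early once it reaches a threshold.
    `delta` and `p_max` are illustrative assumptions."""
    for epoch in range(max_epochs):
        model_step(p_mix)                      # train one epoch, mixing with prob p_mix
        score_plain, score_mixed = evaluate()  # combined index on unmixed / mixed sets
        if score_plain > score_mixed:
            p_mix = min(p_mix + delta, 1.0)    # weaker on mixed images: mix more
        else:
            p_mix = max(p_mix - delta, 0.0)    # and vice versa
        if p_mix >= p_max:                     # flexible epoch: interrupt training
            return epoch + 1, p_mix
    return max_epochs, p_mix
```

When the model consistently handles unmixed images better, the mixing probability climbs until it crosses `p_max` and training stops early, which is the "flexible epoch" behavior described above.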

Results and discussions

Building of the training datasets

To enhance the generalization ability of the proposed network, diversified training and testing datasets from multiple sources were built and divided. For the Kumar dataset, the training and test sets were separated according to the literature [8]. For the MoNuSeg dataset, the original division of the segmentation challenge was followed [9]. We also split our own clinical Meningioma dataset into training and testing sets at a 2:1 ratio, and the details are shown in Table 2.
Table 2

Division of original images for training and testing datasets.

| Dataset    | Size      | Training | Testing |
|------------|-----------|----------|---------|
| Kumar      | 1000×1000 | 16       | 14      |
| MoNuSeg    | 1000×1000 | 30       | 14      |
| Meningioma | 1536×2048 | 40       | 20      |
In order to minimize the influence of uneven staining and light sources, and to obtain more available training samples automatically, we randomly captured 32 sub-images of size 500×500 from each original image for training sample selection, and allocated training and validation sets at a ratio of 2:1. The numbers of finally selected samples from each dataset are shown in Table 3.
Table 3

Numbers of selected sample patches for training, validation and testing.

| Dataset    | Total   | Training | Validation | Testing |
|------------|---------|----------|------------|---------|
| Kumar      | 12,000+ | 8,000+   | 4,000+     | 23,534  |
| MoNuSeg    | 14,000+ | 10,000+  | 4,000+     | 23,534  |
| Meningioma | 17,000+ | 11,000+  | 5,000+     | 110,500 |

Ablation experiments

To verify the effectiveness of the model improvements and training optimizations, the ablation experiments covered two aspects: comparisons with a batch of common segmentation methods, and different levels of training optimizations. On the model structure side, two main improvements need to be verified: first, the feasibility of the feature global delivery connection of the FGDC module compared with the skip connection of the ResBlock module; second, the effectiveness of training sample padding. On the training side, there are three main optimization designs: Full Mixup in HSV space, mixed loss, and the dynamic training strategy. Ablation experiments were first performed on the Meningioma dataset, and Table 4 shows the experimental results for multiple model structures.
Table 4

Ablation experimental results of multiple network structures.

| Purpose                                  | Model            | Patch size | AJI    | Pixel-level F1 |
|------------------------------------------|------------------|------------|--------|----------------|
| Validation of global transfer connection | 3-layer UNet     | 48×48      | 0.4570 | 0.8237         |
|                                          | 3-layer FGDC-net | 48×48      | 0.4886 | 0.8123         |
| Validation of larger sight               | 4-layer UNet     | 48×48      | 0.4732 | 0.7989         |
|                                          | FGDC-net         | 96×96      | 0.5058 | 0.8169         |
Three commonly used image segmentation evaluation indicators are applied for performance evaluation. The pixel-level F1-score is a general evaluation of both precision and sensitivity, where the True Positive (TP), False Positive (FP) and False Negative (FN) counts are determined by whether pixels are classified into the right or wrong predicted categories. Intersection-over-Union (IoU), which is the same as the JS discussed above, and the Aggregated Jaccard Index (AJI), which is based on connected domains and more precise than IoU, are also included. The indexes are defined as

F1 = 2TP / (2TP + FP + FN),
IoU = TP / (TP + FP + FN),
AJI = Σ_i |G_i ∩ P_m(i)| / ( Σ_i |G_i ∪ P_m(i)| + Σ_{j∈U} |P_j| ),

where G_i is the i-th ground-truth nucleus, P_m(i) is its best-matching predicted nucleus, and U is the set of unmatched predicted nuclei.

As can be seen from Table 4, when the number of layers is the same as UNet, the FGDC-net structure is superior to the UNet structure in the AJI index, while the pixel-level F1 differs only slightly. Based on the definition of AJI, this shows that the FGDC-net structure effectively suppresses the FP value, minimizing falsely predicted areas in the segmentation results. In addition, although pixel-level F1 does not change significantly after padding the training images, the AJI index is further improved by FGDC-net. The two major improvements of the network, padding larger training patches and introducing FGDC modules, bring positive effects on the segmentation ability of the model, as shown in the last row of Table 4. To evaluate the effectiveness of the three major training optimizations, ablation experiments based on the FGDC-net model are shown in Table 5, in which BCELoss alone is applied when the mixed loss is not used. Similarly, when dynamic training was not adopted, the number of epochs was fixed at 400 and the mixing probability was set to 0.5.
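The pixel-level indexes can be computed directly from the TP/FP/FN counts; a minimal sketch for binary masks (AJI additionally requires connected-component matching, which is omitted here):

```python
import numpy as np

def pixel_f1_iou(pred, gt):
    """Pixel-level F1 and IoU from TP/FP/FN counts on binary masks.
    Assumes at least one positive pixel exists (no zero-division guard)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.sum(pred & gt)      # correctly predicted nucleus pixels
    fp = np.sum(pred & ~gt)     # falsely predicted nucleus pixels
    fn = np.sum(~pred & gt)     # missed nucleus pixels
    f1 = 2 * tp / (2 * tp + fp + fn)
    iou = tp / (tp + fp + fn)
    return f1, iou
```

Because AJI penalizes every unmatched predicted nucleus in its denominator, it is more sensitive to false-positive regions than these pixel-level indexes, which is why the FP suppression of FGDC-net shows up mainly in AJI.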
Table 5

Ablation experimental results of multiple training strategies.

| HSV Mixup | Mixed loss | Dynamic training | AJI    | Pixel-level F1 |
|-----------|------------|------------------|--------|----------------|
|           |            |                  | 0.4036 | 0.7094         |
| ✓         |            |                  | 0.5093 | 0.8073         |
| ✓         | ✓          |                  | 0.5258 | 0.8211         |
| ✓         | ✓          | ✓                | 0.5675 | 0.8462         |
As shown in Table 5, after the HSV Mixup operation was applied to the training samples, the performance of the model improved significantly, indicating that HSV Mixup helps the network better identify lightly stained nuclei and thus greatly improves segmentation accuracy. Besides, with DiceLoss participating in the mixed loss, the attention of the proposed model to detecting and predicting smaller nuclei is enhanced, which improves both AJI and F1 compared with using BCELoss alone. The last row shows that the dynamic training strategy further improves the generalization ability of the model through flexible Full Mixup probabilities for training patches and the corresponding changes in epochs.

Comparison of results on open datasets

The optimized algorithm with the FGDC-net structure and improved training strategies was further verified on the open-source Kumar and MoNuSeg datasets. A batch of classic methods based on different machine learning strategies, including supervised, weakly supervised and unsupervised learning methods from the literature, are included in Tables 6 and 7 respectively.
Table 6

Comparison experimental results on Kumar dataset.

| Category                   | Method               | AJI    | Pixel-level F1 | IoU    |
|----------------------------|----------------------|--------|----------------|--------|
| Supervised learning        | FCN [33]             | 0.3556 | 0.7809         | —      |
|                            | Mask-RCNN [33]       | 0.5002 | 0.7470         | —      |
|                            | CNN3 [8]             | 0.5083 | 0.7623         | —      |
| Weakly supervised learning | Qu et al. (5%) [34]  | 0.4941 | 0.7540         | —      |
|                            | Pseudo EdgeNet [35]  | —      | —              | 0.6136 |
| Unsupervised learning      | SIFA [36]            | 0.3924 | 0.6880         | —      |
|                            | CyCADA [37]          | 0.4447 | 0.7220         | —      |
|                            | Mihir et al. [38]    | 0.5354 | 0.7477         | —      |
|                            | DDMRL [39]           | 0.4860 | 0.7109         | —      |
|                            | Ours                 | 0.5238 | 0.7655         | 0.6202 |
Table 7

Comparison experimental results on MoNuSeg dataset.

| Category                   | Method              | AJI    | Pixel-level F1 | IoU    |
|----------------------------|---------------------|--------|----------------|--------|
| Supervised learning        | FCN [40]            | 0.3510 | 0.7460         | 0.4935 |
|                            | UNet++ [41]         | —      | 0.7453         | 0.5892 |
|                            | DeepLabv3+ [42]     | —      | 0.7185         | 0.5619 |
|                            | DB-UNet [43]        | —      | 0.7421         | 0.6016 |
|                            | SegNet [44]         | —      | 0.7526         | —      |
| Weakly supervised learning | BoundingBox [45]    | —      | 0.7372         | 0.5839 |
|                            | Self-loop (20%) [46]| —      | 0.7711         | —      |
|                            | SSL (10%) [47]      | 0.5501 | —              | —      |
|                            | Ours                | 0.5512 | 0.7715         | 0.6297 |
Table 6 includes several classic supervised learning models as well as weakly supervised and unsupervised learning models; most current weakly supervised and unsupervised models perform worse than supervised deep learning models. However, due to the scarcity of labeled segmented pathological image samples, weakly supervised and unsupervised models are the inevitable trend of network development. On the Kumar dataset, our algorithm reaches an AJI of 52.38%, better than the previous best, CNN3, at 50.83%. F1 reaches 76.55%, not much different from FCN, Mask-RCNN and CNN3. Although the proposed model belongs to the category of unsupervised learning, our method achieves the best overall performance except for the pixel-level F1 index, which is slightly lower than that of the FCN model. Based on the proposed algorithm, sample segmentation results of different organs from all three datasets are shown in Fig 7, where the original image is shown at the top left of each organ block, three local ROIs are randomly selected as orange, blue and yellow windows from the original image, and the last column in each organ block shows four zoomed areas with detailed nucleus boundaries marked as green lines.
Fig 7

Samples segmentation results of multiple organs from three datasets.

The comparison results of multiple methods on the MoNuSeg dataset are shown in Table 7. The setup is similar to Table 6; the only difference is the current lack of literature on unsupervised learning methods for the MoNuSeg dataset. On MoNuSeg, the pixel-level F1 index of our method reaches 77.15%, higher than classic FCN, DB-UNet and SegNet. The IoU index of our algorithm is also better than the previous best, DB-UNet, by more than two percentage points. The results show that FGDC-net can effectively improve the segmentation of HE stained pathological images, and Fig 7 shows partial segmentation results.

Conclusions

In this paper, a fully automated pipeline based on the Feature Global Delivery Connection Network is proposed to locate precise nuclear boundaries in HE-stained pathological images. To achieve deep learning-based image segmentation in a fully automatic way, the framework is enhanced at each stage of the pipeline: automatic training sample generation, a new segmentation module design, and a flexible training strategy. First, the unsupervised training sample selection method generates extended image patches and their corresponding binary label maps; the size difference between original patches and their labels leads to the asymmetric design of FGDC-net and improves segmentation performance through an attention mechanism. Meanwhile, by mixup of image patches in HSV color space, higher-quality image patches with better nuclei pixel distributions are generated for network training, which further improves the effectiveness of unsupervised deep learning with higher efficiency than supervised methods. Second, the proposed FGDC module abandons the jumping connections between codecs and achieves feature selection by learning the relationships between feature channels at each layer and passing information selectively. The asymmetric design of FGDC-net also forms an attention mechanism by increasing the local receptive field of the central region to be segmented. Third, the probability of mixing different image patches during training is constantly adjusted by a combination of BCE and Dice loss functions, which also affects the number of training epochs. These dynamic training strategies consider both mixed and unmixed samples and further improve the generalization ability of the model.
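A minimal sketch of a combined BCE and Dice loss of the kind described above (NumPy; the fixed weight `alpha` is a hypothetical choice, and the paper's dynamic adjustment of the mixing probability and epochs is not reproduced):

```python
import numpy as np

def bce_loss(p, y, eps=1e-7):
    """Binary cross-entropy between predicted probabilities p and labels y."""
    p = np.clip(p, eps, 1 - eps)
    return float(-np.mean(y * np.log(p) + (1 - y) * np.log(1 - p)))

def dice_loss(p, y, eps=1e-7):
    """Soft Dice loss: 1 minus the Dice overlap of p and y."""
    inter = np.sum(p * y)
    return float(1 - (2 * inter + eps) / (np.sum(p) + np.sum(y) + eps))

def mixed_loss(p, y, alpha=0.5):
    """Weighted sum of BCE and Dice; alpha is a hypothetical fixed weight."""
    return alpha * bce_loss(p, y) + (1 - alpha) * dice_loss(p, y)

p_true = np.array([1.0, 0.0, 1.0])
low = mixed_loss(p_true, p_true)       # near-perfect prediction, near-zero loss
high = mixed_loss(1 - p_true, p_true)  # inverted prediction, large loss
```

BCE supplies dense per-pixel gradients while Dice directly targets region overlap, which is why the two are commonly combined for segmentation with class imbalance.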
Compared with existing state-of-the-art supervised, weakly supervised and unsupervised methods for pathological image segmentation, the proposed method shows better overall performance considering both efficiency and accuracy. Our unsupervised segmentation algorithm requires no human participation in constructing the training set: it replaces the most time- and labor-intensive outlining and labeling steps of traditional deep learning and significantly improves the efficiency of image analysis by automatically generating reliable labels for model training. The optimized method raises the accuracy of the unsupervised approach to 0.7655 and 0.7715 on the publicly available Kumar and MoNuSeg datasets, respectively. This further demonstrates that the presented algorithm significantly improves training sample production efficiency while also improving segmentation accuracy. Meanwhile, FGDC-net is designed to optimize the information transfer between codecs: it filters and integrates shallow information and then controls the importance of each layer's features as weights. The module further selectively integrates deep features and improves feature representation while preserving the information exchange between encoder and decoder. Experimental results show that the proposed segmentation method achieves high segmentation accuracy on both clinical and publicly available datasets, provides more accurate pathological image feature indicators for cancer analysis and diagnosis, and will promote the application of automatic quantitative pathological image analysis in clinical aided diagnosis. Given the scarcity of labeled HE and other stained pathological images for deep learning, weakly supervised and unsupervised development will be an inevitable trend for further improving the effectiveness of deep learning in pathological image research.
In the future, the construction of fuzzy neural network models for medical image analysis will be further studied to improve the efficiency of deep learning in both sample generation and network training. While solving the low efficiency and subjectivity of manual delineation, network training can be optimized efficiently and controllably. Furthermore, the histological and microscopic structure of individual cells will be accurately measured from independent cell boundaries, and correlation analysis between morphological characteristics and pathological classification will be carried out to explore the cytological mechanisms of tumor lesions and to establish a pathological imaging diagnosis and treatment model reflecting pathological heterogeneity.

Original pathological images of Fig 1 in the main body.

(ZIP)

Original pathological images of Fig 2 in the main body, and its corresponding original image.

(ZIP)

Original pathological images of Fig 6 in the main body, and its corresponding original image.

(ZIP)

Original pathological images of Fig 8 in the main body, and its corresponding original images of different organs.

(ZIP)

Our own dataset including original meningioma images 1 to 12.

(ZIP)

Our own dataset including original meningioma images 13 to 24.

(ZIP)

Our own dataset including original meningioma images 25 to 36.

(ZIP)

Our own dataset including original meningioma images 37 to 48.

(ZIP)

Our own dataset including original meningioma images 49 to 60.

(ZIP)

Original document of approval from local ethics committee.

(PDF)

21 Jun 2022
PONE-D-22-11182
Segmentation of HE Stained Histopathological Images Based on Feature Global Delivery Connection Network
PLOS ONE Dear Dr. Shi, Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.
 
Your manuscript has been reviewed by three experts in the field and their reports are attached below for your reference. There are some major concerns that need to be addressed carefully in a major round of revisions, and in particular:
 
- The title doesn't seem to reflect the content of the manuscript, as it suggests that the proposed method can segment any structure, whereas in fact only experiments for cell segmentation are reported.
- It is unclear how the proposed approach improves on the existing methods. Please clarify the main points of strength of your procedure compared with the state-of-the-art.
- At least one image should present the result of the detected precise nuclear boundaries.
- The experimental design, and in particular the train/test split, needs clarification.
Please submit your revised manuscript by Aug 05 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'. A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'. An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'. If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter. If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols. We look forward to receiving your revised manuscript. Kind regards, Francesco Bianconi, Ph.D. Academic Editor PLOS ONE Journal Requirements: When submitting your revision, we need you to address these additional requirements. 1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf 2. 
We suggest you thoroughly copyedit your manuscript for language usage, spelling, and grammar. If you do not know anyone who can help you do this, you may wish to consider employing a professional scientific editing service. Whilst you may use any professional scientific editing service of your choice, PLOS has partnered with both American Journal Experts (AJE) and Editage to provide discounted services to PLOS authors. Both organizations have experience helping authors meet PLOS guidelines and can provide language editing, translation, manuscript formatting, and figure formatting to ensure your manuscript meets our submission guidelines. To take advantage of our partnership with AJE, visit the AJE website (http://learn.aje.com/plos/) for a 15% discount off AJE services. To take advantage of our partnership with Editage, visit the Editage website (www.editage.com) and enter referral code PLOSEDIT for a 15% discount off Editage services.  If the PLOS editorial team finds any language issues in text that either AJE or Editage has edited, the service provider will re-edit the text for free. Upon resubmission, please provide the following: The name of the colleague or the details of the professional service that edited your manuscript A copy of your manuscript showing your changes by either highlighting them or using track changes (uploaded as a *supporting information* file) A clean copy of the edited manuscript (uploaded as the new *manuscript* file) 3. Please provide additional details regarding participant consent. In the ethics statement in the Methods and online submission information, please ensure that you have specified what type you obtained (for instance, written or verbal, and if verbal, how it was documented and witnessed). If your study included minors, state whether you obtained consent from parents or guardians. If the need for consent was waived by the ethics committee, please include this information. 4. 
We note that the grant information you provided in the ‘Funding Information’ and ‘Financial Disclosure’ sections do not match. When you resubmit, please ensure that you provide the correct grant numbers for the awards you received for your study in the ‘Funding Information’ section. 5. In your Data Availability statement, you have not specified where the minimal data set underlying the results described in your manuscript can be found. PLOS defines a study's minimal data set as the underlying data used to reach the conclusions drawn in the manuscript and any additional data required to replicate the reported study findings in their entirety. All PLOS journals require that the minimal data set be made fully available. For more information about our data policy, please see http://journals.plos.org/plosone/s/data-availability. "Upon re-submitting your revised manuscript, please upload your study’s minimal underlying data set as either Supporting Information files or to a stable, public repository and include the relevant URLs, DOIs, or accession numbers within your revised cover letter. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories. Any potentially identifying patient information must be fully anonymized. Important: If there are ethical or legal restrictions to sharing your data publicly, please explain these restrictions in detail. Please see our guidelines for more information on what we consider unacceptable restrictions to publicly sharing data: http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions. Note that it is not acceptable for the authors to be the sole named individuals responsible for ensuring data access. We will update your Data Availability statement to reflect the information you provide in your cover letter. 6. Please include your full ethics statement in the ‘Methods’ section of your manuscript file. 
In your statement, please include the full name of the IRB or ethics committee who approved or waived your study, as well as whether or not you obtained informed written or verbal consent. If consent was waived for your study, please include this information in your statement as well. [Note: HTML markup is below. Please do not edit.] Reviewers' comments: Reviewer's Responses to Questions Comments to the Author 1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #1: Yes Reviewer #2: Yes Reviewer #3: Partly ********** 2. Has the statistical analysis been performed appropriately and rigorously? Reviewer #1: Yes Reviewer #2: N/A Reviewer #3: Yes ********** 3. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified. Reviewer #1: Yes Reviewer #2: Yes Reviewer #3: Yes ********** 4. Is the manuscript presented in an intelligible fashion and written in standard English? 
PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here. Reviewer #1: No Reviewer #2: Yes Reviewer #3: Yes ********** 5. Review Comments to the Author Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters) Reviewer #1: In the paper "Segmentation of HE Stained Histopathological Images Based on Feature Global Delivery Connection Network", the authors proposed a method to segment structures in histological data. The main advantage of the method is the application of a few datasets, including public ones.
- The title is confusing: it suggests that the method can segment any structure, whereas the paper presents experiments for cell segmentation only.
- Lack of an image that presents the result of detected precise nuclear boundaries.
- A new sentence should start with a capital letter.
- The IoU metric is good for localization, and it is not the best for instance segmentation tasks.
- How was the data split of the "own dataset" done? On a patient level?
- "The correction-feedback process continued until the error rate of mistakenly labeled nuclear pixels in each image was less than 5%, which made the labeling results dependable as the golden standards for the following training and validations." - how was the 5% value established?
- "The correction-feedback process continued" - what type of correction was applied?
- EMC: please add an explanation for this shortcut.
- It is not necessary to add formulas of well-known metrics.
- Table 6: the best results are for FCN; in Table 7, results are very close to the result for Self-loop.
What advantage do we have by applying the proposed method? Reviewer #2: An effective method named Feature Global Delivery Connection Network is developed to segment HE stained histopathology images. I believe that papers published in PLOS ONE should demonstrate either a new method proven by a strong theoretical background or new methods with a mature level of implementation. The method presented by the authors has a strong theoretical background. The overall idea is very interesting and the results are promising. However, I have some questions to be addressed by the authors: 1. Abstract is too long. Make it short and informative. 2. The topic is interesting and readers will be more interested to read this manuscript. The methods and ideas used in the manuscript are perfect. 3. I recommend that the authors check the manuscript properly regarding grammatical mistakes, text and figure organization. 4. Check all the figure explanations in the body text and explain the figure and table captions in detail and clearly. Overall, I think this paper can be accepted after minor revision. Reviewer #3: 1. Contribution in this paper is insignificant. 2. Results are not improved by the present method in a considerable way. 3. Methods used are also not innovative. 4. Similar systems exist and without any significant gain, it is meaningless to publish another paper. ********** 6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.
Reviewer #1: No Reviewer #2: No Reviewer #3: No ********** [NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.] While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

14 Jul 2022

Reviewer #1: In the paper "Segmentation of HE Stained Histopathological Images Based on Feature Global Delivery Connection Network", the authors proposed a method to segment structures in histological data. The main advantage of the method is the application of a few datasets, including public ones. Question 1. The title is confusing: it suggests that the method can segment any structure, whereas the paper presents experiments for cell segmentation only. Answer: As the characteristic changes of normal cells after cancerization are mostly reflected in nuclei, the method is designed mainly focusing on nuclei segmentation. To make the title more specific, it has been modified to 'Nuclei Segmentation of HE Stained Histopathological Images Based on Feature Global Delivery Connection Network'. Question 2. Lack of an image that presents the result of detected precise nuclear boundaries.
Answer: We found that the uploaded images were compressed by the submission system to reduce the size of the combined PDF, which made the resolutions of the images in the final pages of the generated manuscript much lower than those of the original uploaded files. Therefore, the image files directly downloaded from the reviewer toolbar may help to solve this problem; clear boundaries can be found in them at different magnifications. Meanwhile, we also improved the experimental segmentation results with brighter and broader lines to show clearer nuclei boundaries, replacing the previous Figure 7. Question 3. A new sentence should start with a capital letter. Answer: We have double checked the full text and corrected these problems, which can be found in the newly uploaded file. Question 4. The IoU metric is good for localization, and it is not the best for instance segmentation tasks. Answer: Yes, the IoU (Jaccard) index is a commonly used metric in object detection, which mainly reflects the proportion of overlapped area between two objects. Here IoU is used to assess the localization precision of the predicted nuclear boundaries. Based on literature searching, IoU is also used as a main index in semantic segmentation and can be found in some nuclei segmentation papers, such as those listed in Table 7. Therefore, we tested the IoU metric of the proposed algorithm for comparison with existing methods, especially on the MoNuSeg dataset. Meanwhile, indexes that are better at dealing with under- and over-segmentation, such as AJI and pixel-level F1, were also used in the evaluations of our experiments. Question 5. How was the data split of the "own dataset" done? On a patient level? Answer: The previous description of the data split was not very clear, and we have revised the paragraph accordingly. Here is the updated dataset split strategy.
First, a total of 60 cases of meningiomas were blindly chosen on a patient level, of which 30 cases are high-grade and the other 30 are low-grade, to ensure the comprehensiveness and diversity of the images. Second, one original image was randomly selected from each case, so our own dataset contains 60 original images. Then, 10 images from each grade were randomly selected as the test set, giving 20 test images in total. Finally, the remaining 40 images, half high-grade and half low-grade, formed the training set as shown in Table 1. The ratio between the training and test datasets is 2:1. Question 6. "The correction-feedback process continued until the error rate of mistakenly labeled nuclear pixels in each image was less than 5%, which made the labeling results dependable as the golden standards for the following training and validations." - how was the 5% value established? Answer: The 5% value was an empirical threshold suggested by the pathologists in our research team. Since one of the main concerns of the proposed method is to generate emulational nuclei mask labels fully automatically to improve effectiveness, the number of nuclei labels needed for training is quite large. In practice, one 1536×2048 meningioma pathological image contains about 800 nuclei on average that need to be labeled, so a total of about 16,000 nuclei need to be labeled in the 20 training images. Considering an acceptable error rate, our pathologists used 5% as the threshold of mistakenly labeled nuclear pixels when checking the labeling results of the MaZda software. The effectiveness of the automatic labeling was proved and described in our previous work, so we did not discuss it in detail in this new work. Question 7. "The correction-feedback process continued" - what type of correction was applied?
Answer: As described in Section 2.1.3, if the error rate of mistakenly labeled nuclear pixels in an image was more than 5%, pathologists checked the most mistakenly labeled nuclei, manually corrected their boundaries, and then re-checked the error rate of mistakenly labeled nuclear pixels in the image. If the rate was less than 5%, the labels were considered the golden standards for this image; if not, the remaining most mistakenly labeled nuclei were manually corrected until the error rate of mistakenly labeled nuclear pixels met the requirement. A brief introduction of the correction-feedback process has also been added to the manuscript. Question 8. EMC: please add an explanation for this shortcut. Answer: Sorry, we could not find a shortcut named EMC in the manuscript. We think the reviewer may mean ECM, which is the shortcut of Extra Cellular Matrix, one of the three main structures in HE stained tissues together with nuclei and cytoplasm. To be consistent with our previous work, we revised ECM to Extra Cellular Space (ECS) in the new text. The explanation of ECS is in the final paragraph of the Introduction. We have capitalized the first letters to make it more visible. Question 9. It is not necessary to add formulas of well-known metrics. Answer: By literature searching, we found that some papers concerning cell segmentation include the formulas of commonly used metrics. As suggested, we have removed the formulas of well-known metrics, including BCE loss, Dice loss and Jaccard similarity (Equations 9-11), to make the descriptions more concise. Question 10. Table 6: the best results are for FCN, and in Table 7 the results are very close to the result for Self-loop. What advantage do we have by applying the proposed method? Answer: Yes, FCN achieves the best pixel-level F1 index as shown in Table 6, and our method takes second place in that column. Meanwhile, it is true that most results are very close to ours, including those of the Self-loop method.
The main purpose of our proposed method is to build a framework that can fully automatically segment nuclei boundaries without manual intervention. Since most current deep learning-based methods need nuclei samples labeled by experts for training, the automatic generation of nuclei labels in our proposed framework greatly helps to save the time and effort of labeling. Alongside the large increase in efficiency, the errors of mistakenly labeled nuclear pixels in nuclei label generation also affect the network training and thus the accuracy of segmentation. As an unsupervised learning approach, our method has better segmentation performance than most supervised and weakly supervised learning-based methods, while weakly supervised learning methods, including Self-loop, still need a small amount of labeling work by experts. Therefore, we think the main advantage of the proposed method is achieving good accuracy while greatly improving efficiency, which may make it suitable for clinical application in the future. Reviewer #2: An effective method named Feature Global Delivery Connection Network is developed to segment HE stained histopathology images. I believe that papers published in PLOS ONE should demonstrate either a new method proven by a strong theoretical background or new methods with a mature level of implementation. The method presented by the authors has a strong theoretical background. The overall idea is very interesting and the results are promising. However, I have some questions to be addressed by the authors: Question 1. Abstract is too long. Make it short and informative. Answer: We have revised the Abstract to make it more concise and to the point. First, we deleted the less innovative step of the ResBlock module and the description of the F1, IoU and AJI metrics. Second, the focus on nuclei segmentation was emphasized according to the title change. The total number of words has been reduced from 292 to 249.
Question 2. The topic is interesting and readers will be more interested to read this manuscript. The methods and ideas used in the manuscript are perfect. Answer: Thanks for the comments. Question 3. I recommend that the authors check the manuscript properly regarding grammatical mistakes, text and figure organization. Answer: We have double checked the manuscript by proofreading, and the grammatical mistakes and the text and figure organization problems have been revised accordingly, especially the marked nuclei boundaries showing segmentation performance in Figure 7. Question 4. Check all the figure explanations in the body text and explain the figure and table captions in detail and clearly. Answer: We have double checked the figure explanations in the body text. All figure and table captions are explained emphatically, as can be seen in the revised file. Overall, I think this paper can be accepted after minor revision. Reviewer #3: Question 1. Contribution in this paper is insignificant. Answer: We focused on resolving two difficulties of current automated histopathological image analysis research. First, from the aspect of efficiency, the lack of labeled nuclei for training deep learning-based segmentation approaches: we used asymmetric labels and an HSV color transform to improve the consistency between the automatically generated pseudo labels and real samples. Second, from the aspect of accuracy, the loss of relationships between feature channels at each layer inside the segmentation network: we designed the FGDC module to use sigmoid-like gating to assign weights to the intra-layer feature channels of each layer to achieve feature screening. These two contributions gave the proposed method segmentation performance comparable to most current supervised and weakly supervised methods.
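The sigmoid-like channel gating mentioned in this answer can be illustrated with a minimal NumPy sketch; the global-average-pooling summary and the linear map (w, b) are assumptions for illustration, not the actual FGDC module layout:

```python
import numpy as np

def channel_gate(feats, w, b):
    """Reweight a (C, H, W) feature map with sigmoid gates per channel.

    Each channel is summarized by global average pooling, mapped through
    a linear layer (w, b), squashed to (0, 1) by a sigmoid, and used to
    scale its channel, so the network can emphasize or suppress channels.
    """
    pooled = feats.mean(axis=(1, 2))                  # (C,) channel descriptors
    gates = 1.0 / (1.0 + np.exp(-(w @ pooled + b)))   # (C,) gates in (0, 1)
    return feats * gates[:, None, None]

feats = np.random.default_rng(0).standard_normal((4, 8, 8))
out = channel_gate(feats, np.eye(4), np.zeros(4))     # same (4, 8, 8) shape
```

Because each gate lies in (0, 1), the operation can only attenuate channels, acting as a soft feature-screening step between layers.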
Meanwhile, since deep learning-based segmentation methods have become mature and stable in recent years, large improvements in assessment metrics are difficult, and the experimental results of most published methods show only slight improvements according to literature searching, as also shown in Tables 6 and 7. Question 2. Results are not improved by the present method in a considerable way. Answer: We designed three groups of experiments to show the improvements of the proposed method. First, to show the effectiveness of the proposed improvements, including HSV Mixup in nuclei label generation, the mixed loss in dynamic training, and the FGDC module in the network structure, we compared them with our previous work by ablation experiments; the results in Tables 4 and 5 show that those improvements are necessary to increase segmentation accuracy. Second, since the nuclei mask labels are automatically generated rather than manually marked, most unsupervised approaches have lower segmentation accuracy than other training strategies. The comparison experiments against other methods from the literature on open datasets further proved that the proposed method achieves similar or better segmentation performance in a fully automatic way. Question 3. Methods used are also not innovative. Answer: There are multiple strategies for nuclei segmentation because of its clinical significance, and deep learning-based methods have been in the majority in recent years. Although they have highly improved segmentation performance, the biggest problem of deep learning strategies is that a large number of labeled samples are needed for training, which greatly affects clinical application. In order to solve this problem, the main innovation of our proposed method is to generate nuclei mask labels fully automatically to save time and effort in training, and to keep segmentation accuracy at a high level by designing new modules for the network.
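The HSV Mixup step listed among the ablated components might be sketched as follows; mixing hue linearly (ignoring its circular wrap-around) and the fixed `lam` are simplifying assumptions, not the paper's Full Mixup strategy:

```python
import colorsys
import numpy as np

def hsv_mixup(img_a, img_b, lam=0.6):
    """Mix two RGB patches (H, W, 3), values in [0, 1], in HSV space."""
    def convert(img, fn):
        flat = img.reshape(-1, 3)
        return np.array([fn(*px) for px in flat]).reshape(img.shape)
    hsv_a = convert(img_a, colorsys.rgb_to_hsv)
    hsv_b = convert(img_b, colorsys.rgb_to_hsv)
    # Convex combination of the H, S, V channels, then back to RGB.
    mixed_hsv = lam * hsv_a + (1 - lam) * hsv_b
    return convert(mixed_hsv, colorsys.hsv_to_rgb)

rng = np.random.default_rng(0)
img_a = rng.random((4, 4, 3))
img_b = rng.random((4, 4, 3))
mixed = hsv_mixup(img_a, img_b)
```

Mixing in HSV rather than RGB keeps stain hue and intensity on separate axes, which is plausibly why it yields patches with better nuclei pixel distributions than RGB-space mixup.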
Therefore, the innovations of this work mainly include asymmetric labels and the HSV color transform in the stage of automatic training sample patch generation, and a redesigned FGDC-net model rather than the previous U-net based segmentation framework. Experimental results on multiple organ datasets also proved the effectiveness of the proposed improvements. The corresponding discussion is in Paragraph 2 of the Conclusions.

Question 4. Similar systems exist, and without any significant gain it is meaningless to publish another paper.
Answer: Compared to other methods and our previous work, the main innovations of FGDC-net include two aspects. First, in the stage of automatic training sample patch generation, asymmetric labels and an HSV color transform are employed to improve the consistency between the generated pseudo labels and real samples. Second, the network model is redesigned as FGDC-net rather than the previous U-net based segmentation framework. The FGDC module is proposed to use sigmoid-like gating to assign weights to the intra-layer feature channels of each layer to achieve feature screening, which increases the attention on the feature maps for nuclei segmentation. Ablation experiment results showed that the above improvements enabled FGDC-net to further improve segmentation performances over the previous U-net based network on the same datasets. We then tested FGDC-net on two open datasets and also achieved good performances. The above innovations make this manuscript significantly different from other existing methods rather than simple repeated work. The corresponding discussions are in Paragraphs 2 and 3 of the Conclusions.

15 Aug 2022
Nuclei Segmentation of HE Stained Histopathological Images Based on Feature Global Delivery Connection Network
PONE-D-22-11182R1

Dear Dr. Shi,

We're pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you'll receive an e-mail detailing the required amendments. When these have been addressed, you'll receive a formal acceptance letter, and your manuscript will be scheduled for publication. Please note that Reviewer #2 requested minor changes to be introduced in the final version (see below). An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org. If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they'll be preparing press materials, please inform our press team as soon as possible, no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Francesco Bianconi, Ph.D.
Academic Editor
PLOS ONE

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the "Comments to the Author" section, enter your conflict of interest statement in the "Confidential to Editor" section, and submit your "Accept" recommendation.
Reviewer #2: All comments have been addressed

2.
Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.
Reviewer #2: Yes

3. Has the statistical analysis been performed appropriately and rigorously?
Reviewer #2: N/A

4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data, e.g. participant privacy or use of data from a third party, those must be specified.
Reviewer #2: Yes

5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.
Reviewer #2: Yes

6. Review Comments to the Author. Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics.
(Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #2: This manuscript proposes a framework for nuclei segmentation. Comparison to the other state-of-the-art methods shows the superiority of the proposed method. I cannot see what else could be added at this point, maybe with just one exception: what is your strategy for making this available for the target group (pathologists)? Please discuss this subject in your paper for better understanding. Overall, the study looks interesting. I have seen the changes made to the paper, and I agree with its publication after correcting the errors: (1) Neclei (change to Nuclei) Segmentation of HE Stained Histopathological Images Based on Feature Global Delivery Connection Network. (2) Please check line number 214. It is confusing.

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous, but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.
Reviewer #2: Yes: Subrata Bhattacharjee

19 Aug 2022
PONE-D-22-11182R1
Nuclei Segmentation of HE Stained Histopathological Images Based on Feature Global Delivery Connection Network

Dear Dr. Shi:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department. If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication.
For more information, please contact onepress@plos.org. If we can help with anything else, please email us at plosone@plos.org. Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of Prof. Francesco Bianconi
Academic Editor
PLOS ONE