
A hybrid approach based on deep learning and level set formulation for liver segmentation in CT images.

Zhaoxuan Gong1,2, Cui Guo1, Wei Guo1,2, Dazhe Zhao2, Wenjun Tan2, Wei Zhou1, Guodong Zhang1,2.   

Abstract

Accurate liver segmentation is essential for radiation therapy planning of hepatocellular carcinoma and absorbed dose calculation. However, liver segmentation is a challenging task due to the anatomical variability in both shape and size and the low contrast between the liver and its surrounding organs. Thus, we propose a convolutional neural network (CNN) for automated liver segmentation. In our method, fractional differential enhancement is first applied for preprocessing. Subsequently, an initial liver segmentation is obtained by using a CNN. Finally, accurate liver segmentation is achieved by the evolution of an active contour model. Experimental results show that the proposed method outperforms existing methods. One hundred fifty CT scans are evaluated in the experiment. For liver segmentation, a Dice of 95.8%, a true positive rate of 95.1%, a positive predictive value of 93.2%, and a volume difference of 7% are calculated. In addition, the values of these evaluation measures show that the proposed method provides a precise and robust segmentation estimate, which can also assist the manual liver segmentation task.
© 2021 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, LLC on behalf of The American Association of Physicists in Medicine.


Keywords:  CT image; active contour model; convolutional neural networks; fractional differential; liver segmentation


Year:  2021        PMID: 34873831      PMCID: PMC8803306          DOI: 10.1002/acm2.13482

Source DB:  PubMed          Journal:  J Appl Clin Med Phys        ISSN: 1526-9914            Impact factor:   2.102


INTRODUCTION

The accurate segmentation of the liver is important not only for radiation therapy planning but also for follow‐up evaluations. Liver segmentation from CT volumes is difficult because the intensity contrast between the liver and its surrounding tissues is low. Quantitative imaging research can benefit from accurate liver segmentation of abdominal CT images, which is also vital to the success of computer‐aided surgeries. Recently, several liver segmentation methods have been proposed. Li et al. proposed an intensity bias and position constraint‐based level set model for liver segmentation: the level set model was used for initial liver segmentation, and graph cut was then applied to further optimize the segmentation results. Rafiei et al. combined 3D region growing and a contrast enhancement algorithm to segment the liver region. Tang et al. designed a multi‐scale CNN model for liver segmentation; their experimental results showed the method to be effective. Peng et al. used graph cuts and a multi‐region‐based approach to obtain the liver surface, achieving the segmentation with an energy function that incorporates both region and boundary information. Mostafa et al. proposed an artificial bee colony optimization algorithm for liver segmentation: the centroids of clusters in the image were calculated by the artificial bee colony method, and mathematical morphology and region growing were then applied to achieve the final segmentation. Yan et al. used single statistical atlas registration to obtain an initial liver segmentation, followed by a chemical shift‐based method for the final segmentation. Wang et al. developed an a priori statistical shape model for liver segmentation, combining boundary, intensity, and sparse information to accurately segment the liver region. Ali et al. utilized an artificial bee colony model and a grey wolf optimization model for liver segmentation.
Their experiments showed that the method obtains good results when applied to medical image segmentation. Goceri proposed a variational level set‐based model for liver segmentation, in which an adaptive signed pressure force function and a Sobolev gradient‐based model were jointly used for level set evolution; the experiments showed that the level set contour can shrink to the edge of the liver accurately. Abd‐Elaziz et al. designed a region‐growing‐based method for liver segmentation that combines intensity analysis and preprocessing steps to obtain the liver region. Yuan et al. proposed a fast marching and improved fuzzy clustering method for liver segmentation: the fast marching method and a convex hull algorithm were used to detect the initial liver boundary, and an improved fuzzy clustering method was then applied to refine the segmentation result. Wang et al. presented a sparse dictionary and hole filling method for liver segmentation, in which sparse coding was used to obtain the initial liver boundary and a hole filling method completed and smoothed the boundary to obtain the final segmentation. Mir et al. proposed an automatic liver segmentation model in which an adaptive filter was used to reduce noise, and three‐dimensional region growing was combined with morphological operators to obtain the liver region. Chartrand et al. presented a Laplacian mesh optimization method for liver segmentation: the initial liver contour was obtained by manual delineation, and Laplacian mesh optimization was then used to refine the segmentation. Zareei and Karimi used a preprocessing model to obtain an initial segmentation close to the liver's boundary and then combined gradient vector flow and balloon energy to improve it. Kitrungrotsakul et al. proposed a graph model for liver segmentation.
A clustering algorithm was applied to construct the graph, which further reduces the computational time, and liver segmentation was achieved by their graph cut model. Altarawneh et al. proposed an improved distance regularization level set model for liver segmentation, in which a new balloon force was designed to discourage the evolving contour from exceeding the liver boundary, improving the segmentation accuracy effectively. Qin et al. proposed an intensity‐based CNN for liver segmentation: an entropy‐based saliency map was built by multinomial classification, and a CNN was trained to predict the probability map of the liver boundary. Silva et al. used a linear iterative clustering algorithm and a probabilistic atlas in a deep convolutional neural network (CNN) to obtain an initial liver contour; a 3D Chan‐Vese active contour model was then applied to acquire the final segmentation. Feng et al. used a simple U‐net model for liver segmentation, and their experiments showed the effectiveness of the method. Gloger et al. presented a fully automated method for liver segmentation, which combined model knowledge and probability maps to delineate the liver contour. Ali et al. proposed a clustering and energy optimization model for liver segmentation; their experiments demonstrated better mean values in terms of the Jaccard index and the Dice coefficient. Mostafa et al. proposed a whale optimization algorithm for liver segmentation: the whale optimization algorithm removes a great part of the non‐liver region from the image, the liver region is extracted by user interaction, and morphological operations refine the final segmentation. Saito et al. developed a statistical shape model for liver segmentation: a statistical shape model‐guided expectation‐maximization algorithm was first used to obtain the initial liver boundary, and graph cut was then applied to refine the segmentation. Eapen et al.
proposed a Bayesian level set framework for liver segmentation: the level set contour was initialized by a Bayesian probability model, and level set evolution was achieved using an energy function. Zheng et al. proposed a texture feature‐based method to extract the liver region, obtaining the liver boundary with a random walk algorithm. In the work by Yang et al., the gray‐value information and the spatial relationship between pixels were utilized to extract the liver region, and a parallel algorithm was designed to further refine the segmentation. Trabelsi et al. proposed an active shape model to obtain the liver region: B‐spline registration was first applied to obtain the initial liver region, and the active shape model was then applied to obtain the accurate liver segmentation. Although previous works have made great progress in improving the segmentation accuracy, most of them fail to extract the liver boundary accurately. In our method, an intensity‐constrained level set model is designed to refine the output of the CNN; the level set contour stays close to the liver boundary during the evolution, which increases the segmentation accuracy effectively. In this paper, we propose a fully automatic method for liver segmentation. First, fractional differential enhancement is used to enhance the image. A deep CNN is then applied to extract the initial liver region, and a maximum connectivity model refines this segmentation. The final segmentation is achieved by level set evolution. Figure 1 shows the pipeline of the proposed framework.
FIGURE 1

The pipeline of the proposed framework


MATERIALS AND METHODS

Fractional differential enhancement

Fractional differential enhancement is used as a preprocessing step so that the contrast between the liver and other tissues is enhanced in each transaxial slice. Let f(t) be a signal, where t is the discrete variable, t = 1, 2, …, n. The v‐order differential operator can be approximated by the Grünwald–Letnikov definition:

D^v f(t) ≈ f(t) + (−v) f(t − 1) + ((−v)(−v + 1)/2) f(t − 2) + ⋯ + (Γ(−v + 1)/(n! Γ(−v + n + 1))) f(t − n).

In the digital image domain, the fractional differential is obtained by applying this operator along each spatial direction. The operator is constructed to preserve the low‐frequency contour features of the liver region while improving the overall texture. Given an image I(x, y), the fractional differential enhanced image is obtained by applying the v‐order differentiation operator D^v to I and adding the result to the original image. Fractional differential enhancement highlights the fine details of the object, which improves the contrast between the liver and the surrounding tissues. Figure 2 exhibits the result of fractional differential enhancement.
FIGURE 2

Examples of fractional differential enhancement. The first row: original images; second row: results after applying fractional differential enhancement

FIGURE 3

Structure of the convolutional neural network
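To make the preprocessing step concrete, the truncated Grünwald–Letnikov expansion can be sketched in numpy. This is a minimal illustration under stated assumptions (the "original plus v-order detail" enhancement form and wrap-around borders via `np.roll`), not the authors' implementation:

```python
import numpy as np

def gl_coefficients(v, n_terms):
    """Truncated Grünwald–Letnikov coefficients c_k = (-1)^k C(v, k)."""
    c = np.empty(n_terms)
    c[0] = 1.0
    for k in range(1, n_terms):
        c[k] = c[k - 1] * (k - 1 - v) / k  # recurrence for (-1)^k C(v, k)
    return c

def fractional_enhance(img, v=0.5, n_terms=4):
    """Add the truncated v-order fractional derivative along rows and
    columns to the original image (np.roll wraps at borders for brevity)."""
    img = np.asarray(img, dtype=float)
    c = gl_coefficients(v, n_terms)
    detail = np.zeros_like(img)
    for k in range(n_terms):
        detail += c[k] * np.roll(img, k, axis=0)  # shift along rows
        detail += c[k] * np.roll(img, k, axis=1)  # shift along columns
    return img + detail
```

Because the coefficient sum is nonzero for fractional v, flat regions are rescaled slightly while edges and texture receive a much stronger response, which is the contrast-boosting behavior described above.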

Convolutional neural networks

The proposed CNN model is an 11‐layer deep structure composed of a down‐sampling stage and an up‐sampling stage. The down‐sampling stage adopts several convolutional layers, each followed by a rectified linear unit (ReLU), and the max‐pooling kernels are 2 × 2. After training the network, connected component analysis is used to divide all labeled voxels into connected components; the largest component is selected as the final liver region. We fine‐tune the network with the following parameters: batch size = 2, base learning rate = 0.00001, and epochs = 10; Adam and ReLU are used as the optimizer and the activation function, respectively.
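The largest-component post-processing step can be sketched with `scipy.ndimage` (a minimal sketch of the standard technique, not the authors' code):

```python
import numpy as np
from scipy import ndimage

def largest_component(mask):
    """Post-processing after the CNN: keep only the largest connected
    component of a binary mask and discard all smaller ones."""
    labeled, num = ndimage.label(mask)        # label connected components
    if num == 0:
        return np.zeros_like(mask, dtype=bool)
    sizes = ndimage.sum(mask, labeled, index=range(1, num + 1))
    return labeled == (np.argmax(sizes) + 1)  # mask of the biggest component
```

For 3D volumes the same call works unchanged, since `ndimage.label` handles arbitrary dimensionality.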

Level set evolution

Distance regularized level set evolution (DRLSE) is used as the basis of our level set model. On top of the DRLSE model, we design an intensity‐constrained term that guides the evolution of the level set contour; the resulting model is denoted DRLSEIC. The final liver segmentation is achieved by the evolution of the DRLSEIC model. Edge‐based information is used to define the external energy. Let I be an image on a domain Ω. We define an edge indicator function by

g = 1 / (1 + |∇(G_σ ∗ I)|²),

where G_σ is a Gaussian kernel with standard deviation σ. The energy functional of the DRLSE model is defined as

E(φ) = μ R_p(φ) + λ L_g(φ) + α A_g(φ),

where μ, λ, and α are positive parameters, fixed in this study. The energy functionals R_p, L_g, and A_g are defined by

R_p(φ) = ∫_Ω p(|∇φ|) dx,   L_g(φ) = ∫_Ω g δ(φ) |∇φ| dx,   A_g(φ) = ∫_Ω g H(−φ) dx,

where δ and H are the Dirac delta function and the Heaviside function, respectively, and p is a potential function. R_p, L_g, and A_g are the penalty term, the length term, and the area term, respectively. The regularized versions δ_ε and H_ε are defined as

δ_ε(x) = (1/(2ε)) [1 + cos(πx/ε)] for |x| ≤ ε, and 0 otherwise;
H_ε(x) = (1/2) (1 + x/ε + (1/π) sin(πx/ε)) for |x| ≤ ε, 1 for x > ε, and 0 for x < −ε.

The parameter ε is usually set to 1.5. The output of the CNN can be viewed as a label image L, a binary map such that L(x) = 1 for x in the labeled region and L(x) = 0 otherwise. For a label image L, we let the level set function φ take negative values where L(x) = 1 and positive values where L(x) = 0. Therefore, the zero level contour of the level set function can be viewed as the boundary of the region of interest (ROI) labeled by L. The initial liver class can be obtained from the statistical information of image I within the labeled region: the mean intensity value μ_l of the liver class and its variance σ_l². The intensity range of the liver region can then be estimated as [μ_l − c σ_l, μ_l + c σ_l] for a positive constant c. An intensity‐constrained energy term E_I(φ) is designed based on this intensity range; it penalizes the contour wherever the local intensity falls outside the estimated liver range. The intensity‐constrained term enables the level set contour to evolve inside the liver region, which improves the segmentation accuracy effectively.
The final energy functional of the DRLSEIC model is formulated as

E_DRLSEIC(φ) = μ R_p(φ) + λ L_g(φ) + α A_g(φ) + β E_I(φ),

where β weights the intensity‐constrained term. This energy functional is minimized by solving the gradient flow ∂φ/∂t = −∂E_DRLSEIC/∂φ.
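The edge indicator g and the regularized δ_ε and H_ε are the computational building blocks of the evolution. A minimal numpy sketch of these standard DRLSE definitions (an illustration, not the authors' code):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

EPS = 1.5  # regularization width epsilon, as in the paper

def dirac_eps(x, eps=EPS):
    """Regularized Dirac delta: (1/2eps)(1 + cos(pi x / eps)) for |x| <= eps."""
    return np.where(np.abs(x) <= eps,
                    (1.0 / (2.0 * eps)) * (1.0 + np.cos(np.pi * x / eps)),
                    0.0)

def heaviside_eps(x, eps=EPS):
    """Regularized Heaviside: smooth ramp on [-eps, eps], 0/1 outside."""
    return np.where(x > eps, 1.0,
           np.where(x < -eps, 0.0,
                    0.5 * (1.0 + x / eps + np.sin(np.pi * x / eps) / np.pi)))

def edge_indicator(img, sigma=1.0):
    """Edge indicator g = 1 / (1 + |grad(G_sigma * I)|^2)."""
    smoothed = gaussian_filter(np.asarray(img, dtype=float), sigma)
    gy, gx = np.gradient(smoothed)
    return 1.0 / (1.0 + gx**2 + gy**2)
```

Note that g approaches 1 in flat regions and drops toward 0 near strong gradients, which is what slows the contour at the liver boundary.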

RESULTS

Our method has been validated on two databases: 3D‐IRCADb and LiTS 2017. The LiTS dataset provides 130 scans with liver segmentation labels, and the 3D‐IRCADb dataset provides 20 scans. One hundred ten scans were used for training and 40 for testing; the training data and the testing data were kept separate. Segmented tumor and liver labels were merged into a whole‐liver label. The data were collected from different hospitals, and the resolution of the CT scans varies between 0.6 and 1.0 mm in‐plane (512 × 512 pixels) and between 0.45 and 6 mm between slices. Unless otherwise specified, the level set parameters were fixed throughout the experiments. The computation was done on a Windows 10 server with an Intel Xeon Silver 4210R CPU (2.4 GHz, 64 GB memory) and an NVIDIA GeForce Titan RTX GPU.

Effectiveness of the proposed method

Figure 4 shows the segmentation results of the proposed method for the liver labels of three test images. Figure 4a,b shows the segmentation results obtained by our method, and Figure 4c,d shows the corresponding manual segmentations. The results of our method are quite similar to the manual segmentations. Figure 5 exhibits the coronal view of the segmentation result for the liver of one test image using our method. The green lines and the red lines are the manual segmentation and the proposed method's segmentation, respectively; the two are very close.
FIGURE 4

3D view of the segmentation results for liver labels of three test images using our method. (a and b) The segmentation results by our method. (c and d) The corresponding manual segmentation

FIGURE 5

Coronal view of the segmentation results of liver labels by our method

We compared the performance of CNN + DRLSEIC with the CNN alone on the same training and testing sets. An example of the segmented liver in one subject is illustrated in Figure 6. The CNN model (Figure 6a, red line) produces poor segmentations in certain areas, mainly because of the low contrast between those areas and the surrounding region. The result of CNN + DRLSEIC (Figure 6b, red line) mostly overlaps with the ground‐truth segmentation (green line) and shows fewer false‐positive labels.
FIGURE 6

Example of a liver segmentation using our method. (a) Results of convolutional neural network (CNN); (b) results of CNN + DRLSEIC


Quantitative evaluation of the segmentation accuracy

Five image spatial metrics were adopted to evaluate the agreement between automatic and manual segmentation: the Dice coefficient (DC), true positive rate (TPR), volume difference (VD), Jaccard index (JI), and positive predictive value (PPV). Let S be the segmentation result and G the ground truth. The metrics are defined as

DC = 2|S ∩ G| / (|S| + |G|),
TPR = |S ∩ G| / |G|,
VD = | |S| − |G| | / |G|,
JI = |S ∩ G| / |S ∪ G|,
PPV = |S ∩ G| / |S|.

The border voxels of the segmentation and the ground truth are denoted B_S and B_G. For each voxel p along one border, d(p, B) denotes the distance to the closest voxel along the corresponding border in the other result. The mean surface distance (MSD) is defined as

MSD = (1 / (N₁ + N₂)) ( Σ_{p ∈ B_S} d(p, B_G) + Σ_{q ∈ B_G} d(q, B_S) ),

where N₁ and N₂ are the numbers of voxels on the border surfaces of the segmentation and the ground truth. The Hausdorff surface distance (HSD) is similar to the MSD and is defined as

HSD = max( max_{p ∈ B_S} d(p, B_G), max_{q ∈ B_G} d(q, B_S) ).

The performance of our method was compared against five state‐of‐the‐art methods: the Chan–Vese (CV) model, the geodesic active contours (GAC) model, the DRLSE model, the selective binary and Gaussian filtering regularized level set (SBGFRLS) model, and the local binary fitting (LBF) model. As Figure 7 shows, the proposed approach yielded the highest average Dice, JI, PPV, and TPR. The median Dice scores reach 0.961 for the proposed method, followed by 0.912 for DRLSE, 0.763 for the CV model, 0.772 for the image visual control (IVC) model, 0.744 for the LBF model, and 0.752 for the GAC model. The median JI scores reach 0.941 for the proposed method, followed by 0.884 for DRLSE, 0.733 for the CV model, 0.779 for the IVC model, 0.714 for the LBF model, and 0.682 for the GAC model. The median PPV scores reach 0.948 for the proposed method, followed by 0.894 for DRLSE, 0.748 for the CV model, 0.77 for the IVC model, 0.742 for the LBF model, and 0.751 for the GAC model. The median TPR scores reach 0.978 for the proposed method, followed by 0.891 for DRLSE, 0.879 for the CV model, 0.883 for the IVC model, 0.914 for the LBF model, and 0.878 for the GAC model. All five state‐of‐the‐art methods produced non‐liver regions during level set evolution; the proposed method constrains the level set contour to evolve inside the liver region. Therefore, the proposed method outperformed the other methods on the above metrics.
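The overlap metrics above can be computed directly from binary masks. A minimal numpy sketch following the standard definitions (not the authors' evaluation code):

```python
import numpy as np

def segmentation_metrics(seg, gt):
    """Overlap metrics between binary segmentation S and ground truth G,
    following the definitions of DC, JI, TPR, PPV, and VD."""
    seg = np.asarray(seg, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    inter = np.logical_and(seg, gt).sum()
    union = np.logical_or(seg, gt).sum()
    return {
        "DC": 2.0 * inter / (seg.sum() + gt.sum()),            # Dice coefficient
        "JI": inter / union,                                   # Jaccard index
        "TPR": inter / gt.sum(),                               # true positive rate
        "PPV": inter / seg.sum(),                              # positive predictive value
        "VD": abs(int(seg.sum()) - int(gt.sum())) / gt.sum(),  # volume difference
    }
```

The surface distances (MSD, HSD) additionally require extracting border voxels and nearest-neighbor distances, which is typically done with a distance transform on the boundary maps.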
FIGURE 7

Quantitative comparison of the proposed method with CV, LBF, distance regularized level set evolution (DRLSE), IVC, and GAC

The VD values of liver segmentation are presented in Table 1. The proposed method obtained a very low VD value for most of the cases. However, cases 05 and 27 received unsatisfactory results, mainly because more misclassified voxels were produced, which led to a significant increase in their VD values.
TABLE 1

The detail index of the proposed method and manual segmentation in terms of volume difference

Dataset    VD (%)     Dataset    VD (%)     Dataset    VD (%)     Dataset    VD (%)
Case 01    0.055      Case 11    0.04       Case 21    0.028      Case 31    0.047
Case 02    0.027      Case 12    0.037      Case 22    0.068      Case 32    0.015
Case 03    0.011      Case 13    0.006      Case 23    0.046      Case 33    0.022
Case 04    0.083      Case 14    0.048      Case 24    0.041      Case 34    0.023
Case 05    0.112      Case 15    0.137      Case 25    0.081      Case 35    0.061
Case 06    0.072      Case 16    0.022      Case 26    0.077      Case 36    0.039
Case 07    0.045      Case 17    0.013      Case 27    0.194      Case 37    0.019
Case 08    0.077      Case 18    0.017      Case 28    0.052      Case 38    0.017
Case 09    0.092      Case 19    0.058      Case 29    0.036      Case 39    0.051
Case 10    0.053      Case 20    0.034      Case 30    0.044      Case 40    0.083

Abbreviation: VD, volume difference.

The numbers of convolutional layers and up‐sampling layers have a great impact on the segmentation accuracy of a CNN. To select an optimal structure, four configurations of convolutional and up‐sampling layers were validated. The resulting evaluation metrics are summarized in Table 2. The structure with five convolutional and five up‐sampling layers achieves the best performance. The input image size is 512 × 512; when six max‐pooling steps are applied, the feature map becomes too small to extract useful features. Therefore, the performance of the six‐layer structure degraded compared with the five‐layer structure.
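The feature-map shrinkage argument can be checked with a one-line computation (assuming each 2 × 2 max-pooling halves the spatial dimensions):

```python
# Spatial size of a 512 x 512 slice after k successive 2 x 2 max-pooling
# steps; with six poolings the map shrinks to 8 x 8, too coarse to carry
# useful liver-boundary features.
sizes = [512 // 2**k for k in range(7)]
print(sizes)  # [512, 256, 128, 64, 32, 16, 8]
```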
TABLE 2

Accuracy for different numbers of convolutional layers and up‐sampling layers

Metrics      3 conv & 3 up‐sampling    4 conv & 4 up‐sampling    5 conv & 5 up‐sampling    6 conv & 6 up‐sampling
Dice (%)     0.90 ± 0.03               0.91 ± 0.02               0.958 ± 0.021             0.84 ± 0.05
TPR (%)      0.87 ± 0.03               0.835 ± 0.04              0.971 ± 0.022             0.911 ± 0.042
VD (%)       0.15 ± 0.03               0.15 ± 0.05               0.05 ± 0.034              0.35 ± 0.06
JI (%)       0.82 ± 0.02               0.835 ± 0.02              0.921 ± 0.021             0.721 ± 0.061
PPV (%)      0.961 ± 0.03              0.955 ± 0.04              0.952 ± 0.031             0.912 ± 0.021
MSD (mm)     15.33 ± 4.13              11.91 ± 2.27              9.58 ± 2.97               12.77 ± 3.35
HSD (mm)     5.74 ± 0.92               4.94 ± 1.32               3.44 ± 1.09               5.04 ± 1.03

Abbreviations: JI, Jaccard index; PPV, positive predictive value; TPR, true positive rate; VD, volume difference.

The results of the different network structures in terms of several evaluation metrics are recorded in Table 2. The comparison shows that the network with five convolutional layers and five up‐sampling layers gave the most robust performance, achieving a mean Dice of 0.958 ± 0.021, a mean TPR of 0.971 ± 0.022, a mean VD of 0.05 ± 0.034, a mean JI of 0.921 ± 0.021, and a mean PPV of 0.952 ± 0.031. Based on this experiment, a network of five convolutional layers and five up‐sampling layers was established as the optimal structure of the proposed CNN. Table 3 shows the influence of the level set model on the segmentation accuracy by comparing the evaluation metrics with and without the level set evolution. The level set model increases the segmentation accuracy by 1–2 percentage points, because the proposed level set model detects clearer boundaries and thus improves the segmentation results.
TABLE 3

Comparison of our model with and without the level set evolution

Metrics      CNN               CNN + DRLSEIC
Dice (%)     0.941 ± 0.014     0.952 ± 0.017
TPR (%)      0.933 ± 0.021     0.944 ± 0.015
VD (%)       0.14 ± 0.03       0.09 ± 0.015
JI (%)       0.872 ± 0.011     0.891 ± 0.021
PPV (%)      0.914 ± 0.015     0.942 ± 0.019
MSD (mm)     11.12 ± 3.04      9.52 ± 2.74
HSD (mm)     4.28 ± 1.02       3.28 ± 0.92

Abbreviations: CNN, convolutional neural network; JI, Jaccard index; PPV, positive predictive value; TPR, true positive rate; VD, volume difference.

We compared our method with four other CNN models. Table 4 shows results for U‐net, U‐net++, Segnet, fully convolutional networks (FCN), and the proposed method. For a fair comparison, we used five convolutional layers for each model, with a kernel size of 3. The table shows that the proposed network offered the most accurate segmentation results in comparison to the four other CNN methods in terms of Dice, TPR, VD, and JI.
TABLE 4

Comparison of different CNN segmentation methods

Metrics      U‐net            U‐net++          Segnet           FCN              Proposed
Dice (%)     0.91 ± 0.03      0.931 ± 0.03     0.901 ± 0.02     0.82 ± 0.05      0.958 ± 0.02
TPR (%)      0.88 ± 0.03      0.941 ± 0.03     0.931 ± 0.02     0.891 ± 0.03     0.951 ± 0.02
VD (%)       0.12 ± 0.03      0.07 ± 0.04      0.15 ± 0.04      0.38 ± 0.05      0.07 ± 0.02
JI (%)       0.85 ± 0.02      0.875 ± 0.03     0.781 ± 0.02     0.691 ± 0.03     0.901 ± 0.03
PPV (%)      0.961 ± 0.03     0.955 ± 0.04     0.912 ± 0.02     0.902 ± 0.04     0.931 ± 0.02
MSD (mm)     12.33 ± 2.83     10.08 ± 3.02     13.48 ± 3.56     15.77 ± 4.65     9.27 ± 3.38
HSD (mm)     4.48 ± 1.12      3.94 ± 1.02      4.74 ± 1.19      5.04 ± 1.03      3.13 ± 0.98

Abbreviations: CNN, convolutional neural network; JI, Jaccard index; PPV, positive predictive value; TPR, true positive rate; VD, volume difference.

In our paired t‐tests, the significance level was set to 0.05. The p‐values for the paired t‐tests are summarized in Table 5 and show that the difference between our proposed method and each of the other four methods is significant.
TABLE 5

p‐values of paired t‐tests between our model and other four methods for Dice values

Comparison          p‐value (Dice)
U‐net vs. ours      ~10⁻³
U‐net++ vs. ours    ~10⁻²
Segnet vs. ours     ~10⁻³
FCN vs. ours        ~10⁻⁴
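A paired t-test on per-case Dice values can be reproduced with scipy. The arrays below are hypothetical placeholders, not the study's data:

```python
from scipy import stats

# Hypothetical per-case Dice values for a baseline model and the
# proposed model (placeholders for illustration only):
dice_baseline = [0.91, 0.90, 0.93, 0.92]
dice_proposed = [0.96, 0.94, 0.99, 0.97]

# Paired (dependent-sample) t-test: each case is its own control.
t, p = stats.ttest_rel(dice_proposed, dice_baseline)
```

A paired test is appropriate here because both methods are evaluated on the same test cases, so per-case differences remove inter-case variability.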

DISCUSSION

The novel hybrid automatic method proposed in the present study showed high accuracy in liver extraction. However, the evolution of the level set model is time‐consuming; in the future, we will try to accelerate the level set evolution with the Compute Unified Device Architecture (CUDA). Based on our liver segmentation results, tumors and vessels can be identified within the liver region. The proposed model can be implemented in a preoperative virtual liver surgery planning system to assist a surgeon in making an optimal treatment plan for a patient. The proposed method does not require any manual interaction, so it could be applied to other organs or other images. It might also be extended to medical images acquired from other imaging modalities such as MRI, PET, or ultrasound.

CONCLUSION

In this paper, we proposed a CNN‐based framework for liver segmentation. In our method, fractional differential enhancement is first used to improve the contrast between the liver and its surrounding region. A CNN then produces an initial label of the liver region, maximum connectivity analysis removes the non‐liver regions, and level set evolution refines the boundary to yield the final segmentation. Experimental results show that our method outperforms other methods in terms of several evaluation metrics. We believe that the proposed method will find utility in more applications in the area of CT segmentation.

CONFLICT OF INTEREST

The authors declare that there is no conflict of interest that could be perceived as prejudicing the impartiality of the research reported.

AUTHOR CONTRIBUTION

Conception and design: Zhaoxuan Gong, Guodong Zhang, Wenjun Tan, Dazhe Zhao, and Cui Guo. Development of methodology: Zhaoxuan Gong, Wei Guo, and Guodong Zhang. Writing, review, and/or revision of the manuscript: Zhaoxuan Gong, Wei Guo, Guodong Zhang, Wei Zhou, and Cui Guo.
REFERENCES

1.  Fully automated liver segmentation using Sobolev gradient-based level set evolution.

Authors:  Evgin Göçeri
Journal:  Int J Numer Method Biomed Eng       Date:  2016-02-03       Impact factor: 2.747

2.  Active contours without edges.

Authors:  T F Chan; L A Vese
Journal:  IEEE Trans Image Process       Date:  2001       Impact factor: 10.856

3.  Liver Segmentation on CT and MR Using Laplacian Mesh Optimization.

Authors:  Gabriel Chartrand; Thierry Cresson; Ramnada Chav; Akshat Gotra; An Tang; Jacques A De Guise
Journal:  IEEE Trans Biomed Eng       Date:  2016-11-21       Impact factor: 4.538

4.  Liver Segmentation in Abdominal CT Images Using Probabilistic Atlas and Adaptive 3D Region Growing.

Authors:  Shima Rafiei; Nader Karimi; Behzad Mirmahboub; Kayvan Najarian; Banafsheh Felfeliyan; Shadrokh Samavi; S M Reza Soroushmehr
Journal:  Conf Proc IEEE Eng Med Biol Soc       Date:  2019-07

5.  Superpixel-based deep convolutional neural networks and active contour model for automatic prostate segmentation on 3D MRI scans.

Authors:  Giovanni L F da Silva; Petterson S Diniz; Jonnison L Ferreira; João V F França; Aristófanes C Silva; Anselmo C de Paiva; Elton A A de Cavalcanti
Journal:  Med Biol Eng Comput       Date:  2020-06-21       Impact factor: 2.602

6.  Feature Learning Based Random Walk for Liver Segmentation.

Authors:  Yongchang Zheng; Danni Ai; Pan Zhang; Yefei Gao; Likun Xia; Shunda Du; Xinting Sang; Jian Yang
Journal:  PLoS One       Date:  2016-11-15       Impact factor: 3.240

7.  Deep learning-based liver segmentation for fusion-guided intervention.

Authors:  Xi Fang; Sheng Xu; Bradford J Wood; Pingkun Yan
Journal:  Int J Comput Assist Radiol Surg       Date:  2020-04-21       Impact factor: 2.924

8.  An effective method for computerized prediction and segmentation of multiple sclerosis lesions in brain MRI.

Authors:  Sudipta Roy; Debnath Bhattacharyya; Samir Kumar Bandyopadhyay; Tai-Hoon Kim
Journal:  Comput Methods Programs Biomed       Date:  2017-01-10       Impact factor: 5.428

9.  Superpixel-based and boundary-sensitive convolutional neural network for automated liver segmentation.

Authors:  Wenjian Qin; Jia Wu; Fei Han; Yixuan Yuan; Wei Zhao; Bulat Ibragimov; Jia Gu; Lei Xing
Journal:  Phys Med Biol       Date:  2018-05-04       Impact factor: 3.609
