
RefineNet-based 2D and 3D automatic segmentations for clinical target volume and organs at risks for patients with cervical cancer in postoperative radiotherapy.

Chengjian Xiao1, Juebin Jin2, Jinling Yi1, Ce Han1, Yongqiang Zhou1, Yao Ai1, Congying Xie1,3, Xiance Jin1,4.   

Abstract

PURPOSE: Accurate and reliable target volume delineation is critical for safe and successful radiotherapy. The purpose of this study is to develop new 2D and 3D automatic segmentation models based on RefineNet for the clinical target volume (CTV) and organs at risk (OARs) in postoperative cervical cancer, based on computed tomography (CT) images.
METHODS: A 2D RefineNet and a 3D RefineNetPlus3D were adapted and built to automatically segment CTVs and OARs on a total of 44 222 CT slices from 313 patients with stage I-III cervical cancer. Fully convolutional networks (FCNs), U-Net, context encoder network (CE-Net), UNet3D, and ResUNet3D were also trained and tested on the same randomly divided training and validation sets. The performance of these automatic segmentation models was evaluated with the Dice similarity coefficient (DSC), Jaccard similarity coefficient, and average symmetric surface distance by comparing them with manual segmentations on the test data.
RESULTS: The DSC for RefineNet, FCN, U-Net, CE-Net, UNet3D, ResUNet3D, and RefineNetPlus3D was 0.82, 0.80, 0.82, 0.81, 0.80, 0.81, and 0.82, with mean contouring times of 3.2, 3.4, 8.2, 3.9, 9.8, 11.4, and 6.4 s, respectively. The generated RefineNetPlus3D demonstrated good performance in the automatic segmentation of the bladder, small intestine, rectum, and right and left femoral heads, with a DSC of 0.97, 0.95, 0.91, 0.98, and 0.98, respectively, and a mean computation time of 6.6 s.
CONCLUSIONS: The newly adapted RefineNet and developed RefineNetPlus3D were promising automatic segmentation models, yielding accurate and clinically acceptable CTV and OAR contours for cervical cancer patients in postoperative radiotherapy.
© 2022 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, LLC on behalf of The American Association of Physicists in Medicine.


Keywords:  automatic segmentation; cervical cancer; clinical target volume; deep learning; organs at risk


Year:  2022        PMID: 35533205      PMCID: PMC9278674          DOI: 10.1002/acm2.13631

Source DB:  PubMed          Journal:  J Appl Clin Med Phys        ISSN: 1526-9914            Impact factor:   2.243


INTRODUCTION

Cervical cancer is one of the most common gynecological malignancies and the second most prevalent cancer in females. Radiotherapy is one of the main treatment options for cervical cancer in both curative and adjuvant settings. With the development of intensity-modulated radiotherapy (IMRT) and volumetric modulated arc therapy (VMAT), irradiation of surrounding normal organs, as well as the associated acute and chronic toxicity, has been reduced compared with conventional 2D and 3D conformal radiotherapy. IMRT and VMAT use numerous beam segments to modulate the beam intensity, delivering steep dose gradients that conform tightly to target volumes and thereby sparing normal tissue. Therefore, accurate and reliable target volume delineation is critical for the safe and successful application of IMRT and VMAT in patients with cervical cancer. There is a clear consensus regarding the clinical target volume (CTV) in radical and postoperative radiotherapy settings using IMRT and VMAT for patients with cervical cancer, and manual delineation is still the standard practice in most clinics. However, manual delineation is not only time-consuming but also prone to intra- and interobserver variation. CTV variations of up to 19 cm and twofold differences in volume have been reported, resulting in significant dosimetric differences during IMRT and VMAT delivery. On the other hand, with the adoption of image-guided and adaptive radiotherapy, fast and accurate automatic segmentation of target volumes and organs at risk (OARs) is urgently needed. Previously, multi-atlas-based and hybrid techniques were considered the state of the art for automatic segmentation. Atlas-based methods match previously contoured targets to the test images and achieve reasonable accuracy for OAR segmentation, especially in head-and-neck cancer patients.
However, atlas-based segmentation relies heavily on the accuracy of deformable image registration and the selected atlases, and it requires substantial manual editing. Moreover, CTV contouring for cervical cancer differs from OAR contouring in that the CTV contains the gross tumor and subclinical malignant regions with unclear boundaries, so it depends heavily on the clinical experience of oncologists. Torheim et al. used a machine learning method (Fisher's linear discriminant analysis) to contour cervical cancer automatically on MRI images and achieved better results than individual classifier models. However, machine learning-based methods require handcrafted features, which may not be robust to varying image appearances. With the development and wide application of deep learning, deep learning-based automatic segmentation has shown superior performance in reducing target volume delineation variation for many tumors. For cervical cancer, three parallel convolutional neural networks (CNNs) with the same architecture, trained with different image preprocessing methods, have been applied. However, CNNs inevitably reduce the resolution of the original images, increasing the ambiguity of object boundaries. Recently, the lightweight RefineNet was introduced to refine object detectors for autonomous driving; it generates high-resolution semantic features by fusing coarse high-level features with finer-grained low-level features. The purpose of this study is to adapt the RefineNet and develop a RefineNetPlus3D for the automatic segmentation of the CTV and OARs in postoperative cervical cancer based on computed tomography (CT) images, and to investigate the accuracy of the RefineNetPlus3D-based automatic segmentation algorithm by comparing it with several other deep learning methods.

MATERIALS AND METHODS

Patients and contours

Patients with cervical cancer who underwent postoperative IMRT and VMAT in the authors' hospital from January 2018 to September 2020 were retrospectively reviewed in this study. All patients were immobilized in the supine position by a thermoplastic abdominal fixation device. Simulation CT was acquired from the iliac crest to the ischial tuberosities on a 16-slice Brilliance Big Bore CT scanner (Philips Healthcare, Cleveland, OH) at 3-mm slice thickness. Intravenous contrast was injected during the CT scan to enhance the contrast of the target volumes. CT images were transferred in the Digital Imaging and Communications in Medicine format and reconstructed with a matrix size of 512 × 512. Manual segmentations of the CTV and OARs were delineated and verified by two senior radiation oncologists, each with more than 10 years of clinical experience in cervical cancer, and were taken as the ground truth for evaluating the automatic segmentations. The target contouring guideline of the Radiation Therapy Oncology Group (RTOG) 0418 and its atlas on the RTOG website were followed. After delineation, the central vaginal CTV and regional nodal CTV were interpolated into a combined CTV to simplify the modeling of automatic segmentation.

Automatic 2D and 3D segmentation models

The adapted RefineNet in this study consists of an encoder-decoder architecture: the encoding part uses a residual network (ResNet50) as the backbone to progressively down-sample the original images and extract tumor features, and the decoding part consists of residual convolutional units (RCUs), chained residual pooling (CRP), and fusion blocks that recover the features into a final mask with the same shape as the original images, as shown in Figure 1a. The ResNet layers in the encoding part divide naturally into four blocks according to the resolution of their output feature maps; the resolution is halved when passing from one block to the next, so the final feature map is typically 32 times smaller in each spatial dimension than the original image. Figure 1b-d shows the encoder-decoder architectures of the fully convolutional network (FCN), U-Net, and context encoder network (CE-Net) used for comparison.
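As a concrete check of the strides described above, for a 512 × 512 CT slice the four encoder blocks produce feature maps at output strides of 4, 8, 16, and 32 relative to the input (a small sketch; the initial stride-4 reduction by the stem is standard ResNet50 behavior, assumed here rather than stated in the paper):

```python
# Spatial size of each ResNet50 encoder stage for a 512 x 512 CT slice.
# The stem (7x7 conv + max-pool) reduces by 4; each later block halves the
# resolution again, giving output strides of 4, 8, 16, and 32.
size = 512
for stride in (4, 8, 16, 32):
    side = size // stride
    print(f"stride {stride:2d}: {side} x {side} feature map")
# the final map is 16 x 16, i.e. 32x smaller than the original image
```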
FIGURE 1

The architecture of 2D automatic segmentation models: (a) the architecture of lightweight RefineNet50; (b) the architecture of FCN; (c) the architecture of U‐Net; (d) the architecture of CE‐Net. CE‐Net, context encoder network; FCN, fully convolutional network

To use the slice thickness information of 3D medical images more efficiently, a 3D automatic segmentation model, RefineNetPlus3D, was developed from the 2D RefineNet described above by replacing all 2D operations with their 3D counterparts. In the RefineNetPlus3D, the encoder aggregates semantic information by reducing spatial resolution, learning features from parts to the whole, while the decoder receives semantic information from the bottom of the network. The whole RefineNet decoder was replaced with a 3D Refine block, which combines the RCU, CRP, and fusion blocks. In the 3D Refine block, ReLU activations and batch normalization were added to alleviate gradient vanishing in the RCU, CRP, and fusion operations. Additionally, the first down- and up-sampling layers were modified to a rate of 1/2 to reduce feature loss. The RefineNetPlus3D has shortcut connections that transfer low-level features from the encoder to the decoder, providing an efficient and generic way of fusing coarse high-level features (rich semantic information for classification) with finer-grained low-level features (more detailed information for clear boundaries) to generate high-resolution semantic features. The architecture of the RefineNetPlus3D is shown in Figure 2. UNet3D and ResUNet3D architectures were also applied in this study to evaluate the performance of the developed RefineNetPlus3D.
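The RCU and CRP components of the 3D Refine block can be sketched in PyTorch as follows. This is a minimal illustration, not the authors' implementation: the channel count, number of CRP stages, and 3 × 3 × 3 / 5 × 5 × 5 kernel sizes are assumptions for the sketch, and the batch normalization and ReLU placement follows the description above.

```python
import torch
import torch.nn as nn

class RCU3D(nn.Module):
    """3D residual convolutional unit: two 3x3x3 convolutions with a skip
    connection; BN + ReLU are added (as described) to ease gradient flow."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.BatchNorm3d(ch), nn.ReLU(inplace=True),
            nn.Conv3d(ch, ch, 3, padding=1),
            nn.BatchNorm3d(ch), nn.ReLU(inplace=True),
            nn.Conv3d(ch, ch, 3, padding=1),
        )
    def forward(self, x):
        return x + self.body(x)

class CRP3D(nn.Module):
    """Chained residual pooling: successive stride-1 max-pools, each followed
    by a convolution, with every stage's output summed back onto the input."""
    def __init__(self, ch, stages=2):
        super().__init__()
        self.stages = nn.ModuleList(
            nn.Sequential(nn.MaxPool3d(5, stride=1, padding=2),
                          nn.Conv3d(ch, ch, 3, padding=1))
            for _ in range(stages))
    def forward(self, x):
        out, path = x, x
        for stage in self.stages:
            path = stage(path)
            out = out + path
        return out

x = torch.randn(1, 8, 16, 32, 32)   # (batch, channels, depth, height, width)
y = CRP3D(8)(RCU3D(8)(x))           # both blocks preserve the spatial shape
print(y.shape)
```

Because both blocks are shape-preserving, they can be chained freely inside the decoder; the fusion step (not shown) would up-sample and sum feature maps from different encoder stages.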
FIGURE 2

The architecture of generated 3D automatic segmentation model: (a) the architecture of RefinenetPlus3D; (b) the detail of 3D Refine block (RCU, CRP, and fusion) in the RefinenetPlus3D. CRP, chained residual pooling; RCU, residual convolutional unit

The training and testing of all models were implemented on a GeForce RTX 2080 Ti graphics card. The training sets (CT images with manual segmentation labels) were used to tune the network parameters, with data augmentation methods such as random rotation adopted to enlarge the training sets. A weight decay of 0.8 and a "poly" learning rate policy were applied, with an initial learning rate of 2e-4 for 44 training iterations for the 2D models and 1e-4 for 300 training iterations for the 3D models. The Dice-coefficient loss and binary cross-entropy loss were used for the 2D and 3D models, respectively. Adam, which converges the network quickly, was chosen as the optimizer for both 2D and 3D models. Under hardware constraints, the final batch size was 2 for the 3D networks and 6 for the 2D networks.
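The "poly" learning rate policy mentioned above is commonly implemented as base_lr × (1 − iter/max_iter)^power; a minimal sketch, where the power value (0.9 here) is an assumption, since the paper does not state it:

```python
def poly_lr(base_lr, cur_iter, max_iter, power=0.9):
    """'Poly' decay: the rate falls smoothly from base_lr to 0 at max_iter."""
    return base_lr * (1.0 - cur_iter / max_iter) ** power

# e.g. the 3D models start at 1e-4 over 300 iterations; sample a few points:
for it in (0, 150, 299):
    print(it, poly_lr(1e-4, it, 300))
```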

Model evaluation

The 2D and 3D models for the CTV and OARs were trained and validated with randomly divided training and validation cohorts. The Dice similarity coefficient (DSC), Jaccard similarity coefficient (JSC), and average symmetric surface distance (ASSD) were applied to evaluate the performance of the automatic models by comparing them with manual segmentations on the test data sets. The DSC is defined as

DSC = 2 |V_pre ∩ V_GT| / (|V_pre| + |V_GT|)

where V_pre represents the region of interest (ROI) automatically contoured by the deep learning algorithm, and V_GT represents the ground-truth ROI created by the oncologist. A value of 1 indicates perfect concordance between the two contours. The ASSD is the average of the distances from points on the boundary of the prediction to the boundary of the ground truth and from points on the boundary of the ground truth to the boundary of the prediction:

ASSD(A, B) = [ Σ_{a∈A} min_{b∈B} d(a, b) + Σ_{b∈B} min_{a∈A} d(b, a) ] / (|A| + |B|)

where A and B are the sets of surface voxels of the ground truth and the prediction, and d(·,·) is the Euclidean distance. An ASSD value of 0 mm indicates a perfect segmentation. The JSC compares the similarities and differences between finite sample sets; the larger the JSC value, the higher the similarity:

JSC = |A ∩ B| / |A ∪ B|

where A represents the ground truth and B the predicted segmentation.
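The overlap metrics can be sketched for binary masks as follows (a minimal NumPy illustration on a toy 8 × 8 grid; ASSD is omitted because it additionally requires extracting surface voxels and nearest-surface distances, typically via a Euclidean distance transform):

```python
import numpy as np

def dsc(pred, gt):
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|)."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def jsc(pred, gt):
    """Jaccard similarity coefficient: |A ∩ B| / |A ∪ B|."""
    inter = np.logical_and(pred, gt).sum()
    return inter / np.logical_or(pred, gt).sum()

# Toy example: two overlapping 4 x 4 squares on an 8 x 8 grid.
pred = np.zeros((8, 8), dtype=bool); pred[2:6, 2:6] = True  # 16 pixels
gt = np.zeros((8, 8), dtype=bool); gt[3:7, 3:7] = True      # 16 pixels, 9 shared
print(round(dsc(pred, gt), 4))  # 0.5625 = 2*9 / (16 + 16)
print(round(jsc(pred, gt), 4))  # 0.3913 = 9 / 23
```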

Statistical analysis

The models were built using PyTorch 1.5.0, Keras 2.4.0, and Python 3.7. Patient characteristics were analyzed using Fisher's exact test and the Mann–Whitney U-test. Statistical analyses were performed using SPSS version 19.0 (IBM Corp., Armonk, NY, USA), with p < 0.05 considered statistically significant.

RESULTS

A total of 313 patients with stage I–III cervical cancer, at a median age of 55 years (range 21–80 years), were enrolled in this study. Patients were randomly divided into training (251 patients), validation (31 patients), and testing (31 patients) sets, comprising a total of 44 222 CT slices. Most patients were diagnosed with squamous cell carcinoma. Detailed characteristics of the enrolled patients are shown in Table 1.
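The 251/31/31 division above can be reproduced in spirit with a patient-level shuffle, so that all slices of a given patient land in exactly one set (a sketch; the seed and ID scheme are illustrative, not taken from the paper):

```python
import random

patient_ids = [f"P{i:03d}" for i in range(313)]  # 313 enrolled patients
random.seed(42)                                   # illustrative seed
random.shuffle(patient_ids)

# Split at the patient level, never at the slice level, to avoid leakage
# between training and evaluation sets.
train = patient_ids[:251]
val = patient_ids[251:282]
test = patient_ids[282:]
print(len(train), len(val), len(test))  # 251 31 31
```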
TABLE 1

Clinical characteristics of enrolled patients and images

| Characteristic | Training set (n = 251) | Validation set (n = 31) | Testing set (n = 31) | p |
| --- | --- | --- | --- | --- |
| Age (years) | | | | 0.001 |
|   Mean | 54.08 | 55.03 | 53.47 | |
|   Median | 55 | 55 | 53 | |
|   Range | 21–78 | 27–80 | 21–78 | |
|   SD | 10.98 | 10.59 | 8.80 | |
| Slice number | 35 324 | 4394 | 4504 | |
| Histological type | | | | 0.21 |
|   Squamous cell carcinoma | 209 | 24 | 26 | |
|   Adenocarcinoma | 22 | 7 | 4 | |
|   Adenosquamous carcinoma | 7 | 0 | 0 | |
|   Unknown | 13 | 0 | 1 | |
| Clinical stage | | | | 0.26 |
|   I | 137 | 19 | 23 | |
|   II | 112 | 12 | 8 | |
|   III | 2 | 0 | 0 | |

p values are calculated from univariate association tests between subgroups: the Mann–Whitney U-test for continuous variables and Fisher's exact test for categorical variables.

Figure 3 shows the performance of the 2D automatic segmentation models in comparison with manual contours for the CTVs and OARs. A quantitative evaluation of the four 2D models is shown in Table 2. The DSC of RefineNet, FCN, U-Net, and CE-Net for CTV contouring was 0.82, 0.80, 0.82, and 0.81, with mean contouring times of 3.2, 3.4, 8.2, and 3.9 s, respectively. The mean computing time of RefineNet, FCN, U-Net, and CE-Net for the OARs was around 3.9, 8.2, 4.8, and 4.7 s, respectively.
FIGURE 3

Typical automatic delineation results from 2D models: (a) clinical target volume contours in comparison with manual contours; (b) automatic delineation results of organs at risks in comparison with manual contours

TABLE 2

Performance evaluations of 2D automatic segmentation models for CTV and OARs

| Parameter | Structure | RefineNet | U-Net | CE-Net | FCN |
| --- | --- | --- | --- | --- | --- |
| JSC | CTV | 0.72 | 0.71 | 0.70 | 0.68 |
| | Bladder | 0.92 | 0.91 | 0.91 | 0.92 |
| | SI | 0.85 | 0.86 | 0.86 | 0.86 |
| | FR | 0.95 | 0.95 | 0.95 | 0.94 |
| | FL | 0.95 | 0.94 | 0.95 | 0.94 |
| | Rectum | 0.83 | 0.81 | 0.82 | 0.82 |
| DSC | CTV | 0.82 | 0.82 | 0.81 | 0.80 |
| | Bladder | 0.95 | 0.95 | 0.94 | 0.96 |
| | SI | 0.90 | 0.90 | 0.91 | 0.91 |
| | FR | 0.97 | 0.97 | 0.97 | 0.97 |
| | FL | 0.97 | 0.96 | 0.97 | 0.97 |
| | Rectum | 0.88 | 0.87 | 0.89 | 0.88 |
| ASSD (mm) | CTV | 4.17 | 4.18 | 4.30 | 4.58 |
| | Bladder | 1.24 | 1.28 | 1.34 | 1.29 |
| | SI | 2.64 | 2.44 | 2.42 | 2.59 |
| | FR | 0.54 | 0.49 | 0.50 | 0.57 |
| | FL | 0.49 | 0.48 | 0.49 | 0.50 |
| | Rectum | 1.27 | 1.61 | 1.48 | 1.31 |
| Contouring time (s) | CTV | 3.2 | 8.2 | 3.9 | 3.4 |
| | Bladder | 3.9 | 8.3 | 3.8 | 3.8 |
| | SI | 3.9 | 8.2 | 3.6 | 4.1 |
| | FR | 3.9 | 8.2 | 4.2 | 3.6 |
| | FL | 3.9 | 8.1 | 3.8 | 4.1 |
| | Rectum | 3.9 | 8.0 | 3.9 | 3.3 |

Abbreviations: ASSD, average symmetric surface distance; CE-Net, context encoder network; CTV, clinical target volume; DSC, Dice similarity coefficient; FCN, fully convolutional network; FL, left femoral head; FR, right femoral head; JSC, Jaccard similarity coefficient; OARs, organs at risk; SI, small intestine.

Figure 4 shows the performance of the 3D models through visualization of the automatically segmented CTV and OARs for one cervical cancer patient. A quantitative evaluation of the three 3D models is shown in Table 3. The DSC of UNet3D, ResUNet3D, and RefineNetPlus3D for the CTV was 0.80, 0.81, and 0.82, with mean contouring times of 9.8, 11.4, and 6.4 s, respectively. The generated RefineNetPlus3D demonstrated good performance, with a DSC of 0.97, 0.95, 0.91, 0.98, and 0.98 for the bladder, small intestine, rectum, and right and left femoral heads, respectively. The mean computing time of the RefineNetPlus3D for these OARs was around 6.6 s.
FIGURE 4

Typical automatic delineation results from 3D models: (a)–(c) clinical target volumes in axial, sagittal and coronal views; (d)–(f) contours of organs at risks in axial, sagittal, and coronal views, where yellow lines represent manual contours, purple for RefinenetPlus3D, blue for 3DResUNet, and green for 3DUNet contours

TABLE 3

Evaluation of 3D automatic segmentation models for CTVs and OARs

| Parameter | Structure | UNet3D | ResUNet3D | RefineNetPlus3D |
| --- | --- | --- | --- | --- |
| JSC | CTV | 0.67 | 0.69 | 0.69 |
| | Bladder | 0.93 | 0.94 | 0.94 |
| | SI | 0.88 | 0.90 | 0.90 |
| | FR | 0.94 | 0.96 | 0.96 |
| | FL | 0.95 | 0.96 | 0.96 |
| | Rectum | 0.78 | 0.84 | 0.84 |
| DSC | CTV | 0.80 | 0.81 | 0.82 |
| | Bladder | 0.96 | 0.97 | 0.97 |
| | SI | 0.93 | 0.95 | 0.95 |
| | FR | 0.97 | 0.98 | 0.98 |
| | FL | 0.97 | 0.98 | 0.98 |
| | Rectum | 0.88 | 0.91 | 0.91 |
| ASSD (mm) | CTV | 3.56 | 3.46 | 2.13 |
| | Bladder | 0.59 | 0.48 | 0.30 |
| | SI | 1.68 | 1.45 | 1.02 |
| | FR | 0.34 | 0.23 | 0.16 |
| | FL | 0.29 | 0.20 | 0.15 |
| | Rectum | 1.37 | 0.92 | 0.61 |
| Contouring time (s) | CTV | 9.8 | 11.4 | 6.4 |
| | Bladder | 9.7 | 10.3 | 6.3 |
| | SI | 10.5 | 11.0 | 6.7 |
| | FR | 10.9 | 10.6 | 6.7 |
| | FL | 10.3 | 11.0 | 6.7 |
| | Rectum | 10.1 | 12.3 | 6.7 |

Abbreviations: ASSD, average symmetric surface distance; CTV, clinical target volumes; DSC, Dice similarity coefficient; FL, left femoral head; FR, right femoral head; JSC, Jaccard similarity coefficient; OARs, organs at risk; SI, small intestine.


DISCUSSION

Accurate and quick segmentation of target volumes and OARs is critical to precise IMRT and VMAT optimization and delivery, as well as to the application of adaptive radiotherapy. In this study, new 2D and 3D automatic segmentation models were adapted and generated based on RefineNet for the CTV and OARs of patients with cervical cancer in postoperative radiotherapy. Both the adapted 2D RefineNet and the developed RefineNetPlus3D achieved better performance in CTV segmentation and similar performance in OAR segmentation, with shorter computing times, compared with other commonly used deep learning algorithms. During IMRT and VMAT optimization, the radiation dose is prescribed to the tumor target volumes to achieve adequate coverage, maximizing tumor control and minimizing radiation toxicity. However, the poorly defined tumor-to-normal-tissue interface of cervical cancer, due to the lack of tissue contrast on CT images, makes CTV contouring a challenging task and results in high intra- and interobserver variability. Deep learning-based automatic segmentation is therefore increasingly investigated to improve delineation consistency and accuracy. In this study, both 2D (RefineNet, CE-Net, U-Net, FCN) and 3D (UNet3D, ResUNet3D, RefineNetPlus3D) deep learning-based models were investigated for automatic segmentation of the cervical cancer CTV in postoperative radiotherapy, achieving DSCs of 0.82, 0.81, 0.82, 0.80, 0.80, 0.81, and 0.82, respectively. Similarly, Ju et al. reported a DSC of 0.82 using a Dense V-Net for CTV delineation in cervical cancer radiotherapy. However, the DSCs of our models fall short of those of the CNNs in Rhee et al., the 3D CNN in Wang et al., and the 2.5D CNN (DpnU-Net) in Liu et al., which reported DSCs around 0.86 for the cervical cancer CTV. This indicates that our adapted 2D and 3D RefineNet models still have room for improvement.
Other factors that may affect contouring accuracy, such as image and manual contour quality, need further investigation. Volume definition of OARs is a prerequisite for meaningful 3D treatment planning and accurate dose reporting. Studies have reported that deep learning algorithms outperform other state-of-the-art segmentation methods and commercially available software in the automatic segmentation of OARs such as the rectum and parotid. In this study, both the 2D and 3D models demonstrated good performance in the automatic segmentation of the bladder and the right and left femoral heads. The 3D models performed slightly better than the 2D models on the small intestine and rectum, with mean DSCs of 0.95 versus 0.90 and 0.91 versus 0.88, respectively, as shown in Tables 2 and 3. Because the RefineNetPlus3D developed in this study employs more high-level feature-extraction hidden layers, using RCU, CRP, and fusion modules to aggregate contextual features, it improved the recognition of the unclear boundaries of parts of the rectum and small intestine. In general, the automatic segmentation models performed best on the bladder and femoral heads, which have clear contour boundaries, achieving DSCs higher than 0.97. The relatively poorer performance on the rectum may be due to its small volume and unclear outline. Similarly, Elguindi et al. reported DSCs of 0.93 ± 0.04 and 0.82 ± 0.05 for the bladder and rectum, respectively, using a two-dimensional FCN and DeepLabV3+ on MRI images. Balagopal et al. also presented similar DSCs for the bladder (0.95) and rectum (0.84) with deep learning-based auto-segmentation. Saving radiation oncologists' contouring time is an inherent benefit of automatic segmentation of the CTV and OARs. The average manual CTV and OAR contouring time for one cervical cancer patient is 90–120 min. In this study, the proposed algorithms took only half the computation time of U-Net under the same computer configuration.
Moreover, the contouring time was only about 4 s for the 2D RefineNet and around 6 s for the RefineNetPlus3D. The current results in cervical CTV and OAR contouring also demonstrate that the RefineNetPlus3D is able to learn high-level semantic features well, so the method may have potential for volume delineation in other cancers; we will explore this possibility in future studies. The model analysis in this study was based on the whole image for segmentation prediction, rather than focusing only on the target area, which makes automatic segmentation of the cervical cancer CTV more challenging: images without target volumes acted as negative samples during modeling and affected the accuracy of the models. A better balance between positive and negative samples may further improve performance, and refining the 2D and 3D models as more data are collected would also be a promising direction.

CONCLUSIONS

Deep learning-based automatic segmentation is critical for the accuracy and efficiency of radiotherapy. The newly adapted RefineNet and the developed RefineNetPlus3D in this study demonstrated the ability to learn high-level semantic features and achieved accurate and clinically acceptable automatic segmentation of the CTV and OARs for cervical cancer patients in postoperative radiotherapy. The RefineNetPlus3D may also be promising for volume delineation in other cancers, which will be investigated in our future studies.

CONFLICT OF INTEREST

The authors declare there is no conflict of interest.

AUTHOR CONTRIBUTIONS

Conception and design: Congying Xie and Xiance Jin. Administrative support: Xiance Jin. Provision of study materials or patients: Chengjian Xiao and Juebin Jin. Collection and assembly of data: Chengjian Xiao and Juebin Jin. Data analysis and interpretation: Jinling Yi, Ce Han, Yongqiang Zhou, and Yao Ai. Manuscript writing: Chengjian Xiao, Juebin Jin, Congying Xie, and Xiance Jin. Final approval of manuscript: Congying Xie and Xiance Jin. All authors contributed to the article and approved the submitted version.
REFERENCES (29 in total)

1.  Performance measure characterization for evaluating neuroimage segmentation algorithms.

Authors:  Herng-Hua Chang; Audrey H Zhuang; Daniel J Valentino; Woei-Chyn Chu
Journal:  Neuroimage       Date:  2009-04-05       Impact factor: 6.556

2.  CE-Net: Context Encoder Network for 2D Medical Image Segmentation.

Authors:  Zaiwang Gu; Jun Cheng; Huazhu Fu; Kang Zhou; Huaying Hao; Yitian Zhao; Tianyang Zhang; Shenghua Gao; Jiang Liu
Journal:  IEEE Trans Med Imaging       Date:  2019-03-07       Impact factor: 10.048

3. (Review) Advances in Auto-Segmentation.

Authors:  Carlos E Cardenas; Jinzhong Yang; Brian M Anderson; Laurence E Court; Kristy B Brock
Journal:  Semin Radiat Oncol       Date:  2019-07       Impact factor: 5.934

4.  Fully automated organ segmentation in male pelvic CT images.

Authors:  Anjali Balagopal; Samaneh Kazemifar; Dan Nguyen; Mu-Han Lin; Raquibul Hannan; Amir Owrangi; Steve Jiang
Journal:  Phys Med Biol       Date:  2018-12-14       Impact factor: 3.609

5.  Development and validation of a deep learning algorithm for auto-delineation of clinical target volume and organs at risk in cervical cancer radiotherapy.

Authors:  Zhikai Liu; Xia Liu; Hui Guan; Hongan Zhen; Yuliang Sun; Qi Chen; Yu Chen; Shaobin Wang; Jie Qiu
Journal:  Radiother Oncol       Date:  2020-10-08       Impact factor: 6.280

6.  Segmentation of organs-at-risks in head and neck CT images using convolutional neural networks.

Authors:  Bulat Ibragimov; Lei Xing
Journal:  Med Phys       Date:  2017-02       Impact factor: 4.071

7.  Dosimetric benefits of intensity-modulated radiotherapy and volumetric-modulated arc therapy in the treatment of postoperative cervical cancer patients.

Authors:  Xia Deng; Ce Han; Shan Chen; Congying Xie; Jinling Yi; Yongqiang Zhou; Xiaomin Zheng; Zhenxiang Deng; Xiance Jin
Journal:  J Appl Clin Med Phys       Date:  2016-11-21       Impact factor: 2.102

8.  Deep learning-based auto-segmentation of targets and organs-at-risk for magnetic resonance imaging only planning of prostate radiotherapy.

Authors:  Sharif Elguindi; Michael J Zelefsky; Jue Jiang; Harini Veeraraghavan; Joseph O Deasy; Margie A Hunt; Neelam Tyagi
Journal:  Phys Imaging Radiat Oncol       Date:  2019-12-12

9.  CT based automatic clinical target volume delineation using a dense-fully connected convolution network for cervical Cancer radiation therapy.

Authors:  Zhongjian Ju; Wen Guo; Shanshan Gu; Jin Zhou; Wei Yang; Xiaohu Cong; Xiangkun Dai; Hong Quan; Jie Liu; Baolin Qu; Guocai Liu
Journal:  BMC Cancer       Date:  2021-03-08       Impact factor: 4.430

10.  Automatic contouring system for cervical cancer using convolutional neural networks.

Authors:  Dong Joo Rhee; Anuja Jhingran; Bastien Rigaud; Tucker Netherton; Carlos E Cardenas; Lifei Zhang; Sastry Vedam; Stephen Kry; Kristy K Brock; William Shaw; Frederika O'Reilly; Jeannette Parkes; Hester Burger; Nazia Fakie; Chris Trauernicht; Hannah Simonds; Laurence E Court
Journal:  Med Phys       Date:  2020-10-09       Impact factor: 4.071
