The Tumor Target Segmentation of Nasopharyngeal Cancer in CT Images Based on Deep Learning Methods.

Shihao Li1, Jianghong Xiao2, Ling He3, Xingchen Peng3, Xuedong Yuan4.   

Abstract

Radiotherapy is the main treatment strategy for nasopharyngeal carcinoma. A major factor affecting radiotherapy outcome is the accuracy of target delineation. Target delineation is time-consuming, and the results can vary depending on the experience of the oncologist. Using deep learning methods to automate target delineation may increase its efficiency. We used a modified deep learning model called U-Net to automatically segment and delineate tumor targets in patients with nasopharyngeal carcinoma. Patients were randomly divided into a training set (302 patients), validation set (100 patients), and test set (100 patients). The U-Net model was trained using labeled computed tomography images from the training set. The U-Net was able to delineate nasopharyngeal carcinoma tumors with an overall dice similarity coefficient of 65.86% for lymph nodes and 74.00% for the primary tumor, with respective Hausdorff distances of 32.10 and 12.85 mm. Delineation accuracy decreased with increasing cancer stage. U-Net-assisted delineation took approximately 2.6 hours per patient, compared to 3 hours for an entirely manual procedure. Deep learning models can therefore improve the accuracy, consistency, and efficiency of primary tumor (T stage) delineation, but additional physician input may be required for lymph nodes.

Keywords:  automatic delineation; deep learning; nasopharyngeal cancer

Year:  2019        PMID: 31736433      PMCID: PMC6862777          DOI: 10.1177/1533033819884561

Source DB:  PubMed          Journal:  Technol Cancer Res Treat        ISSN: 1533-0338


Introduction

Nasopharyngeal carcinoma (NPC) is one of the most common cancers in the nasopharynx. In 2015, an estimated 833 019 new cases of NPC and 468 745 deaths due to NPC were reported in China alone.[1] The main treatment strategy for NPC is radiotherapy, which has a 5-year survival rate of about 80%, with or without chemotherapy.[2] The most important factor for precise and effective radiotherapy in patients with NPC is accurate target delineation. However, accurate target delineation is time-consuming: Manual target delineation of a single head and neck tumor typically requires 2.7 hours, while delineation of tumor volume and adjacent normal tissues in NPC requires more than 3 hours.[3,4] In fact, delineation must be repeated many times when treating locally advanced NPC due to tumor volume shrinkage and anatomical changes during treatment. Target delineation accuracy is also strongly dependent on the training and experience of the radiation oncologist and can vary widely.[5-7] Therefore, it would be useful to develop a fully automatic delineation method to improve the consistency and accuracy of delineation, as well as to relieve the workload for doctors. Deep learning is a method of machine learning based on artificial neural networks. 
Deep learning methods have been shown to perform better than traditional machine learning algorithms in many computer vision tasks, especially object detection in images, regression prediction, and semantic segmentation.[8-10] Convolutional neural network (CNN) is a deep learning model with the ability to learn from labeled data, and it has shown impressive accuracy in prediction and detection in medical applications.[11-15] For example, multiple-instance learning using chest X-ray images can detect tuberculosis with an area under the curve of 0.86.[13] An alternating decision tree model using data from structural imaging, age, and scores on the Mini-Mental State Examination predicted treatment response in patients with late-life depression with 89% accuracy.[15] Convolutional neural networks have also been used to segment organs and substructures during targeted treatments. A deep learning model based on CNNs has been used to segment liver images and optimize surface evolution, showing a dice similarity coefficient (DSC) of nearly 97%.[12] A modified U-shaped CNN (U-Net) was used to segment retina thickness and yielded a mean DSC of 95.4% ± 4.6%.[16] Segmentation based on deep learning has also been used in treating pulmonary nodules, liver metastases, and pancreatic cancer.[17-19] Studies show that CNNs can be useful for delineating tumor targets for radiotherapy in brain, rectal, and breast cancer.[20-24] A deep learning model called DeepMedic was used to segment brain tumors with a DSC of 91.4%.[20] Another study using U-Net to segment brain tumors achieved a DSC of 86%.[21] Deep learning models have also been used in rectal cancer to accurately delineate the clinical target volume (CTV), organs at risk, and the target tumor with a DSC of 78% to 87%.[22,24] A CNN model called DD-ResNet was developed to delineate CTVs for breast cancer radiotherapy using big data and was shown to perform better than other deep learning methods, with a DSC of 91%.[23] Given the 
above studies, we reasoned that CNNs may also be useful for delineating NPC targets for radiotherapy. However, segmentation of NPC is more complex than that of other tumor types because of the ambiguous, blurred boundaries between the tumor and normal tissues. Our study makes 4 main contributions. First, few reports have addressed delineation of the primary tumor and lymph nodes in planning computed tomography (CT) images for radiotherapy with CNNs, particularly with a data set as large as ours; we used a modified version of U-Net to segment CT images from 502 patients with NPC and delineate radiotherapy targets.[25] Second, we evaluated the model's performance from early to advanced stage: both the DSC and the Hausdorff distance (HD) showed a downward trend in accuracy from early to advanced stage for both the primary tumor and the lymph nodes. Third, a normalization technique was applied when preprocessing the input CT images, which improved the accuracy of target volume delineation for NPC segmentation with deep learning methods. Finally, the deep learning model delineated the nasopharynx gross tumor volume (GTVnx) with high accuracy, but delineation of the lymph node gross tumor volume (GTVnd) required expert intervention, especially in N3 patients.

Materials and Methods

Data Sets

All experimental procedures involving human CT images were approved by the West China Hospital Ethics Committee. CT images were obtained from 502 patients with NPC admitted to the hospital over a period of 5 years. The patients were randomly divided into 3 groups: a training set (302 patients), validation set (100 patients), and testing set (100 patients). Tumor clinical stage was determined according to the American Joint Committee on Cancer (AJCC) staging system (seventh edition). Demographic data are shown in Table 1. There was no difference in the relative proportions of primary tumors (T stage) or lymph node (N stage) among the training, validation, and testing sets.
Table 1.

Baseline Characteristics of the 502 NPC Patients.a

Characteristics        | Training Set, n = 302 | Validation Set, n = 100 | Testing Set, n = 100
Median age, y (range)  | 46.9 (18-73)          | 52.3 (12-67)            | 50.7 (25-72)
Sex                    |                       |                         |
 Male                  | 195 (64.6%)           | 73 (73%)                | 69 (69%)
 Female                | 107 (35.4%)           | 27 (27%)                | 31 (31%)
T classification       |                       |                         |
 T1                    | 73 (24.2%)            | 23 (23%)                | 18 (18%)
 T2                    | 76 (25.2%)            | 25 (25%)                | 32 (32%)
 T3                    | 100 (33.1%)           | 33 (33%)                | 20 (20%)
 T4                    | 53 (17.5%)            | 19 (19%)                | 40 (40%)
N classification       |                       |                         |
 N0                    | 39 (12.9%)            | 13 (13%)                | 15 (15%)
 N1                    | 98 (32.5%)            | 33 (33%)                | 29 (29%)
 N2                    | 155 (51.3%)           | 52 (52%)                | 43 (43%)
 N3                    | 10 (3.3%)             | 2 (2%)                  | 13 (13%)
Overall stage          |                       |                         |
 I                     | 20 (6.6%)             | 6 (6%)                  | 4 (4%)
 II                    | 36 (11.9%)            | 12 (12%)                | 15 (15%)
 III                   | 97 (32.1%)            | 34 (34%)                | 34 (34%)
 IV                    | 149 (49.3%)           | 48 (48%)                | 47 (47%)

Abbreviation: NPC, nasopharyngeal carcinoma.

a Tumor and lymph node stage were judged by the seventh edition of the American Joint Committee on Cancer (AJCC) stage criteria.

In total, 20 676 CT slices were collected from the 502 CT scans: 13 310 slices for the training set, 3673 for the validation set, and 3693 for the testing set. Computed tomography slices were extracted from Digital Imaging and Communications in Medicine (DICOM) files, with an image resolution of 512 × 512 and a slice thickness of 3 mm. The gray levels, converted from HU values in the DICOM files, ranged from 0 to 3071. The target regions on CT slices were independently determined by 2 senior radiation oncologists and labeled as nasopharyngeal primary tumor target or metastatic lymph node target.
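As a rough illustration, the 302/100/100 patient-level split described above can be sketched as follows (the function name and seed are ours, not from the study):

```python
import random

def split_patients(patient_ids, n_train=302, n_val=100, n_test=100, seed=0):
    """Randomly split patient IDs into training/validation/testing sets
    (302/100/100, as in the paper). Splitting at the patient level keeps
    all slices from one patient in a single set."""
    ids = list(patient_ids)
    assert len(ids) == n_train + n_val + n_test
    rng = random.Random(seed)
    rng.shuffle(ids)
    train = ids[:n_train]
    val = ids[n_train:n_train + n_val]
    test = ids[n_train + n_val:]
    return train, val, test

train, val, test = split_patients(range(502))
print(len(train), len(val), len(test))  # 302 100 100
```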

Preprocessing

To make the images more suitable for segmentation, they were preprocessed as follows. Determination of the region of interest (ROI): An original DICOM image has a size of 512 × 512 pixels. In the training phase, the large uninformative regions of the original CT image impose a heavy computing workload. Because a subregion of the image already contains the target and the main anatomical structures, a 224 × 224 region was cropped from the original CT image as the ROI. The cropping operation is shown in Figure 1.
Figure 1.

Cropping the region of interest.
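A minimal sketch of the ROI cropping step, assuming a simple center crop since the paper does not state how the crop window was positioned (the function name and default center are illustrative):

```python
import numpy as np

def crop_roi(ct_slice, size=224, center=None):
    """Crop a size x size region of interest from a 512 x 512 CT slice.
    The crop center defaults to the image center; the paper does not
    specify how the window was placed, so a (row, col) center can be
    supplied to cover the target and main anatomy."""
    h, w = ct_slice.shape
    cy, cx = center if center is not None else (h // 2, w // 2)
    half = size // 2
    return ct_slice[cy - half:cy + half, cx - half:cx + half]

slice_512 = np.zeros((512, 512), dtype=np.int16)
roi = crop_roi(slice_512)
print(roi.shape)  # (224, 224)
```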

Computed tomography image normalization: Different CT scanners may have different configurations. To eliminate these differences, a normalizing operation was applied to the CT images using the following formula:

Pixel_norm = (Pixel - Pixel_min) / (Pixel_max - Pixel_min)    (1)

where Pixel is the source pixel value in the CT image; Pixel_norm is the normalized pixel value; and Pixel_min and Pixel_max are the minimum and maximum gray values of the source CT image, respectively.
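Equation 1 amounts to standard min-max scaling of the gray levels to [0, 1], which can be written directly in NumPy (the helper name is ours):

```python
import numpy as np

def normalize_ct(pixels):
    """Min-max normalization of CT gray levels to [0, 1], per Equation 1:
    Pixel_norm = (Pixel - Pixel_min) / (Pixel_max - Pixel_min)."""
    pmin, pmax = pixels.min(), pixels.max()
    return (pixels - pmin) / (pmax - pmin)

# Gray levels in this paper range from 0 to 3071.
ct = np.array([[0.0, 1024.0], [2048.0, 3071.0]])
norm = normalize_ct(ct)
print(norm.min(), norm.max())  # 0.0 1.0
```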

Deep Learning Model for Delineation

In the field of image segmentation, CNNs have shown excellent performance. The fully convolutional network (FCN) was the first CNN model to perform image segmentation using deconvolution layers.[8] The CT image is encoded from a 224 × 224 input patch, and the FCN architecture predicts an output image of the region of interest. A more elegant FCN variant, named U-Net, extracts a large number of feature channels in its upsampling path. However, for segmentation of nasopharyngeal CT images, FCN and U-Net predict outputs at a lower spatial resolution than the source CT image, which makes their output size unsuitable here. Therefore, a modified version of the U-Net model was proposed, in which the downsampling and upsampling layers have similar learning capacity. Each convolution layer consists of a padded convolution followed by batch normalization and a Rectified Linear Unit activation function.[26] Throughout the model, each convolution layer outputs the same number of feature maps as it receives. In the downsampling path, the 224 × 224 input CT image is reduced to a spatial dimension of 14 × 14; conversely, the upsampling path restores the feature maps from 14 × 14 to 224 × 224, concatenating its output feature maps with the corresponding feature maps from the downsampling path. The network diagram is shown in Figure 2, with the number of kernels indicated at the output of each convolutional layer.
Figure 2.

U-Net architecture.
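The stated 224 × 224 → 14 × 14 reduction implies a downsampling factor of 16, ie, four 2× pooling steps; a small sketch of the resulting spatial sizes (the step count is inferred from the factor of 16, not stated explicitly in the paper):

```python
def shape_trace(input_size=224, n_pools=4):
    """Trace spatial sizes through the encoder and decoder of the U-Net.
    Four 2x downsampling steps take 224 -> 14 (a factor of 16); the
    decoder mirrors them back to 224 so the output matches the input ROI."""
    down = [input_size // 2 ** i for i in range(n_pools + 1)]
    up = down[::-1]
    return down, up

down, up = shape_trace()
print(down)  # [224, 112, 56, 28, 14]
print(up)    # [14, 28, 56, 112, 224]
```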

The U-Net model was implemented with the Google TensorFlow framework, a well-known machine learning library, and accelerated by NVIDIA Compute Unified Device Architecture (CUDA).[27,28] The CT image data set was divided into training, validation, and testing sets. The training set (302 patients) was used to optimize the parameters of the U-Net: the original 2-dimensional CT images were the inputs, and the corresponding segmentation probability maps for the GTVnx and GTVnd were the outputs. The validation set was used to tune the deep learning model during the training phase. The testing set was divided into T1, T2, T3, T4, N1, N2, and N3 subsets according to the seventh edition of the AJCC staging system for NPC.[29] After data preprocessing, the U-Net architecture was defined in TensorFlow using the Python application programming interface. To counter overfitting, Dropout was applied in every convolution layer.[30] Parameters were initialized with the Xavier function and a truncated normal distribution with a standard deviation of 0.1.[31] During model training, the learning rate was set to 0.01 in the Adam optimizer.[32] Training ran for 40 iterations, monitored by the cross-entropy descent over the whole validation set.
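The truncated normal initialization mentioned above can be sketched in plain NumPy, resampling values beyond 2 standard deviations as TensorFlow's truncated normal initializer does (the helper name is ours):

```python
import numpy as np

def truncated_normal(shape, std=0.1, seed=0):
    """Sample weights from a zero-mean normal with the given std,
    resampling any value outside 2 standard deviations (mirroring the
    behavior of TensorFlow's truncated normal initializer)."""
    rng = np.random.default_rng(seed)
    w = rng.normal(0.0, std, size=shape)
    mask = np.abs(w) > 2 * std
    while mask.any():
        w[mask] = rng.normal(0.0, std, size=mask.sum())
        mask = np.abs(w) > 2 * std
    return w

w = truncated_normal((3, 3, 64))
print(np.abs(w).max() <= 0.2)  # True: all weights within 2 std of 0
```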

Evaluation of Deep Learning Model

The CT images from the test set were used to evaluate the predictive performance of the U-Net model. The loss value was recorded for each patient in the validation set; it illustrates how the model was trained during the training phase. U-Net performance was evaluated using the DSC and the HD, which quantify the results for GTVnx and GTVnd. The DSC was defined in Equation 2 as follows:

DSC = 2|P ∩ L| / (|P| + |L|)    (2)

where P denotes the segmented area for prediction, L denotes the segmented area for reference, and P ∩ L is the intersection of the 2 areas. The DSC lies between 0 and 1, with 0 representing a complete miss and 1 a perfect prediction. The HD was defined in Equation 3 as follows:

HD(P, L) = max(h(P, L), h(L, P)), where h(P, L) = max_{p ∈ P} min_{l ∈ L} ||p - l||    (3)

where P and L are 2 finite point sets and ||·|| is a norm on the points of P and L (ie, the L2 or Euclidean norm). The term h(P, L) finds the point p of P that is farthest from any point of L and measures the distance from p to its nearest neighbor in L. The HD is the maximum of h(P, L) and h(L, P) and expresses the largest degree of mismatch between P and L; the overlap between P and L increases as the HD decreases.
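Equations 2 and 3 can be implemented directly with NumPy; a minimal sketch assuming binary masks and 2-D point sets (function names are ours):

```python
import numpy as np

def dice(pred, ref):
    """Dice similarity coefficient (Equation 2): 2|P ∩ L| / (|P| + |L|)."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    inter = np.logical_and(pred, ref).sum()
    return 2.0 * inter / (pred.sum() + ref.sum())

def hausdorff(p_points, l_points):
    """Symmetric Hausdorff distance (Equation 3): max(h(P, L), h(L, P)),
    with h(A, B) = the largest distance from a point of A to its nearest
    neighbor in B, using the Euclidean norm."""
    d = np.linalg.norm(p_points[:, None, :] - l_points[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Two overlapping 4 x 4 square masks on an 8 x 8 grid.
pred = np.zeros((8, 8)); pred[2:6, 2:6] = 1
ref = np.zeros((8, 8)); ref[3:7, 3:7] = 1
print(round(dice(pred, ref), 4))  # 0.5625 (overlap 9; 2*9 / (16+16))
print(round(hausdorff(np.argwhere(pred).astype(float),
                      np.argwhere(ref).astype(float)), 4))  # 1.4142
```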

Results

U-Net Model Training

The cross-entropy loss function was used to monitor the U-Net during training. The loss value decreased with epoch number; after 20 epochs, the decrease slowed and the U-Net model stabilized. This was observed in both the training and validation sets. Experiments were carried out on dual Intel Xeon E5-2643 v4 (3.4 GHz) CPUs and dual NVIDIA Tesla K40m graphics cards. The validation set (100 of the 502 patients) was used to evaluate the predictive performance of the U-Net model; the reference segmentation maps labeled by the experienced radiation oncologists were used to calculate the loss value of the target function.

Delineation Results by U-Net Model in Testing Set

After training our model, we used CT images from the test set to perform target delineation. The DSC and HD results for GTVnx and GTVnd in the test set are summarized in Figure 3 and Table 2. The average normalized U-Net DSC values by T stage were 77.24% (T1), 75.38% (T2), 74.13% (T3), and 71.42% (T4), with an overall DSC of 74.00%. The average normalized HD values were 10.36 mm (T1), 11.37 mm (T2), 11.90 mm (T3), and 15.72 mm (T4), with an overall HD of 12.85 mm (Figure 3 and Table 2). DSC values were higher, and HD values lower, for T stage than for N stage: the average normalized DSC values for N stage were 69.07% (N1), 65.32% (N2), and 64.03% (N3), with an overall DSC of 65.86% (Figure 4 and Table 2), and the average normalized HD values for N stage were 31.08 mm (N1), 32.12 mm (N2), and 34.99 mm (N3), with an overall HD of 32.10 mm.
Figure 3.

Target delineation in T stage of NPC by the U-Net model. A, Representative pictures of manual delineation and U-Net delineation; the target region is shown in orange. B, Normalized U-Net dice similarity coefficients by T stage.

Table 2.

The DSC and HD Values for GTVnx and GTVnd Segmentation.

Evaluation Metrics | Primary Tumor Stage                       | Lymph Nodes Stage
                   | T1    | T2    | T3    | T4    | Overall   | N1    | N2    | N3    | Overall
DSC-norm (%)       | 77.24 | 75.38 | 74.13 | 71.42 | 74.00     | 69.07 | 65.32 | 64.03 | 65.86
DSC (%)            | 76.58 | 73.18 | 71.49 | 68.80 | 71.78     | 65.64 | 59.87 | 59.42 | 61.05
HD-norm (mm)       | 10.36 | 11.37 | 11.90 | 15.72 | 12.85     | 31.08 | 32.12 | 34.99 | 32.10
HD (mm)            | 10.43 | 14.10 | 12.37 | 17.02 | 14.24     | 34.37 | 33.33 | 42.98 | 36.15

Abbreviations: DSC, dice similarity coefficient; DSC-norm, dice similarity coefficient with normalization; GTVnd, lymph node gross tumor volume; GTVnx, nasopharynx gross tumor volume; HD, Hausdorff distance; HD-norm, Hausdorff distance with normalization.

Figure 4.

Target delineation in N stage. A, Representative computed tomography scans showing the results of manual delineation and automated delineation using U-Net. The target region is shown in cyan. B, Normalized U-Net dice similarity coefficients by N stage.

There was good overlap in DSC and HD values for GTVnx between the autosegmented contours and the manual contours obtained by physicians. However, the autosegmented contours did not match well in the lymph nodes, especially in patients with N3 disease. We also performed U-Net delineation without normalization to test its impact: DSC values were lower, and HD values higher, without normalization (Table 2).

Time Cost

The time needed to train the U-Net model was about 18 hours using a DELL R730 server with dual Intel Xeon E5-2643 v4 (3.4 GHz) CPUs and dual NVIDIA Tesla K40m graphics cards. The average time for automatic delineation of GTVnx and GTVnd with U-Net was about 40 seconds per patient. U-Net-assisted delineation required an average of 2.6 hours per patient, in contrast to manual delineation, which required an average of 3 hours per patient. A comparison of U-Net-assisted and manual delineation times for 10 physicians is shown in Figure 5.
Figure 5.

Comparison of total delineation time per patient for 10 physicians using manual delineation and U-Net-assisted delineation.


Discussion

Accurate target delineation is the most important step for precise and effective radiotherapy in patients with NPC, but it is time-consuming and varies with the experience of the oncologist. In recent years, automatic target delineation using deep learning algorithms has been increasingly adopted by radiation oncologists. In this study, we used a modified version of a deep learning algorithm called U-Net to automate segmentation and delineation of NPC tumors for radiotherapy. We show that U-Net is able to delineate NPC tumors with high accuracy and reduces the delineation time required of physicians. Many studies have reported that deep learning models can segment tumors with obvious, clear boundaries, such as lung cancer and hepatoma, but the contour and anatomical structure of NPC are more complex than those of other tumor types. In this article, a modified deep learning model for automatic tumor target segmentation of nasopharyngeal cancer is presented. On the one hand, our data showed that the deep learning model achieved better delineation accuracy in early-stage than in advanced-stage disease. Moreover, our model yielded lower DSC and higher HD values for GTVnd than for GTVnx; professional intervention is therefore required where delineation accuracy is unsatisfactory. On the other hand, to assess the impact of the normalization technique, another U-Net without normalization was trained on the same data set for comparison. The experimental results indicated that normalization improved the delineation accuracy of the deep learning model. The main reason is that the original gray-level values introduce larger errors than normalized data during model computation in single-precision floating-point representation. Consistency of target delineation is a key factor affecting clinical outcomes in patients with NPC.
A study in which several oncologists manually delineated identical GTV contours of supraglottic carcinoma reported an interobserver overlap of only 53%.[33] A comparison of CTV delineation among different radiation oncologists reported a DSC value of only 75%.[34] Deep learning methods have been reported to perform better than other methods in many automatic delineation applications. Previous studies using nondeep learning methods reported mean DSC values of 60% to 80% for CTV delineation, whereas automatic delineation based on deep learning gave a mean DSC value of 82.6%.[35-40] However, few studies have investigated their use in delineating complex targets such as NPC. Two previous studies using nondeep learning methods reported DSC values of 69% and 75%, respectively, for head and neck cancer.[41,42] Delineation of lymph nodes is especially difficult: the DSC value of lymph node delineation using atlas-based methods was only 46% in unilateral tonsil cancers.[43] In comparison, we found that U-Net produced overall DSC values of 65.86% for N stage and 74.00% for T stage in patients with NPC. Previous studies did not differentiate delineation by primary tumor stage or lymph node stage; we found that U-Net produced higher DSC values for T stage than for N stage and that DSC values decreased in more advanced cancer. Moreover, we separately analyzed the performance of the U-Net model in each primary tumor stage (T1-T4) and lymph node stage (N1-N3), which helped us identify the model's weaknesses at different stages. Our study has several advantages compared with previous work. First, Sun et al showed that the performance of deep learning models improves as the amount of data increases.[44] The data sets used in most previous studies were smaller than ours.
Moreover, to ensure data set quality, manual target delineation was performed independently by 2 radiation oncologists trained according to the same professional guideline,[45] each with more than 15 years of experience in caring for patients with NPC. Each oncologist was then required to approve the contours drawn by the other. If inconsistent samples were identified, a third specialist in NPC imaging was consulted, who discussed the case with the 2 oncologists until agreement was reached. To a limited extent, this addresses inter- and intraoperator variability in our study. Second, we chose not to use data augmentation, although it can enhance the performance of deep learning models, because we wished to investigate the performance of a model trained on a large, manually labeled data set curated by experienced radiation oncologists. As a result, we found clear evidence that the accuracy of deep learning target delineation depends on cancer stage. Future work should examine the impact of data augmentation on automatic delineation. Third, low-contrast visibility and high noise levels usually lead to ambiguous, blurred boundaries between GTVnx, GTVnd, and normal tissues in CT images, and variation in contrast among slices may affect the robustness of the model. By comparing delineation with and without normalization, we clearly show that normalization improved U-Net performance.

Conclusion

We show that a modified U-Net model can delineate NPC tumor targets with greater consistency and efficiency than manual delineation while reducing the time required per patient. Delineation accuracy was better in early-stage than in advanced-stage disease, and better for the primary tumor than for the lymph nodes. The U-Net may be a useful tool for relieving physician workload and improving treatment outcomes in NPC.