Yufeng Ye1,2, Zongyou Cai3,4, Bin Huang3,4, Yan He5,6, Ping Zeng7, Guorong Zou5,6, Wei Deng1,2, Hanwei Chen1,2, Bingsheng Huang2,3.
Abstract
In this study, we proposed an automated method based on a convolutional neural network (CNN) for nasopharyngeal carcinoma (NPC) segmentation on dual-sequence magnetic resonance imaging (MRI). T1-weighted (T1W) and T2-weighted (T2W) MRI images were collected from 44 NPC patients. We developed a dense connectivity embedding U-net (DEU), trained it on the two-dimensional dual-sequence MRI images in the training dataset, and applied post-processing to remove false-positive results. To assess the effectiveness of dual-sequence input, we performed an experiment with different inputs in eight randomly selected patients. We evaluated DEU's performance using a 10-fold cross-validation strategy and compared the results with previous studies. The Dice similarity coefficients (DSC) of 10-fold cross-validation using only T1W, only T2W, and dual-sequence inputs were 0.620 ± 0.064, 0.642 ± 0.118, and 0.721 ± 0.036, respectively. The median DSC in the 10-fold cross-validation experiment with DEU was 0.735. The average DSC of seven external subjects was 0.87. In summary, we proposed and verified a fully automatic NPC segmentation method based on DEU and dual-sequence MRI images with accurate and stable performance. If further validated, the proposed method could be of use in the clinical practice of NPC.
Keywords: convolutional neural networks; dual-sequence; magnetic resonance image; nasopharyngeal carcinoma; segmentation
Year: 2020 PMID: 32154168 PMCID: PMC7045897 DOI: 10.3389/fonc.2020.00166
Source DB: PubMed Journal: Front Oncol ISSN: 2234-943X Impact factor: 6.244
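The abstract states that post-processing was applied to remove false-positive results, but the exact rule is not reproduced in this record. A common choice for this step is to keep only the largest connected foreground component of the predicted binary mask; the pure-NumPy sketch below illustrates that idea (the function name, the 4-connectivity, and the largest-component rule itself are assumptions, not necessarily the paper's implementation):

```python
import numpy as np
from collections import deque

def keep_largest_component(mask):
    """Keep only the largest 4-connected foreground region of a
    2-D binary mask, discarding smaller (likely false-positive)
    blobs. Illustrative sketch; the paper's exact rule may differ."""
    mask = mask.astype(bool)
    h, w = mask.shape
    labels = np.zeros((h, w), dtype=int)
    sizes = {}
    current = 0
    for i in range(h):
        for j in range(w):
            if mask[i, j] and labels[i, j] == 0:
                # flood-fill a new component with BFS
                current += 1
                labels[i, j] = current
                q, size = deque([(i, j)]), 0
                while q:
                    y, x = q.popleft()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = current
                            q.append((ny, nx))
                sizes[current] = size
    if not sizes:
        return mask  # empty mask: nothing to filter
    biggest = max(sizes, key=sizes.get)
    return labels == biggest

# toy mask: a 2-pixel blob (top-left) and a 3-pixel blob (right column)
m = np.array([[1, 1, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 1],
              [0, 0, 0, 1]])
print(keep_largest_component(m).astype(int))  # only the 3-pixel blob survives
```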
Figure 1 Architecture of the proposed CNN model. N × N × C: N is the size of the feature map and C is the number of feature maps. N × N Conv: convolutional layer with N × N kernel size; K = N: N is the number of growth filters; N × N Average pooling: average pooling layer with N × N kernel size; Concate layer: concatenation layer; ReLU: rectified linear unit.
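The dense connectivity that DEU embeds into the U-net follows the DenseNet pattern: each layer receives the channel-wise concatenation of all preceding feature maps and contributes K new "growth" maps. The shape-only NumPy sketch below shows how the channel count grows; `conv_stub` is a purely illustrative stand-in for the learned convolution + ReLU, not the paper's actual layer:

```python
import numpy as np

def conv_stub(x, k):
    """Stand-in for a learned conv + ReLU that emits k feature maps.
    Hypothetical: it only reproduces the output *shape*."""
    return np.repeat(x.mean(axis=2, keepdims=True), k, axis=2)

def dense_block(x, num_layers, growth_k):
    """Dense connectivity: every layer sees the concatenation of all
    preceding feature maps and adds growth_k new ones."""
    features = [x]
    for _ in range(num_layers):
        concat = np.concatenate(features, axis=2)  # channel-wise concat
        features.append(conv_stub(concat, growth_k))
    return np.concatenate(features, axis=2)

x = np.zeros((8, 8, 4))                      # 8x8 input with 4 channels
y = dense_block(x, num_layers=3, growth_k=12)
print(y.shape)  # (8, 8, 40): 4 input channels + 3 layers x 12 growth maps
```

The concatenation (rather than summation) is what distinguishes dense connectivity from residual connections: earlier feature maps stay directly available to every later layer.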
Comparisons of segmentation performance (mean ± SD) between different MRI sequences using the 10-fold cross-validation strategy.

| Input | DSC | | |
| --- | --- | --- | --- |
| T1W | 0.620 ± 0.064 | 0.642 ± 0.070 | 0.654 ± 0.072 |
| T2W | 0.642 ± 0.118 | 0.654 ± 0.115 | 0.688 ± 0.146 |
| T1W+T2W | 0.721 ± 0.036 | 0.712 ± 0.045 | 0.768 ± 0.045 |
DSC, Dice similarity coefficient.
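The DSC reported throughout these tables is defined as 2|A∩B| / (|A| + |B|) for a predicted mask A and the gold standard B. A minimal NumPy implementation (the convention of returning 1.0 when both masks are empty is an assumption, not stated in the paper):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|) for binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treated as perfect overlap (assumption)
    return 2.0 * np.logical_and(pred, truth).sum() / denom

a = np.array([[1, 1, 0], [0, 1, 0]])  # toy prediction
b = np.array([[1, 0, 0], [0, 1, 1]])  # toy gold standard
print(dice_coefficient(a, b))  # 2*2 / (3+3) -> 0.6666666666666666
```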
Figure 2 An example of the segmentation results with T1W only, T2W only, and dual-sequence images. (A) T1W image. (B) Automatic segmentation result with T1W image only (green line) and gold standard (red line) presented on the T1W image. Part of the lesion presented lower signal intensity in the T1W image (arrow). (C) Automatic segmentation result with dual-sequence images (blue line) and gold standard (red line) presented on the T1W image. (D) T2W image. (E) Automatic segmentation result with T2W image only (yellow line) and gold standard (red line) presented on the T2W image. Some normal tissue beside the tumor presented high signal intensity compared with the surrounding tissue (arrow). (F) Automatic segmentation result with dual-sequence images (blue line) and gold standard (red line) presented on the T2W image.
Figure 3 Two typical examples of NPC segmentation with low accuracy. The Dice similarity coefficients (DSC) of the first and second rows are 0.610 and 0.467, respectively. (A,C) Automatic segmentation result with dual-sequence images (green line) and gold standard (red line) presented on the T1W image. (B,D) Automatic segmentation result with dual-sequence images (green line) and gold standard (red line) presented on the T2W image.
Comparisons of segmentation performance between our proposed CNN model and similar studies.

| Study | Method | Modality | DSC | Patients | Journal, year |
| --- | --- | --- | --- | --- | --- |
| Deng et al. | SVM | DCE-MRI | 0.862 | 120 | Contrast Media and Molecular Imaging, 2018 |
| Song et al. | Graph-based cosegmentation | PET | 0.761 | 2 | IEEE Transactions on Medical Imaging, 2013 |
| Yang et al. | MRFs | PET, CT, MRI | 0.740 | 22 | Medical Physics, 2015 |
| Stefano et al. | AK-RW | PET | 0.848 | 18 | Medical and Biological Engineering and Computing, 2017 |
| Wang et al. | CNN | MRI | 0.725 | 15 | Neural Processing Letters, 2018 |
| Ma et al. | CNNs + 3D graph cut | MRI | 0.851 | 30 | Experimental and Therapeutic Medicine, 2018 |
| Men et al. | DDNN | CT | 0.716 | 230 | Frontiers in Oncology, 2017 |
| Li et al. | CNN | CE-MRI | 0.890 | 29 | Biomed Research International, 2018 |
| Huang et al. | CNN | PET-CT | 0.736 | 22 | Contrast Media and Molecular Imaging, 2018 |
| Ma et al. | C-CNN | CT-MRI | 0.746 | 90 | Physics in Medicine and Biology, 2019 |
| Proposed method | CNN | Dual-sequence MRI | 0.721 | 44 | – |
DSC, Dice similarity coefficient; SVM, support vector machine; DCE-MRI, dynamic contrast-enhanced magnetic resonance imaging; MRFs, Markov random fields; AK-RW, adaptive random walker with k-means; CNN, convolutional neural network; DDNN, deep deconvolutional neural network; C-CNN, combined convolutional neural network.