Evi J van Kempen, Max Post, Manoj Mannil, Richard L Witkam, Mark Ter Laan, Ajay Patel, Frederick J A Meijer, Dylan Henssen.
Abstract
OBJECTIVES: Different machine learning algorithms (MLAs) for automated segmentation of gliomas have been reported in the literature. Automated segmentation of different tumor characteristics can be of added value for the diagnostic work-up and treatment planning. The purpose of this study was to provide an overview and meta-analysis of different MLA methods.
Keywords: Glioma; Machine learning; Meta-analysis; Neuroimaging
Year: 2021 PMID: 34019128 PMCID: PMC8589805 DOI: 10.1007/s00330-021-08035-0
Source DB: PubMed Journal: Eur Radiol ISSN: 0938-7994 Impact factor: 5.315
Fig. 1 PRISMA flowchart of the systematic literature search
Participant demographics, study characteristics, and outcomes of the included studies and performance evaluation of MLAs of the included studies
| First author (year of publication) (reference) | Training N | Mean age (years) | M-F | Test N | External validation | Target condition | Dataset | MR sequences | Reference segmentations | Summary of MLA methods | 2D vs. 3D | Subgroups | SN (%) | SP (%) | DSC score (± SD) | Data/code openly available? |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Kamnitsas et al (2017) [ | 274 | NR | NR | 110 | No | HGG and LGG | BraTS 2015 | T1w, T2w, T1w c+, and FLAIR images | BraTS segmentations | 3D CNN with two-scale extracted Features and 3D dense Conditional Random Field as postprocessing | 3D | Whole tumor | 88 | NR | 0.85 | Y/Y |
| Contrast enhancing tumor | 67 | NR | 0.63 | |||||||||||||
| Tumor core | 60 | NR | 0.67 | |||||||||||||
| 80 | NR | NR | 80 | No | HGG and LGG | BraTS 2012 | FLAIR images | BraTS segmentations | A specific region of interest (ROI) that contains tumor was identified and then the intensity non-uniformity in ROI was corrected via the histogram normalization and intensity scaling. Each voxel in ROI was presented using 22 features and then was categorized as tumor or non-tumor by a multiple classifier system | 3D | Simulated data | 84.0 | 98.0 | 0.81 ± 0.10 | Y/N | |
| Real data | 89.0 | 98.0 | 0.80 ± 0.10 | |||||||||||||
| Banerjee et al (2020) [ | 285 | NR | NR | 66 | No | HGG and LGG | BraTS 2018 | T1w, T2w, T1w c+, and FLAIR images | BraTS segmentations | Encoder-decoder type CNN model combined with a consensus fusion strategy with a fully connected Conditional random field-based post-refinement | 3D | Whole tumor | 91.4 | 99.3 | 0.902 | Y/Y |
| Contrast enhancing tumor | 86.9 | 99.7 | 0.824 | |||||||||||||
| Central tumor necrosis | 87.4 | 99.7 | 0.872 | |||||||||||||
| Bonte et al (2018) [ | 287 | NR | NR | 285 | Yes | HGG, LGG, and other tumor types (e.g., meningioma, ependymoma ) | BraTS 2013, BraTS 2017, and original data | T1w c+, and FLAIR images | BraTS segmentations | Random Forests model combining voxel-wise texture and abnormality features on 275 feature maps | 3D | LGG – whole tumor | NR | NR | 0.684 | Y/N |
| LGG – tumor core | NR | NR | 0.409 | |||||||||||||
| HGG – whole tumor | NR | NR | 0.801 | |||||||||||||
| HGG- tumor core | NR | NR | 0.750 | |||||||||||||
| 45 | 58.7 | 24–21 | 46 | Yes | HGG | Original data, TCIA data, and TCGA data | T2w images | Manual segmentations made by two experienced radiologists | V-Net model using 3D input and output which uses convolution with a stride of factor 2 instead of max-pooling | 3D | Tumor + peritumoral edema | NR | NR | 0.78 ± 0.14 | Y/Y | |
| Cui et al (2018) [ | 240 | NR | NR | 34 | No | HGG and LGG | BraTS 2015 | T1w, T2w, T1w c+, and FLAIR images | BraTS segmentations | Fully convolutional network in conjunction with the transfer learning technology combined with a CNN with deeper architecture and smaller kernel to label a defined tumor region into multiple subregions | 2D | Whole tumor | NR | NR | 0.89 | Y/N |
| Hasan et al (2018) [ | 285 | NR | NR | 146 | No | HGG and LGG | BraTS 2017 and original data | T1w, T2w, T1w c+, and FLAIR images | BraTS segmentations | Nearest neighbor re-sampling based elastic-transformed U-net deep CNN framework | 2D | HGG | NR | NR | 0.899 | Y/N |
| LGG | NR | NR | 0.846 | |||||||||||||
| Combined | NR | NR | 0.872 | |||||||||||||
| Havaei et al (2017) [ | 30 | NR | NR | 10 | No | HGG and LGG | BraTS 2013 | T1w, T2w, T1w c+, and FLAIR images | BraTS segmentations | A CNN with two pathways of both local and global information | 3D | Whole tumor | 84 | 88 | 0.840 | Y/N |
| Contrast enhancing tumor | 68 | 54 | 0.570 | |||||||||||||
| Tumor core | 72 | 79 | 0.710 |
| Havaei et al (2016) [ | 30 | NR | NR | 10 | No | HGG and LGG | BraTS 2013 | T2w, T1w c+, and FLAIR images | BraTS segmentations | A cascade neural network architecture in which the output of a basic CNN is treated as an additional source of information for a subsequent CNN | 3D | PKSVM-CRF | 78 | 88 | 0.86 | |
| KSVM-CRF | 82 | 87 | 0.84 | |||||||||||||
| kNN-CRF | 78 | 91 | 0.85 | |||||||||||||
| Hussain et al (2017) [ | 30 | NR | NR | NR | No | HGG and LGG | BraTS 2013 | T1w, T2w, T1w c+, and FLAIR images | BraTS segmentations | Deep cascaded convolutional neural networks | 2D | Whole tumor | 82 | 85 | 0.80 | Y/N |
| Contrast enhancing tumor | 57 | 60 | 0.57 | |||||||||||||
| Tumor core | 63 | 82 | 0.67 | |||||||||||||
| Iqbal et al (2019) [ | 274 | NR | NR | 110 | No | HGG and LGG | BraTS 2015 | T1w, T2w, T1w c+, and FLAIR images | BraTS segmentations | Combination of CNN- and long short-term memory models | 2D | Whole tumor | NR | NR | 0.823 | Y/N |
| Iqbal et al (2018) [ | 274 | NR | NR | 110 | No | HGG and LGG | BraTS 2015 | T1w, T2w, T1w c+, and FLAIR images | BraTS segmentations | CNN Model | 2D | SkipNet** | 83 | 73 | 0.87 | Y/N |
| SENet** | 86 | 83 | 0.88 | |||||||||||||
| IntNet** | 86 | 73 | 0.90 | |||||||||||||
| 80 | NR | NR | 23 | No | HGG and LGG | BraTS 2012 | T1w, T2w, T1w c+, and FLAIR images | BraTS segmentations | Method exploiting a global classifier (trained using samples from the population feature set) and a custom classifier (trained using samples from seed points in the testing image); the outputs of these two classifiers are weighted and then combined | 3D | Whole tumor | 87.2 | 83.1 | 0.845 ± 0.09 | Y/N |
| Kao et al (2019) [ | 285 | NR | NR | 66 | No | HGG and LGG | BraTS 2017 and BraTS 2018 | T1w, T2w, T1w c+, and FLAIR images | BraTS segmentations | 3D CNN with two-scale extracted features and 3D dense conditional random field as postprocessing combined with a separate 3D U-Net | 3D | Whole tumor | NR | NR | 0.908 | Y/N |
| Contrast enhancing tumor | NR | NR | 0.782 | |||||||||||||
| Tumor core | NR | NR | 0.823 | |||||||||||||
| Li et al (2017) [ | 59 | NR | NR | 101 | No | LGG | Original data | FLAIR images | Manual segmentations made by two experienced neurosurgeons | 3D CNN with two-scale extracted features and 3d dense conditional random field as postprocessing | 3D | Whole tumor | 88.9 | NR | 0.802 | N/N |
| 200 | NR | NR | 74 | No | HGG and LGG | BraTS 2015 | T1w, T2w, and FLAIR images | BraTS segmentations | 3D patch-based fully convolution network adopting the architecture of V-Net | 3D | Whole tumor | NR | NR | 0.87 ± 0.06 | Y/N | |
| Meng et al (2018) [ | 154 | NR | NR | 22 | No | HGG and LGG | BraTS 2015 | T1w, T2w, T1w c+, and FLAIR images | BraTS segmentations | Light noise suppression U-network to achieve end-to-end learning without elaborate pre-processing and postprocessing | 2D | Whole tumor | 82 | 74 | 0.89 | Y/N |
| Naceur et al (2018) [ | 285 | NR | NR | NR | No | HGG and LGG | BraTS 2017 | T1w, T2w, T1w c+, and FLAIR images | BraTS segmentations | Three end-to-end incremental deep convolutional neural network models | 2D | Whole tumor | 82 | 74 | 0.89 | Y/N |
| Naser et al (2020) [ | 110 | 46 | 54-56 | 110 | No | LGG | TCIA | T1w, T1w c+, and FLAIR images | Manual segmentations made by the investigators | A deep learning approach combining a U-Net-based CNN for tumor segmentation with transfer learning (a pre-trained VGG16 convolutional base plus a fully connected classifier) for tumor grading | 3D | Whole tumor | NR | NR | 0.84 | Y/N |
| * | NR | NR | 64 | Yes | HGG | Original data | T1w, T2w, T1w c+, and FLAIR images | Manual segmentations made by the investigators following the BraTS challenge workflow | 3D CNN with two-scale extracted features and 3D dense Conditional random Field as postprocessing | 3D | Whole tumor | 84 | NR | 0.86 ± 0.09 | Y/N | |
| Contrast enhancing tumor | 78 | NR | 0.78 ± 0.15 | |||||||||||||
| Central tumor necrosis | 57 | NR | 0.62 ± 0.30 | |||||||||||||
| Razzak et al (2019) [ | 285 | NR | NR | 110 | No | HGG and LGG | BraTS 2013 and BraTS 2015 | T1w, T2w, T1w c+, and FLAIR images | BraTS segmentations | Two-pathway CNN that simultaneously accommodates global and local features and embeds additional transformations by applying not only translations but also rotations and reflections to the filters, which increases the degree of weight sharing | 2D | Whole tumor | 88.3 | NR | 0.892 | Y/N |
| Savareh et al (2019) [ | 274 | NR | NR | NR | No | HGG and LGG | BraTS 2015 | T1w, T2w, T1w c+, and FLAIR images | BraTS segmentations | Fully convolutional network was selected to implement the wavelet-enhanced fully convolutional network model | 3D | Whole tumor | 93 | 99 | 0.918 | Y/N |
| 11 | 53 | NR | 11 | No | HGG and LGG | BraTS 2013 and original data | T1w, T2w, T1w c+, FLAIR, and DTI images | Segmentations derived from the BraTS dataset combined with manual segmentations made by the investigators following the BraTS challenge workflow | 3D supervoxel-based learning method. Supervoxels are generated using information across the multimodal MRI dataset. For each supervoxel, a variety of features, including histograms of texton descriptors (calculated using a set of Gabor filters with different sizes and orientations) and first-order intensity statistics, are extracted. These features are fed into a random forests classifier to classify each supervoxel as tumor core, edema, or healthy brain tissue. | 3D | Whole tumor | NR | NR | 0.84 ± 0.06 | Y/N |
| Sun et al (2019) [ | 274 | NR | NR | 110 | No | HGG and LGG | BraTS 2015 | T1w, T2w, T1w c+, and FLAIR images | BraTS segmentations | 3D CNN-based method | 3D | Whole tumor | 89 | NR | 0.84 | Y/N |
| Contrast enhancing tumor | 69 | NR | 0.62 | |||||||||||||
| Wang et al (2018) [ | 100 | NR | NR | NR | No | HGG and LGG | NR | NR | NR | 3D-CNN Model | 3D | Whole tumor | NR | NR | 0.916 | N/N |
| Wu et al (2020) [ | 285 | NR | NR | 66 | No | HGG and LGG | BraTS 2017 | T1w, T2w, T1w c+, and FLAIR images | BraTS segmentations | 2D U-Nets | 2D | Whole tumor | NR | NR | 0.91 | Y/Y |
| Contrast enhancing tumor | NR | NR | 0.80 | |||||||||||||
| Tumor core | NR | NR | 0.83 | |||||||||||||
| 228 | NR | NR | 57 | No | HGG and LGG | BraTS 2017 | T2w images | BraTS segmentations | An adaptive superpixel generation algorithm based on simple linear iterative clustering version with 0 parameter (ASLIC0) was used to acquire a superpixel image with fewer superpixels and better fit the boundary of ROI by automatically selecting the optimal number of superpixels. | 2D | Whole tumor | 81.5 | 99.6 | 0.849 ± 0.07 | Y/N | |
| Yang et al (2019) [ | 255 | NR | NR | 30 | No | HGG and LGG | BraTS 2017 | T1w, T2w, T1w c+, and FLAIR images | BraTS segmentations | U-net | 2D | Whole tumor | 90.6 | NR | 0.883 ± 0.06 | Y/N |
| Contrast enhancing tumor | 79.2 | NR | 0.784 ± 0.10 | |||||||||||||
| Tumor core | 88.3 | NR | 0.781 ± 0.10 | |||||||||||||
| Yang et al (2019) [ | 274 | NR | NR | 274 | No | HGG and LGG | BraTS 2015 | T1w, T2w, T1w c+, and FLAIR images | BraTS segmentations | Two-pathway convolutional neural network combined with random forests | 2D | SK-TPCNN – Whole tumor | 95 | NR | 0.86 | Y/N |
| SK-TPCNN – contrast-enhancing tumor | 76 | NR | 0.81 | |||||||||||||
| SK-TPCNN – tumor core | 91 | NR | 0.74 | |||||||||||||
| SK-TPCNN + RF – whole tumor | 96 | NR | 0.89 | |||||||||||||
| SK-TPCNN + RF – contrast-enhancing tumor | 83 | NR | 0.87 | |||||||||||||
| SK-TPCNN + RF – Tumor core | 92 | NR | 0.80 | |||||||||||||
| Yang et al (2020) [ | 274 | NR | NR | 274 | No | HGG and LGG | BraTS 2015 | T1w, T2w, T1w c+, and FLAIR images | BraTS segmentations | 2D-CNN Model | 2D | Whole tumor | 88 | NR | 0.90 | Y/N |
| Contrast enhancing tumor | 84 | NR | 0.88 | |||||||||||||
| Tumor core | 82 | NR | 0.82 | |||||||||||||
| 30 | NR | NR | 30 | No | HGG | BraTS 2012 | T1w, T2w, T1w c+, and FLAIR images | BraTS segmentations | Semi-automatic Constrained Markov random field pixel labeling | 3D | HGG | NR | NR | 0.835 ± 0.089 | Y/N | |
| LGG | NR | NR | 0.848 ± 0.087 |
| Zhou et al (2020) [ | 285 | NR | NR | 66 | No | HGG and LGG | BraTS 2013, BraTS 2015, and BraTS 2018 | T1w, T2w, T1w c+, and FLAIR images | BraTS segmentations | 3D dense connectivity model | 3D | Whole tumor | NR | NR | 0.864 | Y/N |
| Contrast-enhancing tumor | NR | NR | 0.753 | |||||||||||||
| Tumor core | NR | NR | 0.774 | |||||||||||||
| Zhuge et al (2017) [ | 20 | NR | NR | 10 | Yes | HGG | BraTS 2013 dataset and original data | T1w, T2w, T1w c+, and FLAIR images | Segmentations derived from the BraTS dataset [ | Holistically nested CNN model | 2D | Whole tumor | 85.0 | NR | 0.83 | Y/Y |
| Dong et al (2017) [ | 274 | NR | NR | NR | No | HGG | BraTS 2015 | T1w, T2w, T1w c+, and FLAIR images | BraTS segmentations | U-Net based deep convolutional networks | 3D | LGG - whole tumor | NR | NR | 0.84 | Y/N |
| LGG – tumor core | NR | NR | 0.85 | |||||||||||||
| HGG – whole tumor | NR | NR | 0.88 | |||||||||||||
| HGG – contrast-enhancing tumor | NR | NR | 0.81 | |||||||||||||
| HGG – tumor core | NR | NR | 0.87 | |||||||||||||
| 163 | NR | NR | 25 | No | HGG and LGG | BraTS 2013 | T1w, T2w, T1w c+, and FLAIR images | BraTS segmentations | Structured prediction was used together with a CNN | 3D | LGG - whole tumor | NR | NR | 0.85 ± 0.06 | Y/N | |
| LGG – tumor core | NR | NR | 0.65 ± 0.15 | |||||||||||||
| HGG – whole tumor | NR | NR | 0.80 ± 0.17 | |||||||||||||
| HGG – contrast-enhancing tumor | NR | NR | 0.81 ± 0.11 | |||||||||||||
| HGG – tumor core | NR | NR | 0.85 ± 0.08 | |||||||||||||
| Lyksborg et al (2015) [ | 91 | NR | NR | 40 | No | HGG and LGG | BraTS 2014 | T1w, T2w, T1w c+, and FLAIR images | BraTS segmentations | An ensemble of 2D CNNs with a three-step volumetric segmentation | 2D | Whole tumor | 82.5 | NR | 0.810 | Y/N |
| Pereira et al (2016) [ | 30 | NR | NR | NR | No | HGG and LGG | BraTS 2013 | T1w, T2w, T1w c+, and FLAIR images | BraTS segmentations | A CNN with small 3 × 3 kernels | 2D | Whole tumor | 86 | NR | 0.88 | Y/Y |
| Pinto et al (2015) [ | 40 | NR | NR | 10 | No | HGG and LGG | BraTS 2013 | T1w, T2w, T1w c+, and FLAIR images | BraTS segmentations | Using appearance- and context-based features to feed an extremely randomized forest | 2D | Whole tumor | 82 | NR | 0.83 | Y/N |
| Contrast-enhancing tumor | 79 | NR | 0.73 | |||||||||||||
| Tumor core | 75 | NR | 0.78 | |||||||||||||
| Tustison et al (2015) [ | 30 | NR | NR | 10 | No | HGG and LGG | BraTS 2013 | T1w, T2w, T1w c+, and FLAIR images | BraTS segmentations | Combine a random forest model with a framework of regularized probabilistic segmentation | 2D | Whole tumor | 89 | NR | 0.87 | Y/Y |
| Contrast-enhancing tumor | 83 | NR | 0.74 | |||||||||||||
| Tumor core | 88 | NR | 0.78 | |||||||||||||
| Usman and Rajpoot (2017) [ | 30 | NR | NR | NR | No | HGG and LGG | BraTS 2013 | T1w, T2w, T1w c+, and FLAIR images | BraTS segmentations | Automated wavelet-based features + a random forest classifier | 3D | Whole tumor | NR | NR | 0.88 | Y/N |
| Contrast-enhancing tumor | NR | NR | 0.95 | |||||||||||||
| Tumor core | NR | NR | 0.75 | |||||||||||||
| Xue et al (2017) [ | 274 | NR | NR | NR | No | HGG and LGG | BraTS 2015 | T1w, T2w, T1w c+, and FLAIR images | An end-to-end adversarial neural network | 2D | Whole tumor | 80 | NR | 0.85 | Y/Y | |
| Contrast-enhancing tumor | 62 | NR | 0.66 | |||||||||||||
| Tumor core | 65 | NR | 0.70 | |||||||||||||
| 30 | NR | NR | 10 | No | HGG | BraTS 2012 | T1w, T2w, T1w c+, and FLAIR images | BraTS segmentations | Apply a CNN in a sliding-window fashion in the 3D space | 3D | Whole tumor | NR | NR | 0.90 ± 0.09 | Y/N | |
| Contrast-enhancing tumor | NR | NR | 0.85 ± 0.09 | |||||||||||||
| Necrotic tumor core | NR | NR | 0.75 ± 0.16 | |||||||||||||
| Peritumoral edema | NR | NR | 0.80 ± 0.18 | |||||||||||||
Studies included in the meta-analysis were italicized
BraTS, Brain Tumor Image Segmentation Benchmark; CNN, convolutional neural network; DSC, Dice similarity coefficient; kNN-CRF, k-nearest neighbor conditional random fields; KSVM-CRF, kernel support vector machine with RBF kernel conditional random fields; LSTM, long short-term memory; MLA, machine learning algorithm; N, no; NR, not reported; PKSVM-CRF, proposed product kernel support vector machine conditional random fields; SD, standard deviation; SK-TPCNN (+RF), small kernels two-path convolutional neural network (+ random forests); SN, sensitivity; SP, specificity; TCIA, The Cancer Imaging Archive; TCGA, The Cancer Genome Atlas; Y, yes. *The deep learning model is based on the recently published DeepMedic architecture, which provided top-scoring results on the BraTS dataset [17]. **Data separated by LGG and HGG for each network are available in the original paper
For more information on the multivendor BraTS dataset, see Menze et al [11]. Please note that the ground truth of BraTS 2015 was first produced by algorithms and then verified by annotators; in contrast, the ground truth of BraTS 2013 fused multiple manual annotations
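The SN, SP, and DSC values tabulated above can all be derived from the voxel-wise overlap between a predicted and a reference segmentation mask. As a minimal illustrative sketch (not code from any of the included studies; the function and variable names are our own), in Python/NumPy:

```python
import numpy as np

def overlap_metrics(pred, truth):
    """Sensitivity, specificity, and Dice similarity coefficient (DSC)
    for two binary segmentation masks of equal shape."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.sum(pred & truth)    # voxels labeled tumor by both
    tn = np.sum(~pred & ~truth)  # voxels labeled background by both
    fp = np.sum(pred & ~truth)   # predicted tumor, actually background
    fn = np.sum(~pred & truth)   # predicted background, actually tumor
    sn = tp / (tp + fn)                # sensitivity (recall)
    sp = tn / (tn + fp)                # specificity
    dsc = 2 * tp / (2 * tp + fp + fn)  # Dice similarity coefficient
    return sn, sp, dsc

# toy 1-D "masks" standing in for 3-D MRI volumes
pred = np.array([1, 1, 1, 0, 0, 0])
truth = np.array([1, 1, 0, 1, 0, 0])
sn, sp, dsc = overlap_metrics(pred, truth)
```

Note that specificity is computed over the (large) healthy-tissue background, which is why the SP values in the table cluster near 100% even for rows with modest DSC scores.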
Fig. 2 Forest plot of the included studies that assessed the accuracy of glioma segmentation. Legend: DSC, Dice similarity coefficient; CI, confidence interval. The forest plot shows that the performance of the MLAs in segmenting gliomas is centered around a DSC of 0.837 (95% CI 0.820–0.855)
Fig. 3 Forest plot of the included studies that assessed the accuracy of high-grade glioma segmentation. Legend: DSC, Dice similarity coefficient; CI, confidence interval. The forest plot shows that the performance of the MLAs in segmenting HGGs is centered around a DSC of 0.834 (95% CI 0.802–0.867)
Fig. 4 Forest plot of the included studies that assessed the accuracy of low-grade glioma segmentation. Legend: DSC, Dice similarity coefficient; CI, confidence interval. The forest plot shows that the performance of the MLAs in segmenting LGGs is centered around a DSC of 0.823 (95% CI 0.776–0.870)
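The pooled DSC estimates and confidence intervals reported in the forest plots follow from standard inverse-variance meta-analytic pooling. As an illustration of the principle only (a fixed-effect simplification with made-up numbers, not the study's actual random-effects model or data):

```python
import math

def pool_fixed_effect(estimates, std_errors, z=1.96):
    """Inverse-variance (fixed-effect) pooling of per-study estimates.
    Returns the pooled estimate and its approximate 95% confidence interval."""
    weights = [1.0 / se ** 2 for se in std_errors]  # precision weights
    pooled = sum(w * x for w, x in zip(weights, estimates)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))       # SE of the pooled estimate
    return pooled, (pooled - z * se_pooled, pooled + z * se_pooled)

# three hypothetical studies: per-study DSC and standard error
dsc = [0.85, 0.81, 0.88]
se = [0.02, 0.03, 0.02]
pooled, (lo, hi) = pool_fixed_effect(dsc, se)
```

A random-effects model (e.g., DerSimonian–Laird) additionally adds an estimated between-study variance to each weight's denominator, widening the interval when studies disagree.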
Fig. 5 Funnel plot of the included studies. Legend: DSC, Dice similarity coefficient; SE, standard error. The DSC score is displayed on the horizontal axis as the effect size; the SE is plotted on the vertical axis of the funnel plot
Overview of the studies on post-operative glioma segmentation
| First author (year of publication) (reference) | Training N | Mean age (years) | M-F | Test N | External validation | Target condition | Dataset | MR sequences | Reference segmentations | Summary of MLA methods | 2D vs. 3D | Subgroups | SN (%) | SP (%) | DSC score (± SD) | Data/code openly available? |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Herrmann et al (2020) [ | 30 | NR | NR | 30 | No | Brain resection cavity delineation | Original data | T1w, T2w, T1w c+, and FLAIR images | Manual segmentations made by three experienced radiation oncology experts. To improve inter-rater consistency, the raters were instructed by an experienced neuroradiologist. | A fully convolutional, densely connected architecture building on the idea of DenseNet | 3D | NA | NR | NR | 0.83 | N/N |
| Meier et al (2016) [ | 14 | NR | NR | 14 | No | Brain volume delineation during and after therapy with neurosurgery, radiotherapy, chemotherapy, and/or anti-angiogenic therapy | Original data | T1w, T2w, T1w c+, and FLAIR images | Manual segmentations made by two raters (one experienced, one inexperienced); this table only represents the overlap between the MLA and the experienced rater | Machine learning–based framework using voxel-wise tissue classification for automated segmentation | 2D | Non- enhancing T2 hyperintense tissue | NR | NR | 0.673 | N/N |
| Contrast-enhancing T2 hyperintense tissue | NR | NR | 0.183 | |||||||||||||
| Zeng et al (2016) [ | 218 | NR | NR | 191 | No | Segmenting post-operative scans | BraTS 2016 and original data | T1w, T2w, T1w c+, and FLAIR images | BraTS segmentations | A hybrid generative-discriminative model was used. Firstly, a generative model based on a joint segmentation-registration framework was used to segment the brain scans into cancerous and healthy tissues. Secondly, a gradient boosting classification scheme was used to refine tumor segmentation based on information from multiple patients. | 3D | Post-operative HGG – Whole tumor | NR | NR | 0.72 | N/N |
| Post-operative HGG – contrast-enhancing tumor | NR | NR | 0.49 | |||||||||||||
| Post-operative HGG – tumor core | NR | NR | 0.57 | |||||||||||||
| Tang et al (2020) [ | 59 | 41.2 ± 12.6 | 32-27 | 15 | No | Post-operative glioma segmentation in CT images | Original data | T1w, T2w, T1w c+, and FLAIR images | Manual segmentations made by one experienced radiation oncology expert | DFFM, a multi-sequence MRI–guided CNN that iteratively learns deep features from CT images and multi-sequence MR images simultaneously via a multi-channel CNN architecture and then combines the two sets of deep features to produce the segmentation result; the whole network is optimized end-to-end via standard back-propagation | 3D | NA | NR | NR | 0.818 | N/N |
BraTS, Brain Tumor Image Segmentation Benchmark; CNN, convolutional neural network; DSC, Dice similarity coefficient; MLA, machine learning algorithm; N, no; NA, not applicable; NR, not reported; SD, standard deviation; SN, sensitivity; SP, specificity; Y, yes