Jayendra M Bhalodiya, Sarah N Lim Choi Keung, Theodoros N Arvanitis.
Abstract
Background: Image segmentation is an essential step in the analysis and subsequent characterisation of brain tumours through magnetic resonance imaging. In the literature, segmentation methods are empowered by open-access magnetic resonance imaging datasets, such as the brain tumour segmentation dataset. Moreover, with the increased use of artificial intelligence methods in medical imaging, access to larger data repositories has become vital in method development. Purpose: To determine which automated brain tumour segmentation techniques medical imaging specialists and clinicians can use to identify tumour components, compared to manual segmentation.
Keywords: Brain tumour; artificial intelligence; brain; magnetic resonance imaging; segmentation; systematic review
Year: 2022 PMID: 35340900 PMCID: PMC8943308 DOI: 10.1177/20552076221074122
Source DB: PubMed Journal: Digit Health ISSN: 2055-2076
Figure 1.PRISMA diagram. PRISMA diagram of the systematic review of brain tumour segmentation methods.
Figure 2. Number of articles (2015–2020). The bar plot represents the number of articles published over the review period (2015–2020), and pie charts depict the published articles in each category of technical method in each corresponding year. Total articles = 223 refers to the articles included in the synthesis.
Deep architectures and their extensions used in tumour segmentation.
| Deep architecture |
|---|
| CNN |
| VGG |
| DeepMedic |
| U-Net |
| Autoencoder |
| GAN |
| W-net and cascade of W-net, E-net and T-net |
| SENet |
| Multiresolution neural network |
| HED |
| Multi-level upsampling network |
| V-net |
| ResNet |
| Hourglass network |
| MvNet |
| DeepSCAN |
| Densely connected |
| Inception |
| Ensemble net |
| PixelNet |
| ContextNet |
| Dense neural network |
| MC-Net |
| OM-Net |
| ConvNet |
| WRN-PPNet |
| Deep convolutional network |
| Neuromorphic neural network |
| DeepLabv3+ |
| Recurrent neural network |
| DKFZ |
| EMMA |
| SegNet |
| ResNeXt |
| DenseAFPNet |
| DMFNet |
| P-Net |
| MFNet |
| HNF-Net |
| Deep neural network |
| D2C2N |
| DeepSeg |
CNN: convolutional neural network; VGG: visual geometry group; GAN: generative adversarial network; SENet: squeeze-and-excitation network; HED: holistically-nested edge detection; ResNet: residual network; MvNet: multi-view network; WRN-PPNet: wide residual network and pyramid pool network; DKFZ: German Cancer Research Centre; EMMA: ensembles of multiple models and architectures; DenseAFPNet: dense atrous feature pyramid network; DMFNet: dilated multi-fibre network; MFNet: multi-direction fusion network; HNF-Net: high-resolution and non-local feature network; D2C2N: dilated densely connected convolutional network.
Figure 3. Comparison of segmentation results. Performance score evaluation in segmenting the whole tumour (WT), tumour core (TC) and enhancing tumour (ET), considering all 223 articles.
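The per-region performance scores compared in Figure 3 are commonly reported as Dice similarity coefficients over the three nested tumour regions. A minimal numpy sketch of that evaluation, assuming the BraTS label convention (0 = background, 1 = necrotic core, 2 = oedema, 4 = enhancing tumour); the example arrays are illustrative, not taken from any reviewed article:

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: count as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

def brats_regions(labels: np.ndarray) -> dict:
    """Derive the three evaluated regions from a label map (labels assumed)."""
    return {
        "WT": labels > 0,               # whole tumour: all tumour labels
        "TC": np.isin(labels, (1, 4)),  # tumour core: necrosis + enhancing
        "ET": labels == 4,              # enhancing tumour only
    }

# Toy 3x3 "slices" standing in for 3D label volumes.
pred = np.array([[0, 2, 2], [1, 4, 4], [0, 0, 4]])
truth = np.array([[0, 2, 2], [1, 1, 4], [0, 4, 4]])
scores = {name: dice_score(brats_regions(pred)[name], brats_regions(truth)[name])
          for name in ("WT", "TC", "ET")}
```

WT scores are usually the highest of the three, as in this toy case, because the whole-tumour mask is the largest and most forgiving region.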
Figure 4. Data samples in deep learning studies. Summary of training, validation and test data samples reported in deep learning methods. Medians of training, validation and test data samples are 285, 54 and 110, respectively.
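A patient-level split of the median size reported in Figure 4 can be sketched as follows; the helper function, the total case count of 449 and the seed are illustrative assumptions, not taken from any reviewed article:

```python
import numpy as np

def split_indices(n_cases: int, n_train: int, n_val: int, seed: int = 0):
    """Shuffle case indices and split them into train/validation/test sets."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(n_cases)
    train = order[:n_train]
    val = order[n_train:n_train + n_val]
    test = order[n_train + n_val:]
    return train, val, test

# Median sample counts from the review: 285 training, 54 validation,
# 110 test cases (449 cases in total for this sketch).
train, val, test = split_indices(285 + 54 + 110, n_train=285, n_val=54)
```

Splitting at the patient level (rather than the slice level) avoids leakage of one patient's slices across the training and test sets.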
Articles on widely used deep architectures and their technical details.
| Deep architecture | Technical details |
|---|---|
| U-Net | 3D U-Net, which synthesises information at each scale by combining local and contextual information |
| | Modified 3D U-Net with better gradient flow |
| | Modified U-Net with an upsampled component based on the nearest-neighbour algorithm and elastic transformation |
| | 2D U-Net using a biophysics-based domain adaptation method with a generative adversarial model, which synthesises known ground-truth data |
| | Modified U-Net with up skip connections, inception modules and efficient cascade training |
| | 3D U-Net with DenseNet, pre-trained on ImageNet |
| | U-Net network |
| | 3D U-Net with test-time augmentation |
| | U-Net with Dice loss function to tackle the class imbalance problem, and extensive data augmentation to prevent over-fitting |
| | Ensemble of 3D U-Nets with different hyperparameters |
| | U-Net training followed by bit-plane method output |
| | U-Net with double convolutional layers, inception modules and dense modules |
| | Modified U-Net addressing the class imbalance problem with weighted cross-entropy and generalised Dice loss functions |
| | Deep learning radiomics algorithm model with a 3D patch-based U-Net |
| | U-Net with an encoder adaptation block and densely connected fusion blocks in the decoder |
| | An ensemble of two 3D U-Nets, in which one network uses skip connections as a summation of signals in the up-sampling part, and the other uses concatenated skip connections and strided convolutions |
| | Inception modules with U-Net |
| | Multi-scale images as input to a 3D U-Net, with a 3D atrous spatial pyramid pooling layer to boost network performance |
| | U-Net training improved using large patch sizes, region-based training, additional data and a combination of loss functions |
| | U-Net with separable 3D convolution, dividing each 3D convolution block into three parallel branches |
| | Two 3D U-Nets, in which the first detects the tumour and the second segments multiple regions of the tumour |
| | U-Net |
| | A tree structure of 3D U-Nets, in which the first node predicts oedema and feeds its output to subsequent nodes to detect tumorous subregions of the oedema |
| | U-Net in an ensemble of networks |
| VGG | 3D fully connected network based on VGG, with skip connections that combine coarse high-scale information with fine low-scale information |
| | VGG-based ensemble of multiple architectures with 3D convolutions, except max-pool layers |
| | CNN based on VGG-16, initially trained on ImageNet weights and then fine-tuned with MICCAI data; relies on a pseudo-3D method that enables 3D segmentation from 2D colour-like images and ultimately gives faster segmentation |
| DeepMedic | Two-path network based on DeepMedic, which gathers low- and high-resolution features together |
| | Multi-path CNN inspired by DeepMedic, which includes large and small patches |
| | DeepMedic network |
| | Computer-aided diagnosis combining DeepMedic with radiomics features such as first-order, shape and texture features |
| | DeepMedic with additional residual connections |
| | An ensemble of two DeepMedic architectures |
| | DeepMedic-based network followed by a fully connected network to remove false positives |
| Autoencoder | Encoder-decoder 3D architecture including a variational auto-encoder branch to reconstruct the input image, which can be used as a regulariser for the shared decoder |
| | Stacked denoising auto-encoder |
| | Stacked denoising auto-encoder |
| GAN | Discriminator- and generator-based conditional generative adversarial network |
| | Adversarial network in which the discriminator is trained alongside a generator producing synthetic results; synthetic labels and ground truth are distinguished by the discriminator, whose output is fed back to the generator to improve segmentation accuracy |
| | Generative adversarial network with a coarse-to-fine generator to generate generic augmented data |
VGG: visual geometry group; GAN: generative adversarial network; CNN: convolutional neural network; MICCAI: Medical Image Computing and Computer-Assisted Intervention.
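Several U-Net entries in the table above tackle class imbalance with a generalised Dice loss, which weights each class by the inverse square of its volume so that small tumour regions are not dominated by background. A numpy sketch of one common formulation (the exact losses in the reviewed articles vary, so this is illustrative only):

```python
import numpy as np

def generalised_dice_loss(probs: np.ndarray, onehot: np.ndarray,
                          eps: float = 1e-6) -> float:
    """Generalised Dice loss over classes.

    probs:  predicted class probabilities, shape (classes, voxels)
    onehot: one-hot ground truth,          shape (classes, voxels)
    Each class is weighted by the inverse square of its volume, so rare
    classes (small tumour regions) contribute as much as large ones.
    """
    w = 1.0 / (onehot.sum(axis=1) ** 2 + eps)   # per-class inverse-volume weights
    intersect = (w * (probs * onehot).sum(axis=1)).sum()
    union = (w * (probs + onehot).sum(axis=1)).sum()
    return 1.0 - 2.0 * intersect / (union + eps)
```

A perfect prediction drives the loss towards 0, while a completely wrong one yields a loss close to 1; in a training loop the same expression would be written in the framework's tensor operations so it stays differentiable.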