Zeynettin Akkus, Alfiia Galimzianova, Assaf Hoogi, Daniel L Rubin, Bradley J Erickson.
Abstract
Quantitative analysis of brain MRI is routine for many neurological diseases and conditions and relies on accurate segmentation of structures of interest. Deep learning-based segmentation approaches for brain MRI are gaining interest due to their self-learning and generalization ability over large amounts of data. As deep learning architectures mature, they gradually outperform previous state-of-the-art classical machine learning algorithms. This review aims to provide an overview of current deep learning-based segmentation approaches for quantitative brain MRI. First, we review the current deep learning architectures used for segmentation of anatomical brain structures and brain lesions. Next, the performance, speed, and properties of deep learning approaches are summarized and discussed. Finally, we provide a critical assessment of the current state and identify likely future developments and trends.
Keywords: Brain lesion segmentation; Convolutional neural network; Deep learning; Quantitative brain MRI
Year: 2017 PMID: 28577131 PMCID: PMC5537095 DOI: 10.1007/s10278-017-9983-4
Source DB: PubMed Journal: J Digit Imaging ISSN: 0897-1889 Impact factor: 4.056
Fig. 1 A schematic representation of a convolutional neural network (CNN) training process
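Fig. 1 depicts the standard supervised training loop: a forward pass, a loss gradient computed against the reference labels, and a weight update. A minimal numpy sketch of that loop on a toy single-layer logistic model (illustrative only; not any architecture evaluated in the review):

```python
import numpy as np

# Toy data: 32 samples, 10 features; the label is simply the sign of feature 0.
rng = np.random.default_rng(0)
X = rng.standard_normal((32, 10))
y = (X[:, 0] > 0).astype(float)

w = np.zeros(10)                             # model weights
for step in range(200):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))       # forward pass (sigmoid)
    grad = X.T @ (p - y) / len(y)            # backward pass: cross-entropy gradient
    w -= 0.5 * grad                          # weight update (learning rate 0.5)

p = 1.0 / (1.0 + np.exp(-(X @ w)))
train_acc = float(((p > 0.5) == (y > 0.5)).mean())
```

A CNN replaces the single weight vector with stacked convolutional layers, but the loop structure shown in Fig. 1 is the same.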
A summary of popular quantitative measures of brain MRI segmentation quality and their mathematical formulation with respect to the number of false positives (FP), true positives (TP), and false negatives (FN) at voxel level and lesion level (FPL, TPL, and FNL, respectively). ∂S and ∂R are the sets of lesion border pixels/voxels for the tested and the reference segmentations, and dm(v, V) is the minimum of the Euclidean distances between a voxel v and voxels in a set V.
| Metric of segmentation quality | Mathematical description |
|---|---|
| True positive rate, TPR | TPR = TP / (TP + FN) |
| Positive predictive value, PPV | PPV = TP / (TP + FP) |
| Dice similarity coefficient, DSC | DSC = 2TP / (2TP + FP + FN) |
| Volume difference rate, VDR | VDR = \|FP − FN\| / (TP + FN) |
| Hausdorff distance, HD | HD = max { sup_{r ∈ ∂R} dm(r, ∂S), sup_{s ∈ ∂S} dm(s, ∂R) } |
| Average symmetric surface distance, ASSD | ASSD = ( Σ_{s ∈ ∂S} dm(s, ∂R) + Σ_{r ∈ ∂R} dm(r, ∂S) ) / ( \|∂S\| + \|∂R\| ) |
| Lesion-wise true positive rate, LTPR | LTPR = TPL / (TPL + FNL) |
| Lesion-wise positive predictive value, LPPV | LPPV = TPL / (TPL + FPL) |
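The voxel-level overlap metrics above can be computed directly from a pair of binary masks. A minimal numpy sketch (the function name is illustrative, not from the paper):

```python
import numpy as np

def segmentation_metrics(seg, ref):
    """Voxel-wise overlap metrics for binary segmentation masks.

    seg, ref: boolean numpy arrays of the same shape, holding the tested
    and the reference segmentations, respectively.
    """
    tp = np.logical_and(seg, ref).sum()       # true positives
    fp = np.logical_and(seg, ~ref).sum()      # false positives
    fn = np.logical_and(~seg, ref).sum()      # false negatives
    return {
        "TPR": tp / (tp + fn),                # true positive rate
        "PPV": tp / (tp + fp),                # positive predictive value
        "DSC": 2 * tp / (2 * tp + fp + fn),   # Dice similarity coefficient
        "VDR": abs(fp - fn) / (tp + fn),      # volume difference rate
    }

# Toy example: a 4x4 reference "lesion" and a segmentation shifted by one row.
ref = np.zeros((8, 8), dtype=bool); ref[2:6, 2:6] = True
seg = np.zeros((8, 8), dtype=bool); seg[3:7, 2:6] = True
m = segmentation_metrics(seg, ref)  # TPR, PPV, and DSC are all 0.75 here
```

The surface-distance metrics (HD, ASSD) additionally require extracting the border voxel sets ∂S and ∂R, e.g. by subtracting an eroded mask from the mask itself.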
Fig. 2 Schematic illustration of a patch-wise CNN architecture for a brain tumor segmentation task
Fig. 3 Schematic illustration of a semantic-wise CNN architecture for a brain tumor segmentation task
Fig. 4 Schematic illustration of a cascaded CNN architecture for a brain tumor segmentation task, where the output of the first network (CNN 1) is used in addition to image data as a refined input to the second network (CNN 2), which provides the final segmentation
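The three CNN styles differ mainly in how inputs and outputs are shaped: a patch-wise network classifies the center voxel of each cropped patch, a semantic-wise network maps a whole image to a dense label map, and a cascaded setup feeds CNN 1's output map into CNN 2 alongside the image. A minimal numpy sketch of patch-wise input preparation (the 33×33 patch size and function name are illustrative assumptions, not taken from a specific paper):

```python
import numpy as np

def extract_patches(slice_2d, centers, size=33):
    """Crop a square patch around each candidate voxel of a 2D MRI slice.

    Zero-padding the slice first lets patches near the border keep the
    full `size` x `size` shape expected by a patch-wise CNN.
    """
    half = size // 2
    padded = np.pad(slice_2d, half, mode="constant")
    # After padding, the patch for center (r, c) starts at (r, c) in `padded`.
    patches = [padded[r:r + size, c:c + size] for r, c in centers]
    return np.stack(patches)  # shape: (n_patches, size, size)

slice_2d = np.random.rand(240, 240)                      # one MRI slice
patches = extract_patches(slice_2d, [(10, 10), (120, 120)])
```

A semantic-wise network would instead consume `slice_2d` whole and return a per-voxel label map of the same spatial size; a cascaded setup would concatenate CNN 1's probability map with the image channels as CNN 2's input.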
Deep learning approaches for brain structure segmentation
| Authors | CNN Style | Dim | Accuracy | Data |
|---|---|---|---|---|
| Zhang et al. 2015 | Patch-wise | 2D | DSC 83.5% (CSF), 85.2% (GM), 86.4% (WM) | Private data (10 healthy infants) |
| Nie et al. 2016 | Semantic-wise | 2D | DSC 85.5% (CSF), 87.3% (GM), 88.7% (WM) | Private data (10 healthy infants) |
| de Brebisson et al. 2015 | Patch-wise | 2D/3D | Overall DSC 72.5% ± 16.3% | MICCAI 2012 multi-atlas labeling |
| Moeskops et al. 2016 | Patch-wise | 2D/3D | Overall DSC 73.53% | MICCAI 2012 multi-atlas labeling |
| Bao et al. 2016 | Patch-wise | 2D | DSC 82.2% / 85% | IBSR / LPBA40 |
Top ten ranking of algorithms in the MRBrainS challenge (complete list available at: http://mrbrains13.isi.uu.nl/results.php)
| Rank | Team name | Submission name | Sequence used | Speed |
|---|---|---|---|---|
| 1 | CU_DL | 3D Deep learning; voxnet1 | T1; T1_IR; FLAIR | ~2 min |
| 2 | CU_DL2 | 3D Deep learning; voxnet2 | T1; T1_IR; FLAIR | ~2 min |
| 3 | MDGRU | Multi-dimensional gated recurrent units | T1; T1_IR; FLAIR | ~2 min |
| 4 | PyraMiD-LSTM2 | NOCC with rounds | T1; T1_IR; FLAIR | ~2 min |
| 5 | FBI/LMB Freiburg | U-Net (3D) | T1; T1_IR; FLAIR | ~2 min |
| 6 | IDSIA | PyraMiD-LSTM | T1; T1_IR; FLAIR | ~2 min |
| 7 | STH | Hybrid ANN-based auto-context method | T1; T1_IR; FLAIR | ~5 min |
| 8 | ISI-Neonatology | Multi-stage voxel classification | T1 | ~1.5 h |
| 9 | UNC-IDEA | LINKS: Learning-based multi-source integration | T1; T1_IR; FLAIR | ~3 min |
| 10 | MNAB2 | Random forests | T1; T1_IR; FLAIR | ~25 min |
Deep learning approaches for quantification of brain lesions
| Authors | Aim | CNN Style | Dim | Accuracy | Dataset |
|---|---|---|---|---|---|
| Havaei et al. 2016 | Tumor segmentation | Patch-wise | 2D | DSC 0.88 (complete), 0.79 (core), 0.73 (enhancing) | BRATS-2013 |
| Pereira et al. 2016 | Tumor segmentation | Patch-wise | 2D | DSC 0.88 (complete), 0.83 (core), 0.77 (enhancing) | BRATS-2013 |
| Zhao and Jia 2015 | Tumor segmentation | Patch-wise | 2D | Overall accuracy 0.81 | BRATS-2013 |
| Kamnitsas et al. 2016 | Tumor segmentation | Patch-wise | 3D | DSC 0.90 (complete), 0.75 (core), 0.73 (enhancing) | BRATS-2015 |
| Dvorak et al. 2015 | Tumor segmentation | Patch-wise | 2D | DSC 0.83 (complete), 0.75 (core), 0.77 (enhancing) | BRATS-2014 |
| Brosch et al. 2016 | MS segmentation | Semantic-wise | 3D | DSC 0.68 (ISBI); DSC 0.84 (MICCAI) | MICCAI 2008; ISBI 2015 |
| Dou et al. 2016 | Cerebral microbleed detection | Cascaded (semantic/patch-wise) | 3D | Sensitivity 98.29% | Private data (320 subjects) |
| Maier et al. 2015 | Ischemic stroke detection | Patch-wise | 2D | DSC 0.67 ± 0.18; HD 29.64 ± 24.6 | Private data (37 subjects) |
| Akkus et al. 2016 | Tumor genomic prediction | Patch-wise | 2D | 0.93 (sensitivity), 0.82 (specificity), 0.88 (accuracy) | Private data (159 subjects) |