Zakarya Farea Shaaf, Muhammad Mahadi Abdul Jamil, Radzi Ambar, Ahmed Abdu Alattab, Anwar Ali Yahya, Yousef Asiri.
Abstract
BACKGROUND: Left ventricle (LV) segmentation from cardiac magnetic resonance imaging (MRI) datasets is critical for evaluating global and regional cardiac function and for diagnosing cardiovascular diseases. Clinical LV metrics such as LV volume, LV mass, and ejection fraction (EF) are routinely extracted from the LV segmentation of short-axis MRI images. Manual segmentation for assessing these functions is tedious and time-consuming for the medical experts who diagnose cardiac pathologies. Therefore, a fully automated LV segmentation technique is required to help medical experts work more efficiently.
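The EF mentioned above is a direct function of the segmented LV volumes at end-diastole (EDV) and end-systole (ESV); a minimal sketch, using illustrative (hypothetical) volumes rather than values from the paper:

```python
def ejection_fraction(edv_ml: float, esv_ml: float) -> float:
    """EF (%) = stroke volume / end-diastolic volume * 100."""
    stroke_volume = edv_ml - esv_ml
    return stroke_volume / edv_ml * 100.0

# Illustrative values only: EDV = 120 mL, ESV = 50 mL
ef = ejection_fraction(120.0, 50.0)  # ~58.3%
```

In practice, EDV and ESV are obtained by summing the segmented LV areas over all short-axis slices and multiplying by the slice spacing.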
Keywords: cardiac short-axis MRI; fully convolutional network; left ventricle segmentation; medical image processing; pixel weights balancing
Year: 2022 PMID: 35204504 PMCID: PMC8871002 DOI: 10.3390/diagnostics12020414
Source DB: PubMed Journal: Diagnostics (Basel) ISSN: 2075-4418
Figure 1. LV short-axis MRI and corresponding ground truth (labels).
Current studies on LV segmentation from cardiac MRI using deep learning algorithms.
| Author/Year | Dataset | Subjects No. | Data Preparation | Deep Learning Model |
|---|---|---|---|---|
| Cui et al. [ | LVSC | 200 | Cropping using multi-scale methods; pixel normalization | Attention U-Net architecture |
| Tan et al. [ | LVSC | 200 | Resampling pixels using linear interpolation; cropping; pixel normalization; augmentation (during training) | CNN |
| Tran et al. [ | SCD, LVSC and RVSC | 45, 200 and 48 | Cropping using multi-resolution approach; augmentation | FCN |
| Khened et al. [ | ACDC-2017, LVSC and Kaggle | 150, 200 and 500 | Cropping; augmentation | FCN DenseNet |
| Wang et al. [ | CAP | 450 | Cropping; augmentation; pixel normalization | FCN |
| Wu et al. [ | SCD | 45 | Image filtering; cropping/downsampling | CNN + U-Net |
| Wu et al. [ | SCD | 45 | Augmentation | GAN |
| Dong et al. [ | MICCAI 2019 | 56 | Pixel normalization | Parallel CNNs |
| Du et al. [ | 2900 collected images | 156 | Cropping; normalization | Multi-task CNN + RNN |
| Abdeltawab et al. [ | ACDC-2017 | 150 | Cropping | Two FCNs |
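Two preparation steps recurring in the table above, cropping and pixel normalization, can be sketched as follows (a pure-Python illustration on nested lists; real pipelines typically operate on NumPy arrays):

```python
def center_crop(image, out_h, out_w):
    """Crop a 2-D image (list of rows) to out_h x out_w around its center,
    discarding background far from the heart region."""
    top = (len(image) - out_h) // 2
    left = (len(image[0]) - out_w) // 2
    return [row[left:left + out_w] for row in image[top:top + out_h]]

def min_max_normalize(image):
    """Rescale pixel intensities to the [0, 1] range."""
    flat = [p for row in image for p in row]
    lo, hi = min(flat), max(flat)
    return [[(p - lo) / (hi - lo) for p in row] for row in image]
```

Augmentation (rotations, flips, shifts) is usually applied on top of these during training to enlarge the effective dataset.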
Figure 2. Diagram of the proposed model.
Figure 3. Visualization of MR images from the EMIDEC dataset.
Figure 4. Pixels of an image labeled by the usual conversion (left) and by conversion with XMedCon (right).
Figure 5. Architecture layers of the proposed FCN.
Figure 6. Schematic architecture of the U-Net model.
Layer analysis of the proposed FCN.
| Layer Number | Layer Type | Output Size | Learnable Parameters |
|---|---|---|---|
| 1 | Image input | 256 × 192 × 1 | - |
| 2 | Convolution | 256 × 192 × 16 | Weights 3 × 3 × 1 × 16 |
| 3 | Batch normalization | 256 × 192 × 16 | Offset 1 × 1 × 16 |
| 4 | ReLU | 256 × 192 × 16 | - |
| 5 | Max pooling | 128 × 96 × 16 | - |
| 6 | Convolution | 128 × 96 × 32 | Weights 3 × 3 × 16 × 32 |
| 7 | Batch normalization | 128 × 96 × 32 | Offset 1 × 1 × 32 |
| 8 | ReLU | 128 × 96 × 32 | - |
| 9 | Convolution | 128 × 96 × 64 | Weights 3 × 3 × 32 × 64 |
| 10 | Batch normalization | 128 × 96 × 64 | Offset 1 × 1 × 64 |
| 11 | ReLU | 128 × 96 × 64 | - |
| 12 | Transposed convolution | 256 × 192 × 16 | Weights 4 × 4 × 16 × 64 |
| 13 | Convolution | 256 × 192 × 2 | Weights 3 × 3 × 16 × 2 |
| 14 | Softmax | 256 × 192 × 2 | - |
| 15 | Pixel classification layer | - | - |
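The spatial sizes in the layer table can be verified by simple bookkeeping. The sketch below assumes 'same' padding for the 3 × 3 convolutions, stride-2 2 × 2 max pooling, and a stride-2 4 × 4 transposed convolution; the strides and padding are assumptions consistent with the listed output sizes, not values stated in the table:

```python
def conv_same(h, w):        # 3 x 3 convolution, stride 1, zero padding 1
    return h, w

def max_pool(h, w):         # 2 x 2 max pooling, stride 2
    return h // 2, w // 2

def transposed_conv(h, w):  # 4 x 4 kernel, stride 2: doubles the spatial size
    return h * 2, w * 2

h, w = 256, 192              # layer 1: image input
h, w = conv_same(h, w)       # layer 2: 256 x 192 x 16
h, w = max_pool(h, w)        # layer 5: 128 x 96 x 16
h, w = conv_same(h, w)       # layer 6: 128 x 96 x 32
h, w = conv_same(h, w)       # layer 9: 128 x 96 x 64
h, w = transposed_conv(h, w) # layer 12: upsamples back to 256 x 192
h, w = conv_same(h, w)       # layer 13: 256 x 192 x 2 (two classes)
```

The final 1-channel-per-class map then passes through softmax and the pixel classification layer, so the prediction has the same resolution as the input image.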
Optimization algorithms’ performance in the trained network at mini-batch size 4.
| Learning Rate | Epochs | SGDM % | ADAM % | RMSProp % |
|---|---|---|---|---|
| 0.01 | 30 | 74.67 | 60.18 | 50.25 |
| 0.01 | 50 | 76.41 | 54.39 | 54.85 |
| 0.01 | 100 | 82.45 | 51.34 | 65.11 |
| 0.01 | 150 | 76.81 | 60.69 | 47.22 |
| 0.001 | 30 | 70.68 | 78.91 | 81.76 |
| 0.001 | 50 | 74.18 | 81.27 | 83.82 |
| 0.001 | 100 | 77.27 | 87.14 | 85.65 |
| 0.001 | 150 | 76.92 | 91.18 | 89.20 |
Optimization algorithms’ performance in the trained network at mini-batch size 8.
| Learning Rate | Epochs | SGDM % | ADAM % | RMSProp % |
|---|---|---|---|---|
| 0.001 | 30 | 60.63 | 79.85 | 78.00 |
| 0.001 | 50 | 69.00 | 81.17 | 81.69 |
| 0.001 | 100 | 70.65 | 87.60 | 81.96 |
| 0.001 | 150 | 60.37 | 90.42 | 87.44 |
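One way to read the two optimization tables: collect accuracy keyed by (optimizer, learning rate, mini-batch size, epochs) and take the maximum. The values below are transcribed from the 150-epoch rows above:

```python
# Accuracy (%) by (optimizer, learning rate, mini-batch size, epochs),
# transcribed from the two tables above (150-epoch, lr = 0.001 rows).
results = {
    ("SGDM", 0.001, 4, 150): 76.92,
    ("ADAM", 0.001, 4, 150): 91.18,
    ("RMSProp", 0.001, 4, 150): 89.20,
    ("SGDM", 0.001, 8, 150): 60.37,
    ("ADAM", 0.001, 8, 150): 90.42,
    ("RMSProp", 0.001, 8, 150): 87.44,
}
best_config = max(results, key=results.get)  # ADAM, lr 0.001, mini-batch 4
```

This matches the paper's choice of Adam with a learning rate of 0.001 and mini-batch size 4 as the best-performing configuration.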
Figure 7. The mini-batch accuracy and mini-batch loss when training the proposed network.
Comparison of evaluation metrics between trained models and the proposed FCN model (√ represents conversion by XMedCon and × represents conversion by coding).
| Model | Conversion by XMedCon | Jaccard Index | Sensitivity | Specificity | PPV | NPV | DSC |
|---|---|---|---|---|---|---|---|
| U-Net + SGDM | √ | 0.84 | 0.86 | 0.98 | 0.98 | 0.89 | 0.91 |
| U-Net | × | 0.60 | 0.78 | 0.85 | 0.72 | 0.89 | 0.74 |
| U-Net + ADAM | √ | 0.84 | 0.93 | 0.91 | 0.89 | 0.94 | 0.91 |
| The proposed FCN | √ | 0.87 | 0.98 | 0.94 | 0.89 | 0.99 | 0.93 |
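The metrics in the table above all derive from per-pixel confusion counts (true/false positives and negatives against the ground-truth labels); a minimal sketch of their standard definitions:

```python
def segmentation_metrics(tp, fp, tn, fn):
    """Per-pixel evaluation metrics from confusion counts."""
    return {
        "jaccard": tp / (tp + fp + fn),       # intersection over union
        "sensitivity": tp / (tp + fn),        # recall
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),                # positive predictive value
        "npv": tn / (tn + fn),                # negative predictive value
        "dsc": 2 * tp / (2 * tp + fp + fn),   # Dice similarity coefficient
    }
```

Note that DSC and the Jaccard index are monotonically related (DSC = 2J / (1 + J)), which is why the two columns rank the models identically.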
Performance results of the trained models (√ represents conversion by XMedCon and × represents conversion by coding).
| Model | Conversion by XMedCon | Global Accuracy | Mean Accuracy | Mean IoU | Weighted IoU | Mean BF-Score |
|---|---|---|---|---|---|---|
| U-Net + ADAM | √ | 0.93 | 0.92 | 0.86 | 0.86 | 0.89 |
| U-Net | × | 0.83 | 0.82 | 0.69 | 0.71 | 0.67 |
| U-Net + SGDM | √ | 0.92 | 0.92 | 0.85 | 0.85 | 0.85 |
| The proposed FCN | √ | 0.95 | 0.96 | 0.90 | 0.91 | 0.89 |
Figure 8. Pixels of classes before balancing (left) and after balancing (right).
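The pixel-weight balancing illustrated in Figure 8 compensates for the small LV region relative to the background. Median-frequency balancing is one common scheme for this; the sketch below uses hypothetical pixel counts, and the exact formula is an assumption rather than one stated in this record:

```python
def median_frequency_weights(pixel_counts):
    """Weight each class by (median class frequency / its own frequency),
    so under-represented classes (e.g. LV) contribute more to the loss.
    Uses the upper median when the class count is even."""
    total = sum(pixel_counts.values())
    freq = {c: n / total for c, n in pixel_counts.items()}
    sorted_f = sorted(freq.values())
    median_f = sorted_f[len(sorted_f) // 2]
    return {c: median_f / f for c, f in freq.items()}

# Hypothetical counts: background pixels vastly outnumber LV pixels
weights = median_frequency_weights({"background": 900_000, "LV": 100_000})
```

Without such weighting, a network can reach high pixel accuracy by predicting background everywhere, which is exactly the failure mode the balanced confusion matrices in Figure 9 rule out.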
Figure 9. Confusion matrices of the trained models.
Figure 10. Segmentation results comparing the U-Net models under various conditions with the proposed FCN.
Performance comparison between the proposed model and other state-of-the-art models in automatic LV segmentation.
| Method | Jaccard Index | Sensitivity | Specificity | PPV | NPV | Dice |
|---|---|---|---|---|---|---|
| Cui et al. [ | 0.75 | 0.87 | 0.92 | 0.87 | 0.93 | - |
| Tan et al. [ | 0.77 | 0.88 | 0.95 | 0.86 | 0.96 | - |
| Tran et al. [ | 0.74 | 0.83 | 0.96 | 0.86 | 0.95 | - |
| Khened et al. [ | 0.74 | 0.84 | 0.96 | 0.87 | 0.95 | 0.84 |
| Wang et al. [ | 0.70 | 0.90 | 0.99 | 0.77 | 0.99 | 0.80 |
| The proposed FCN | 0.87 | 0.98 | 0.94 | 0.89 | 0.99 | 0.93 |