Abstract
In-scanner head motion often degrades MRI scans and is a major source of error in diagnosing brain abnormalities. Researchers have explored various approaches, including blind and nonblind deconvolution, to correct motion artifacts in MRI scans. Inspired by the recent success of deep learning models in medical image analysis, we investigate the efficacy of employing generative adversarial networks (GANs) to address motion blur in brain MRI scans. We cast the problem as a blind deconvolution task in which a neural network is trained to estimate the blurring kernel that produced the observed corruption. Specifically, our study explores a new approach under the sparse coding paradigm, where every ground-truth corrupting kernel is assumed to be a "combination" of a relatively small universe of "basis" kernels. This assumption rests on the intuition that, on small distance scales, patients' movements follow simple curves and that complex motions can be obtained by combining a number of simple ones. We show that, with a suitably dense basis, a neural network can effectively estimate the degrading kernel and reverse some of the damage in motion-affected real-world scans. To this end, we generated 10,000 continuous, curvilinear kernels in random positions and directions that are likely to uniformly populate the space of corrupting kernels in real-world scans. We further generated a large dataset of 225,000 pairs of sharp and blurred MR images to facilitate training effective deep learning models. Our experimental results demonstrate the viability of the proposed approach on both synthetic and real-world MRI scans. Our study further suggests there is merit in exploring separate models for the sagittal, axial, and coronal planes.
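The abstract's sparse-coding assumption — each corrupting kernel is a combination of a few "basis" kernels — can be illustrated with a small sketch. This is our own illustration, not the paper's algorithm; the function name and the greedy selection strategy are assumptions:

```python
import numpy as np

def sparse_kernel_fit(kernel, basis, k=5):
    """Approximate a blur kernel as a combination of a few basis kernels.

    Greedy sketch: pick the k basis kernels most correlated with the
    target (reasonable when the basis atoms are roughly orthogonal),
    then solve a least-squares fit over just those atoms.
    """
    y = kernel.ravel()
    B = np.stack([b.ravel() for b in basis], axis=1)  # columns = basis kernels
    scores = np.abs(B.T @ y)                          # correlation with target
    chosen = np.argsort(scores)[-k:]                  # k best-matching atoms
    coef, *_ = np.linalg.lstsq(B[:, chosen], y, rcond=None)
    approx = (B[:, chosen] @ coef).reshape(kernel.shape)
    return chosen, coef, approx
```

With an orthogonal basis and a target that truly is a combination of k atoms, the fit is exact; real motion kernels would only be approximated.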
Keywords: MRI; deep learning; generative adversarial network (GAN); motion blur
Year: 2022 PMID: 35448211 PMCID: PMC9027264 DOI: 10.3390/jimaging8040084
Source DB: PubMed Journal: J Imaging ISSN: 2313-433X
Figure 1. Architecture of the MC-GAN Model. (a) Overall GAN structure. (b) MC-GAN generator. (c) MC-GAN discriminator.
Figure 2. Sample Synthetic Images. Column 1 presents the original sharp images. Columns 2–7 show generated motion-affected images with kernel lengths of 5, 7, 9, 11, 13, and 15, respectively. The corresponding convolutional kernel is shown above each synthetic image. Images are best viewed zoomed in.
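The synthetic images in Figure 2 are produced by convolving sharp slices with continuous, curvilinear motion kernels. A rough sketch of the idea — a slowly turning random walk rasterized into a normalized kernel, then applied via the blurred = sharp ⊛ kernel forward model. This is our illustration; the helper names and parameters are assumptions, not the authors' generator:

```python
import numpy as np
from scipy.ndimage import convolve

def random_motion_kernel(size=9, num_steps=40, seed=None):
    """Continuous, curvilinear blur kernel: a short random walk whose
    heading changes slowly, rasterized onto a size x size grid and
    normalized so the kernel sums to 1."""
    rng = np.random.default_rng(seed)
    kernel = np.zeros((size, size))
    pos = np.array([size / 2, size / 2])   # start at the kernel centre
    angle = rng.uniform(0, 2 * np.pi)      # random initial direction
    for _ in range(num_steps):
        angle += rng.normal(scale=0.3)     # smooth change of direction
        pos = pos + 0.25 * np.array([np.cos(angle), np.sin(angle)])
        r, c = np.clip(pos.astype(int), 0, size - 1)
        kernel[r, c] += 1.0
    return kernel / kernel.sum()

def simulate_motion_blur(image, kernel):
    """Forward model assumed in the paper: blurred = sharp (*) kernel."""
    return convolve(image.astype(float), kernel, mode="nearest")
```

Varying the walk length and step size would correspond to the different kernel lengths shown in Figure 2.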
Figure 3. Alignment of Input and Target Images Using Matching Landmarks. (Left): image with synthetic blur after applying a random kernel. (Right): target image.
Quantitative Evaluation on Synthetic Data Across Different Degradation Levels.
| PSNR Level | Model | RMSE: Degraded vs. Target | RMSE: Corrected vs. Target | RMSE Reduction (%) | PSNR (dB): Degraded vs. Target | PSNR (dB): Corrected vs. Target | PSNR Gain (dB) |
|---|---|---|---|---|---|---|---|
| <17 | MC-GAN (x) | 0.162 (0.022) | 0.115 (0.034) | | 15.85 (1.04) | 19.18 (2.50) | |
| | MC-GAN (y) | 0.161 (0.025) | 0.097 (0.035) | | 15.93 (1.06) | 20.82 (2.95) | |
| | MC-GAN (z) | 0.167 (0.028) | 0.101 (0.045) | | 15.66 (1.22) | 20.60 (3.30) | |
| | MC-GAN (xyz) | 0.163 (0.024) | 0.110 (0.035) | | 15.81 (1.10) | 19.56 (2.59) | |
| | x-direction | 0.162 (0.022) | 0.120 (0.032) | 26.43 | 15.85 (1.04) | 18.75 (2.26) | 2.90 |
| | y-direction | 0.161 (0.025) | 0.097 (0.031) | 39.58 | 15.93 (1.06) | 20.61 (2.47) | 4.67 |
| | z-direction | 0.167 (0.028) | 0.102 (0.039) | 38.97 | 15.66 (1.22) | 20.33 (2.73) | 4.67 |
| 17–18 | MC-GAN (x) | 0.133 (0.004) | 0.097 (0.023) | | 17.53 (0.27) | 20.53 (2.02) | |
| | MC-GAN (y) | 0.132 (0.005) | 0.086 (0.025) | | 17.57 (0.30) | 21.72 (2.50) | |
| | MC-GAN (z) | 0.132 (0.004) | 0.090 (0.028) | | 17.56 (0.29) | 21.17 (2.68) | |
| | MC-GAN (xyz) | 0.133 (0.004) | 0.095 (0.021) | | 17.55 (0.28) | 20.67 (1.96) | |
| | x-direction | 0.133 (0.004) | 0.100 (0.021) | 24.45 | 17.53 (0.27) | 20.14 (1.74) | 2.61 |
| | y-direction | 0.132 (0.005) | 0.089 (0.021) | 32.37 | 17.57 (0.30) | 21.20 (2.01) | 3.62 |
| | z-direction | 0.132 (0.004) | 0.090 (0.022) | 30.74 | 17.56 (0.29) | 21.12 (2.05) | 3.55 |
| 18–19 | MC-GAN (x) | 0.120 (0.004) | 0.085 (0.019) | | 18.45 (0.28) | 21.57 (1.89) | |
| | MC-GAN (y) | 0.118 (0.004) | 0.080 (0.022) | | 18.54 (0.28) | 22.30 (2.34) | |
| | MC-GAN (z) | 0.119 (0.004) | 0.079 (0.023) | | 18.52 (0.28) | 22.36 (2.45) | |
| | MC-GAN (xyz) | 0.119 (0.004) | 0.085 (0.019) | | 18.50 (0.29) | 21.60 (1.92) | |
| | x-direction | 0.120 (0.004) | 0.090 (0.017) | 24.77 | 18.45 (0.28) | 21.07 (1.62) | 2.62 |
| | y-direction | 0.118 (0.004) | 0.084 (0.019) | 29.04 | 18.54 (0.28) | 21.73 (1.96) | 3.20 |
| | z-direction | 0.119 (0.004) | 0.082 (0.020) | 31.30 | 18.52 (0.29) | 22.02 (2.04) | 3.50 |
| 19–20 | MC-GAN (x) | 0.107 (0.003) | 0.077 (0.016) | | 19.43 (0.28) | 22.45 (1.71) | |
| | MC-GAN (y) | 0.106 (0.004) | 0.071 (0.018) | | 19.48 (0.29) | 23.26 (2.14) | |
| | MC-GAN (z) | 0.106 (0.004) | 0.071 (0.019) | | 19.48 (0.29) | 23.31 (2.26) | |
| | MC-GAN (xyz) | 0.106 (0.004) | 0.076 (0.016) | | 19.47 (0.29) | 22.57 (1.76) | |
| | x-direction | 0.107 (0.003) | 0.083 (0.015) | 22.79 | 19.43 (0.28) | 21.80 (1.49) | 2.37 |
| | y-direction | 0.106 (0.004) | 0.075 (0.016) | 29.66 | 19.48 (0.29) | 22.72 (1.80) | 3.24 |
| | z-direction | 0.106 (0.004) | 0.073 (0.015) | 31.05 | 19.48 (0.29) | 22.88 (1.73) | 3.40 |
| >20 | MC-GAN (x) | 0.089 (0.009) | 0.067 (0.012) | | 21.09 (0.97) | 23.61 (1.53) | |
| | MC-GAN (y) | 0.088 (0.010) | 0.061 (0.014) | | 21.19 (1.13) | 24.50 (1.90) | |
| | MC-GAN (z) | 0.089 (0.010) | 0.064 (0.015) | | 21.11 (1.06) | 24.06 (1.89) | |
| | MC-GAN (xyz) | 0.088 (0.010) | 0.066 (0.013) | | 21.14 (1.08) | 23.76 (1.69) | |
| | x-direction | 0.089 (0.009) | 0.071 (0.012) | 20.44 | 21.09 (0.97) | 23.14 (1.42) | 2.05 |
| | y-direction | 0.088 (0.010) | 0.064 (0.013) | 27.33 | 21.19 (1.13) | 24.06 (1.69) | 2.87 |
| | z-direction | 0.089 (0.010) | 0.067 (0.014) | 24.22 | 21.11 (1.06) | 23.62 (1.71) | 2.52 |
The “Degraded vs. Target” columns report the discrepancy (RMSE) and similarity (PSNR) between blurred scans and their artifact-free counterparts in each category; the “Corrected vs. Target” columns report the same measures between model-corrected images and the targets. Values were computed after first scaling the images to the range [0, 255]. The numbers in parentheses are standard deviations. The “Reduction (%)” and “Gain” columns give each model’s overall RMSE reduction and PSNR gain. The x-, y-, and z-direction rows break down the performance of MC-GAN (xyz) along the sagittal, axial, and coronal planes, respectively.
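The footnote's metrics can be sketched as follows. Note that the reported RMSE values appear to be on a [0, 1] scale (e.g., RMSE 0.162 pairs with PSNR ≈ 15.8 dB = 20·log10(1/0.162)), so this hypothetical helper — our illustration, not the authors' code — normalizes the pixel RMSE by 255 after scaling:

```python
import numpy as np

def rmse_psnr(reference, test):
    """Normalized pixel-wise RMSE and PSNR (dB) between two images
    that have been scaled to [0, 255].

    RMSE is divided by 255 so it lands on the [0, 1] scale the table
    appears to use, with which PSNR = 20*log10(1/RMSE) is consistent.
    """
    ref = reference.astype(float)
    tst = test.astype(float)
    rmse = np.sqrt(np.mean((ref - tst) ** 2)) / 255.0
    psnr = 20 * np.log10(1.0 / rmse) if rmse > 0 else np.inf
    return rmse, psnr
```

The per-model "Gain" is then simply the mean corrected PSNR minus the mean degraded PSNR, and "Reduction (%)" the relative drop in RMSE.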
Figure 4. Visual Assessment of MC-GAN on Reducing Synthetic Motion Blurs. Left column: simulated motion blurs using random kernels (Section 4.2.2). Middle column: model-corrected output. Right column: ground-truth images. Rows, from top to bottom, show the sagittal, coronal, and axial planes, respectively.
Quantitative Evaluation on Real-World ABIDE Data.
| Models | PIQE: Degraded | PIQE: Corrected | Reduction (%) |
|---|---|---|---|
| MC-GAN (x) | 9.09 (3.77) | 7.98 (5.23) | |
| MC-GAN (y) | 12.17 (6.62) | 9.01 (7.52) | |
| MC-GAN (z) | 12.45 (10.65) | 6.86 (5.05) | |
| MC-GAN (xyz) | 11.24 (7.71) | 9.11 (7.00) | |
| x-direction | 9.09 (3.77) | 8.38 (5.65) | 7.84 |
| y-direction | 12.17 (6.62) | 9.75 (7.13) | 19.92 |
| z-direction | 12.45 (10.65) | 9.19 (7.14) | 26.18 |
PIQE is a no-reference quality metric (lower is better), so no artifact-free targets are required: the “Degraded” column reports the PIQE of the original blurred scans, and the “Corrected” column reports the PIQE of the model-corrected images. The numbers in parentheses are standard deviations. The “Reduction (%)” column gives each model’s relative PIQE reduction. The x-, y-, and z-direction rows break down the performance of MC-GAN (xyz) along the sagittal, axial, and coronal planes, respectively.
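PIQE itself is a block-based no-reference estimator (available, e.g., as MATLAB's `piqe`); a common Python implementation is not standard. As a stand-in illustration of no-reference scoring — explicitly not PIQE — the variance-of-Laplacian sharpness proxy captures the same idea that a single image can be scored without a ground-truth target:

```python
import numpy as np
from scipy.ndimage import laplace

def sharpness_proxy(image):
    """No-reference sharpness score: variance of the Laplacian.

    Motion blur suppresses high-frequency content, which lowers the
    Laplacian variance; unlike PIQE, a HIGHER value here means a
    sharper (better) image.
    """
    return float(laplace(image.astype(float)).var())
```

A corrected scan should score higher than its degraded input under this proxy, mirroring the PIQE reductions reported in the table (where lower is better).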
Figure 5. Visual Assessment of MC-GAN on Real-world Motion-affected Images. Images under the “Model Input” columns are the original MR images; the model-corrected output is displayed to their right. The two rightmost columns show zoomed-in views of the regions indicated by the red boxes. Samples are selected from the sagittal (a), coronal (b,d), and axial (c) planes.