Dapeng Cheng, Chao Chen, Mao Yanyan, Panlu You, Xingdan Huang, Jiale Gai, Feng Zhao, Ning Mao.
Abstract
Current brain imaging modality transfer techniques convert data from one modality in one domain to another. In clinical diagnosis, multiple modalities can often be acquired in the same scanning field, and it is more beneficial to synthesize missing modal data by exploiting the diversity of the available multimodal data. We therefore introduce a self-supervised learning cycle-consistent generative adversarial network (BSL-GAN) for brain imaging modality transfer. The framework constructs a multi-branch input, which enables it to learn the diversity characteristics of multimodal data. In addition, supervision information is mined from large-scale unsupervised data by establishing auxiliary tasks, and the network is trained with this constructed supervision, which not only ensures similarity between the input and output modal images but also learns representations valuable for downstream tasks.
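The abstract does not state BSL-GAN's loss functions. As a reference point only, the standard cycle-consistent GAN objective that such a framework builds on combines an adversarial term with a cycle-consistency (reconstruction) term; the notation below (generators G and F, discriminators D_X and D_Y, weight λ) is assumed here, not taken from the paper:

```latex
% Reference formulation only; BSL-GAN adds self-supervised auxiliary tasks on top.
\begin{align*}
\mathcal{L}_{\mathrm{GAN}}(G, D_Y) &= \mathbb{E}_{y}\left[\log D_Y(y)\right]
  + \mathbb{E}_{x}\left[\log\left(1 - D_Y(G(x))\right)\right] \\
\mathcal{L}_{\mathrm{cyc}}(G, F) &= \mathbb{E}_{x}\left[\lVert F(G(x)) - x \rVert_1\right]
  + \mathbb{E}_{y}\left[\lVert G(F(y)) - y \rVert_1\right] \\
\mathcal{L}_{\mathrm{total}} &= \mathcal{L}_{\mathrm{GAN}}(G, D_Y)
  + \mathcal{L}_{\mathrm{GAN}}(F, D_X) + \lambda\,\mathcal{L}_{\mathrm{cyc}}(G, F)
\end{align*}
```

The three curves in Figure 4 (generator, discriminator, and reconstruction loss) plausibly correspond to terms of this kind tracked during training.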
Keywords: auxiliary tasks; brain imaging; generative adversarial network; multiple modal; self-supervised learning
Year: 2022 PMID: 36117623 PMCID: PMC9477095 DOI: 10.3389/fnins.2022.920981
Source DB: PubMed Journal: Front Neurosci ISSN: 1662-453X Impact factor: 5.152
Figure 1. (A–C) BSL-GAN framework structure. BSL-GAN realizes the conversion between 1.5T MR images and 3T MR images.
Figure 2. 1.5T MRIs and 3T MRIs.
Figure 3. (A) Images generated without the self-supervision constraint; (B) images generated under the self-supervision constraint; and (C) 1.5T MRI (ground truth).
Self-supervised learning and cooperative learning performance indices.
| Metric | | | | |
|---|---|---|---|---|
| MSE | 171.69 ± 20 | 80.58 ± 20 | 117.16 ± 20 | 107.68 ± 20 |
| PSNR (dB) | 25.78 ± 2 | 29.07 ± 1 | 27.44 ± 2 | 27.81 ± 2 |
| SSIM | 0.80 ± 0.03 | 0.92 ± 0.01 | 0.87 ± 0.03 | 0.91 ± 0.02 |
| FSIM | 0.87 ± 0.03 | 0.93 ± 0.02 | 0.89 ± 0.03 | 0.91 ± 0.02 |
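As a consistency check, the MSE and PSNR rows above agree with the standard 8-bit definition PSNR = 10·log10(255²/MSE); a short, self-contained Python verification (no project code assumed):

```python
import math

def psnr_from_mse(mse: float, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio (dB) for a given mean squared error,
    assuming 8-bit intensities (maximum value 255)."""
    return 10.0 * math.log10(max_val ** 2 / mse)

# MSE values from the table; each reproduces the tabulated PSNR.
for mse in (171.69, 80.58, 117.16, 107.68):
    print(f"MSE {mse:7.2f} -> PSNR {psnr_from_mse(mse):.2f} dB")
# MSE  171.69 -> PSNR 25.78 dB
# MSE   80.58 -> PSNR 29.07 dB
# MSE  117.16 -> PSNR 27.44 dB
# MSE  107.68 -> PSNR 27.81 dB
```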
Figure 4. The blue line represents the generator loss, the yellow line the discriminator loss, and the green line the reconstruction loss.
Figure 5. 1.5T MRI to 3T MRI conversion task.
Comparison of self-supervision constraint performance under different models: scores for single-branch versus multi-branch input in the task method with the auxiliary network.
| Metric | | | |
|---|---|---|---|
| SSIM | 0.92 ± 0.01 | 0.89 ± 0.02 | 0.86 ± 0.02 |
| FSIM | 0.95 ± 0.02 | 0.93 ± 0.02 | 0.90 ± 0.02 |
Figure 6. Images synthesized for the missing modality and the experimental results of other methods.
SSIM and FSIM scores of the proposed method compared with those of Pix2Pix and StarGAN.
| Method | SSIM | FSIM | SSIM | FSIM | SSIM | FSIM |
|---|---|---|---|---|---|---|
| BSL-GAN | 0.94 ± 0.01 | 0.95 ± 0.01 | 0.90 ± 0.01 | 0.92 ± 0.01 | 0.91 ± 0.01 | 0.93 ± 0.01 |
| Pix2Pix | 0.90 ± 0.02 | 0.92 ± 0.02 | 0.89 ± 0.02 | 0.91 ± 0.02 | 0.88 ± 0.02 | 0.91 ± 0.02 |
| StarGAN | 0.83 ± 0.03 | 0.88 ± 0.03 | 0.87 ± 0.03 | 0.91 ± 0.03 | 0.86 ± 0.03 | 0.89 ± 0.03 |
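For reproducing scores like those above, SSIM can be computed directly from an image pair with scikit-image; the sketch below uses placeholder file names (scikit-image has no FSIM implementation, so that metric would need separate code):

```python
import numpy as np
from skimage.io import imread
from skimage.metrics import structural_similarity

# Placeholder file names: a synthesized 3T slice and its ground truth.
synth = imread("synthesized_3t.png", as_gray=True).astype(np.float64)
truth = imread("ground_truth_3t.png", as_gray=True).astype(np.float64)

# data_range must match the intensity scale of the inputs.
score = structural_similarity(synth, truth,
                              data_range=truth.max() - truth.min())
print(f"SSIM: {score:.2f}")
```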