| Literature DB >> 36212702 |
Xiaofeng Liu, Fangxu Xing, Maureen Stone, Jerry L Prince, Jangwon Kim, Georges El Fakhri, Jonghye Woo.
Abstract
Multimodal representation learning using visual movements from cine magnetic resonance imaging (MRI) and their acoustics has shown great potential to learn a shared representation and to predict one modality from another. Here, we propose a new synthesis framework to translate cine MRI sequences into spectrograms with a limited dataset size. Our framework hinges on a novel fully convolutional heterogeneous translator, with a 3D CNN encoder for efficient sequence encoding and a 2D transpose convolution decoder. In addition, pairwise correlation between samples of the same spoken word is exploited through a latent-space representation disentanglement scheme. Furthermore, an adversarial training approach with generative adversarial networks is incorporated to enhance the realism of the generated spectrograms. Our experimental results, carried out with a total of 63 cine MRI sequences alongside speech acoustics, show that our framework improves synthesis accuracy compared with competing methods. Our framework thus shows potential to aid in better understanding the relationship between the two modalities.
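The heterogeneous translator described in the abstract maps a 3D input (a cine MRI image sequence over time) to a 2D output (a spectrogram). A minimal sketch of that shape flow, using numpy stand-ins (temporal pooling for the 3D CNN encoder, nearest-neighbor upsampling for the 2D transpose-conv decoder; these layer choices are illustrative assumptions, not the authors' actual network):

```python
import numpy as np

def encode_3d(seq):
    """Toy 3D 'encoder' stand-in: average over the time axis, then
    downsample space by 2, collapsing (T, H, W) -> (H//2, W//2)."""
    temporally_pooled = seq.mean(axis=0)      # (H, W): time axis collapsed
    return temporally_pooled[::2, ::2]        # coarse spatial downsampling

def decode_2d(latent, out_shape):
    """Toy 2D 'decoder' stand-in: nearest-neighbor upsampling from the
    latent map to a target (freq, time) spectrogram shape."""
    fh = out_shape[0] // latent.shape[0]      # integer upsampling factors
    fw = out_shape[1] // latent.shape[1]
    return np.repeat(np.repeat(latent, fh, axis=0), fw, axis=1)

# A hypothetical 26-frame cine MRI sequence of 64x64 images
seq = np.random.rand(26, 64, 64)
latent = encode_3d(seq)                       # 2D latent map, shape (32, 32)
spec = decode_2d(latent, (128, 64))           # 2D "spectrogram", shape (128, 64)
print(latent.shape, spec.shape)
```

The point of the sketch is only the dimensionality change: the encoder consumes a volume with a temporal axis while the decoder emits a purely 2D time-frequency image, which is what makes the translator "heterogeneous".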
Keywords: Encoder and Decoder; GAN; Magnetic Resonance Imaging; Video to Spectrogram Synthesis
Year: 2022 PMID: 36212702 PMCID: PMC9544268 DOI: 10.1109/icassp43922.2022.9746381
Source DB: PubMed Journal: Proc IEEE Int Conf Acoust Speech Signal Process ISSN: 1520-6149