Jonas Denck1,2,3 (jonas.denck@gmail.com), Jens Guehring3, Andreas Maier1, Eva Rothgang2. 1. Pattern Recognition Lab, Department of Computer Science, Friedrich-Alexander Universität Erlangen-Nürnberg, Erlangen, Germany. 2. Department of Industrial Engineering and Health, Technical University of Applied Sciences Amberg-Weiden, Weiden, Germany. 3. Siemens Healthineers, Erlangen, Germany.
Abstract
PURPOSE: A magnetic resonance imaging (MRI) exam typically consists of several sequences that yield different image contrasts. Each sequence is parameterized through multiple acquisition parameters that influence image contrast, signal-to-noise ratio, acquisition time, and/or resolution. Depending on the clinical indication, different contrasts are required by the radiologist to make a diagnosis. As MR sequence acquisition is time-consuming and acquired images may be corrupted due to motion, a method to synthesize MR images with adjustable contrast properties is required. METHODS: Therefore, we trained an image-to-image generative adversarial network conditioned on the MR acquisition parameters repetition time and echo time. Our approach is motivated by style transfer networks, where, in our case, the "style" of an image is given explicitly, as it is determined by the MR acquisition parameters our network is conditioned on. RESULTS: This enables us to synthesize MR images with adjustable image contrast. We evaluated our approach on the fastMRI dataset, a large set of publicly available MR knee images, and show that our method outperforms a benchmark pix2pix approach in the translation of non-fat-saturated MR images to fat-saturated images. Our approach yields a peak signal-to-noise ratio of 24.48 and a structural similarity of 0.66, significantly surpassing the pix2pix benchmark model. CONCLUSION: Our model is the first that enables fine-tuned contrast synthesis, which can be used to synthesize missing MR contrasts or as a data augmentation technique for AI training in MRI. It can also be used as a basis for other image-to-image translation tasks within medical imaging, e.g., to enhance intermodality translation (MRI → CT) or 7 T image synthesis from 3 T MR images.
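The abstract describes conditioning an image-to-image generator on the acquisition parameters repetition time (TR) and echo time (TE). One common way such conditioning is implemented in style-transfer-inspired networks is feature-wise linear modulation, where the condition vector scales and shifts the generator's intermediate feature maps. The sketch below illustrates this idea only; the function and weight names are hypothetical and do not reflect the paper's actual architecture.

```python
import numpy as np

def film_condition(features, tr, te, w_gamma, w_beta):
    """Scale and shift feature maps based on the (TR, TE) condition vector.

    features: (C, H, W) feature maps inside the generator
    tr, te:   acquisition parameters (assumed normalized, e.g., to [0, 1])
    w_gamma, w_beta: (C, 2) learned projection matrices (random here)
    """
    cond = np.array([tr, te])        # condition vector from acquisition parameters
    gamma = w_gamma @ cond           # per-channel scale, shape (C,)
    beta = w_beta @ cond             # per-channel shift, shape (C,)
    # Broadcast the per-channel scale/shift over the spatial dimensions
    return gamma[:, None, None] * features + beta[:, None, None]

# Toy usage: 4 feature channels of size 8x8, random "learned" weights
rng = np.random.default_rng(0)
feats = rng.standard_normal((4, 8, 8))
out = film_condition(feats, tr=0.5, te=0.1,
                     w_gamma=rng.standard_normal((4, 2)),
                     w_beta=rng.standard_normal((4, 2)))
print(out.shape)  # → (4, 8, 8)
```

Because the modulation depends continuously on TR and TE, varying these inputs at inference time yields the adjustable image contrast the abstract refers to.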
Keywords:
Deep learning; Generative adversarial networks; Image synthesis; Magnetic resonance imaging