Xiaoxia Zhang1,2, Quentin Duchemin3, Kangning Liu4, Cem Gultekin5, Sebastian Flassbeck1,2, Carlos Fernandez-Granda4,5, Jakob Assländer1,2. 1. Center for Biomedical Imaging, Department of Radiology, New York University School of Medicine, New York City, New York, USA. 2. Center for Advanced Imaging Innovation and Research (CAI2R), Department of Radiology, New York University School of Medicine, New York City, New York, USA. 3. LAMA, Univ Gustave Eiffel, Univ Paris Est Creteil, Marne-la-Vallée, France. 4. Center for Data Science, New York University Grossman School of Medicine, New York City, New York, USA. 5. Courant Institute of Mathematical Sciences, New York University, New York City, New York, USA.
Abstract

PURPOSE: To improve the performance of neural networks for parameter estimation in quantitative MRI, in particular when the noise propagation varies throughout the space of biophysical parameters.

THEORY AND METHODS: A theoretically well-founded loss function is proposed that normalizes the squared error of each estimate with the respective Cramér-Rao bound (CRB), a theoretical lower bound for the variance of an unbiased estimator. This avoids a dominance of hard-to-estimate parameters and areas in parameter space, which are often of little interest. The normalization with the corresponding CRB balances the large errors of fundamentally more noisy estimates and the small errors of fundamentally less noisy estimates, allowing the network to better learn to estimate the latter. Further, the proposed loss function provides an absolute evaluation metric for performance: a network has an average loss of 1 if it is a maximally efficient unbiased estimator, which can be considered the ideal performance. The performance gain with the proposed loss function is demonstrated using the example of an eight-parameter magnetization transfer model that is fitted to phantom and in vivo data.

RESULTS: Networks trained with the proposed loss function perform close to optimal, that is, their loss converges to approximately 1, and their performance is superior to networks trained with the standard mean-squared error (MSE). The proposed loss function reduces the bias of the estimates compared to the MSE loss, and improves the match of the noise variance to the CRB. This performance gain translates to in vivo maps that align better with the literature.

CONCLUSION: Normalizing the squared error with the CRB during the training of neural networks improves their performance in estimating biophysical parameters.
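The normalization described above can be sketched in a few lines: each parameter's squared error is divided by its CRB, so that parameters with fundamentally different noise propagation contribute on a comparable scale, and an efficient unbiased estimator attains an expected loss of 1. This is a minimal, self-contained NumPy illustration, not the authors' implementation; all names and the toy data are assumptions.

```python
import numpy as np

def crb_normalized_loss(theta_hat, theta_true, crb):
    """CRB-normalized squared-error loss (illustrative sketch).

    theta_hat, theta_true: arrays of shape (n_samples, n_params).
    crb: per-parameter Cramér-Rao bounds, shape (n_params,).
    Dividing each squared error by its CRB keeps hard-to-estimate
    parameters from dominating the loss; an efficient unbiased
    estimator has an expected loss of 1.
    """
    return np.mean((theta_hat - theta_true) ** 2 / crb)

# Toy check: an estimator whose errors are unbiased with variance equal
# to the CRB (i.e., a maximally efficient unbiased estimator) should
# yield a loss close to 1, regardless of the parameters' noise scales.
rng = np.random.default_rng(0)
crb = np.array([1e-2, 4.0])  # two parameters with very different noise
theta = rng.normal(size=(100_000, 2))
noise = rng.normal(size=theta.shape) * np.sqrt(crb)
loss = crb_normalized_loss(theta + noise, theta, crb)
print(f"normalized loss: {loss:.3f}")  # close to 1 for an efficient estimator
```

Note that the same errors fed to a plain MSE would be dominated almost entirely by the second parameter (CRB of 4.0), which is the imbalance the normalization is designed to remove.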