PURPOSE: Recently, machine learning has outperformed established tools for automated segmentation in medical imaging. However, segmentation of the cardiac chambers remains challenging because of the variety of contrast agent injection protocols used in clinical practice, which induce contrast disparities between cavities. Training a generalist network therefore requires large training datasets representative of these protocols. Segmentation of unenhanced CT scans is further hindered by the difficulty of obtaining ground truths from these images. Newly available spectral CT scanners enable innovative image reconstructions such as virtual non-contrast (VNC) imaging, which mimics a non-contrasted conventional CT study from a contrasted scan. Recent publications have demonstrated that networks can be trained using VNC images to segment both contrasted and unenhanced conventional CT scans, reducing annotated-data requirements and the need for annotations on unenhanced scans. We present an extensive evaluation of this claim. METHOD: We train a 3D multi-label heart segmentation network with (HU-VNC) and without (HUonly) VNC images as augmentation, using training datasets of decreasing size (114, 76, 57, 38, 29, and 19 patients). At each step, both networks are tested on a multi-vendor, multi-center dataset of 122 patients covering different protocols: pulmonary embolism (PE), chest-abdomen-pelvis (CAP), heart CT angiography (CTA), and true non-contrast (TNC) scans. An in-depth comparison of the resulting Dice coefficients and distance metrics is performed for the networks trained on the largest dataset. RESULTS: HU-VNC trained on 57 patients significantly outperforms HUonly trained on 114 patients on CAP and TNC scans (mean Dice coefficients of 0.881/0.835 and 0.882/0.416, respectively). When trained on the largest dataset, significant improvements in all labels are noted for TNC and CAP scans (mean Dice coefficients of 0.882/0.416 and 0.891/0.835, respectively).
CONCLUSION: Adding VNC images as training augmentation enables the network to segment unenhanced scans and improves segmentation across other imaging protocols, even with a reduced training dataset.
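For reference, the per-label Dice coefficient used throughout the evaluation can be computed as in the following minimal NumPy sketch; the function name and the toy label masks are illustrative assumptions, not taken from the study's code.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, gt: np.ndarray, label: int) -> float:
    """Dice overlap for one label between a predicted and a ground-truth mask.

    Dice = 2 * |P ∩ G| / (|P| + |G|), where P and G are the voxel sets
    carrying `label` in the prediction and the ground truth.
    """
    p = (pred == label)
    g = (gt == label)
    denom = p.sum() + g.sum()
    if denom == 0:
        # Both masks empty for this label: treat as perfect agreement.
        return 1.0
    return 2.0 * np.logical_and(p, g).sum() / denom

# Toy 2D example with two labels (e.g., two cardiac chambers):
pred = np.array([[1, 1, 0],
                 [0, 2, 2]])
gt   = np.array([[1, 0, 0],
                 [0, 2, 2]])
print(dice_coefficient(pred, gt, 1))  # 2*1/(2+1) ≈ 0.667
print(dice_coefficient(pred, gt, 2))  # 2*2/(2+2) = 1.0
```

A multi-label mean (as reported per protocol in the abstract) would average this value over all chamber labels.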