
Convolutional neural networks for skull-stripping in brain MR imaging using silver standard masks.

Oeslle Lucena1, Roberto Souza2, Letícia Rittner3, Richard Frayne2, Roberto Lotufo3.   

Abstract

Manual annotation is considered the "gold standard" in medical image analysis. However, medical imaging datasets that include expert manual segmentations are scarce, because this step is time-consuming and therefore expensive. Moreover, data-driven approaches are most often trained on single-rater manual annotations, biasing the network toward that single expert. In this work, we propose a CNN for brain extraction in magnetic resonance (MR) imaging that is fully trained with what we refer to as "silver standard" masks, thereby eliminating the cost associated with manual annotation. Silver standard masks are generated by forming the consensus of a set of eight public, non-deep-learning-based brain extraction methods using the Simultaneous Truth and Performance Level Estimation (STAPLE) algorithm. Our method consists of (1) developing a dataset with silver standard masks as input, and (2) implementing a tri-planar method using parallel 2D U-Net-based convolutional neural networks (CNNs), referred to as CONSNet. This term refers to our integrated approach, i.e., training with silver standard masks and using a 2D U-Net-based architecture. We conducted our analysis on three public datasets: the Calgary-Campinas-359 (CC-359), the LONI Probabilistic Brain Atlas (LPBA40), and the Open Access Series of Imaging Studies (OASIS). Five performance metrics were used in our experiments: Dice coefficient, sensitivity, specificity, Hausdorff distance, and symmetric surface-to-surface mean distance. Our results show that we outperform (i.e., achieve larger Dice coefficients than) current state-of-the-art skull-stripping methods without using gold standard annotations in the CNN training stage. CONSNet is the first deep learning approach fully trained on silver standard data and is, thus, more generalizable.
By using these masks, we eliminate the cost of manual annotation, decrease inter-/intra-rater variability, and avoid CNN segmentation overfitting toward one specific manual annotation guideline, which can occur when gold standard masks are used. Moreover, once trained, our method takes a few seconds to process a typical brain image volume on a modern high-end GPU, whereas many of the other competitive methods have processing times on the order of minutes.
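As a rough illustration of the consensus step described above, the sketch below implements a minimal binary STAPLE-style EM over flattened voxel arrays: it alternates between estimating the posterior probability that each voxel is brain (E-step) and re-estimating each method's sensitivity and specificity (M-step). This is a simplification under stated assumptions (fixed foreground prior taken from the data mean, single-threshold output); variable names are mine, and it is not the paper's implementation.

```python
import numpy as np

def staple_binary(masks, max_iters=50, tol=1e-6):
    """Estimate a consensus binary mask from several raters' masks.

    masks: (R, N) array-like of 0/1 labels from R methods over N voxels.
    Returns (consensus, sensitivities, specificities).
    """
    D = np.asarray(masks, dtype=float)
    R, N = D.shape
    # Initialize each method's assumed sensitivity p and specificity q.
    p = np.full(R, 0.9)
    q = np.full(R, 0.9)
    # Fixed foreground prior (a simplifying assumption).
    prior = D.mean()
    W = np.full(N, prior)
    for _ in range(max_iters):
        # E-step: posterior probability each voxel is foreground,
        # combining all methods' decisions with their current p/q.
        a = prior * np.prod(np.where(D == 1, p[:, None], 1 - p[:, None]), axis=0)
        b = (1 - prior) * np.prod(np.where(D == 0, q[:, None], 1 - q[:, None]), axis=0)
        W_new = a / (a + b + 1e-12)
        # M-step: update each method's performance parameters.
        p = (D * W_new).sum(axis=1) / (W_new.sum() + 1e-12)
        q = ((1 - D) * (1 - W_new)).sum(axis=1) / ((1 - W_new).sum() + 1e-12)
        converged = np.max(np.abs(W_new - W)) < tol
        W = W_new
        if converged:
            break
    return (W >= 0.5).astype(np.uint8), p, q
```

On a toy example with two accurate raters and one noisy one, the consensus follows the reliable majority; in the paper's setting, eight skull-stripping outputs per volume would play the role of the raters.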
Copyright © 2019 Elsevier B.V. All rights reserved.

Keywords:  Convolutional neural network (CNN); Data augmentation; Silver standard masks; Skull-stripping

Year:  2019        PMID: 31521252     DOI: 10.1016/j.artmed.2019.06.008

Source DB:  PubMed          Journal:  Artif Intell Med        ISSN: 0933-3657            Impact factor:   5.326


Related articles:  4 in total

1.  State-of-the-Art Traditional to the Machine- and Deep-Learning-Based Skull Stripping Techniques, Models, and Algorithms.

Authors:  Anam Fatima; Ahmad Raza Shahid; Basit Raza; Tahir Mustafa Madni; Uzair Iqbal Janjua
Journal:  J Digit Imaging       Date:  2020-12       Impact factor: 4.056

2.  Hyperconnected Openings Codified in a Max Tree Structure: An Application for Skull-Stripping in Brain MRI T1.

Authors:  Carlos Paredes-Orta; Jorge Domingo Mendiola-Santibañez; Danjela Ibrahimi; Juvenal Rodríguez-Reséndiz; Germán Díaz-Florez; Carlos Alberto Olvera-Olvera
Journal:  Sensors (Basel)       Date:  2022-02-11       Impact factor: 3.576

3.  SynthStrip: skull-stripping for any brain image.

Authors:  Andrew Hoopes; Jocelyn S Mora; Adrian V Dalca; Bruce Fischl; Malte Hoffmann
Journal:  Neuroimage       Date:  2022-07-13       Impact factor: 7.400

4.  A domain adaptation benchmark for T1-weighted brain magnetic resonance image segmentation.

Authors:  Parisa Saat; Nikita Nogovitsyn; Muhammad Yusuf Hassan; Muhammad Athar Ganaie; Roberto Souza; Hadi Hemmati
Journal:  Front Neuroinform       Date:  2022-09-23       Impact factor: 3.739

Beijing Coyote Bioscience Co., Ltd. © 2022-2023.