Nikhil Bhagwat1, Amadou Barry2, Erin W Dickie3, Shawn T Brown1, Gabriel A Devenyi4,5, Koji Hatano1, Elizabeth DuPre1, Alain Dagher1, Mallar Chakravarty4,5,6, Celia M T Greenwood2,7,8, Bratislav Misic1, David N Kennedy9, Jean-Baptiste Poline1,7. 1. Montreal Neurological Institute & Hospital, McGill University, Neurology and Neurosurgery, 3801 University Street, Montreal, QC, H3A 2B4, Canada. 2. Lady Davis Institute for Medical Research, McGill University, Montreal, QC, Canada. 3. Kimel Family Translational Imaging-Genetics Research Lab, Centre for Addiction and Mental Health, Toronto, ON, Canada. 4. Computational Brain Anatomy Laboratory, Douglas Mental Health Institute, Verdun, QC, Canada. 5. Department of Psychiatry, McGill University, Montreal, QC, Canada. 6. Department of Biomedical Engineering, McGill University, Montreal, QC, Canada. 7. Ludmer Centre for Neuroinformatics & Mental Health, McGill University, Montreal, QC, Canada. 8. Gerald Bronfman Department of Oncology; Department of Epidemiology, Biostatistics & Occupational Health; Department of Human Genetics, McGill University, Montreal, QC, Canada. 9. Child and Adolescent Neurodevelopment Initiative, University of Massachusetts, Worcester, MA, USA.
Abstract
BACKGROUND: The choice of preprocessing pipeline introduces variability into neuroimaging analyses that affects the reproducibility of scientific findings. Features derived from structural and functional MRI data are sensitive to algorithmic and parametric differences among preprocessing tasks such as image normalization, registration, and segmentation. It is therefore critical to understand, and potentially mitigate, the cumulative biases of pipelines in order to distinguish biological effects from methodological variance. METHODS: Here we use an open structural MRI dataset (ABIDE), supplemented with the Human Connectome Project, to highlight the impact of pipeline selection on cortical thickness measures. Specifically, we investigate the effect of (i) software tool (e.g., ANTs, CIVET, FreeSurfer), (ii) cortical parcellation (Desikan-Killiany-Tourville, Destrieux, Glasser), and (iii) quality control procedure (manual, automatic). We divide our statistical analyses by (i) method type, i.e., task-free (unsupervised) versus task-driven (supervised), and (ii) inference objective, i.e., neurobiological group differences versus individual prediction. RESULTS: Software, parcellation, and quality control all significantly affect task-driven neurobiological inference. In addition, software selection strongly affects both group-level and individual-level task-free analyses, and quality control alters performance on individual-centric prediction tasks. CONCLUSIONS: This comparative performance evaluation partially explains the source of inconsistencies in neuroimaging findings. Furthermore, it underscores the need for more rigorous scientific workflows and accessible informatics resources to replicate and compare preprocessing pipelines, addressing the compounding problem of reproducibility in the age of large-scale, data-driven computational neuroscience.
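The pipeline-sensitivity problem described in the abstract can be illustrated with a small synthetic sketch. This is not the authors' actual analysis: the data, the additive "software bias", and the per-parcel Welch t-test are all hypothetical stand-ins meant only to show how the same group comparison can shift when a pipeline introduces a systematic offset in estimated cortical thickness.

```python
# Hypothetical sketch: how a pipeline-dependent bias in cortical thickness
# estimates can change group-difference statistics. All values are synthetic.
import random
import statistics

random.seed(0)

def simulate_thickness(n_subjects, n_parcels, shift=0.0, noise=0.1):
    """Synthetic thickness matrix (subjects x parcels), in mm."""
    return [[2.5 + shift + random.gauss(0, noise) for _ in range(n_parcels)]
            for _ in range(n_subjects)]

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    return (ma - mb) / ((va / len(a) + vb / len(b)) ** 0.5)

n_parcels = 10
controls = simulate_thickness(30, n_parcels)
# A true group effect: patients are thinner by 0.15 mm on average.
patients = simulate_thickness(30, n_parcels, shift=-0.15)

# Per-parcel group-difference statistics under "pipeline A" (no bias).
t_stats = [welch_t([s[p] for s in controls], [s[p] for s in patients])
           for p in range(n_parcels)]

# "Pipeline B" adds a systematic offset to one group only (e.g., a
# segmentation difference), shrinking the apparent effect.
patients_b = [[v + 0.10 for v in row] for row in patients]
t_stats_b = [welch_t([s[p] for s in controls], [s[p] for s in patients_b])
             for p in range(n_parcels)]

print("pipeline A mean t:", statistics.mean(t_stats))
print("pipeline B mean t:", statistics.mean(t_stats_b))
```

Under these assumptions, the biased pipeline systematically attenuates the group effect, which is the kind of software-dependent shift in inference the study quantifies at scale.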