| Literature DB >> 30808574 |
Danilo Bzdok, John P A Ioannidis.
Abstract
Recent decades have seen dramatic progress in brain research. These advances were often buttressed by probing single variables to make circumscribed discoveries, typically through null hypothesis significance testing. New ways of generating massive data have fueled tension between the traditional methodology, which is used to infer statistically relevant effects in carefully chosen variables, and pattern-learning algorithms, which are used to identify predictive signatures by searching through abundant information. In this article we detail the antagonistic philosophies behind two quantitative approaches: certifying robust effects in understandable variables, and evaluating how accurately a built model can forecast future outcomes. We discourage choosing analytical tools via categories such as 'statistics' or 'machine learning'. Instead, to establish reproducible knowledge about the brain, we advocate prioritizing tools in view of the core motivation of each quantitative analysis: aiming towards mechanistic insight or optimizing predictive accuracy.
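The contrast the abstract draws can be made concrete with a toy sketch: classical inference certifies a group effect in one understandable variable (here, a Welch t statistic computed by hand), while predictive modeling judges a fitted model by its accuracy on held-out data. All names, the decision rule, and the simulated data below are illustrative assumptions, not material from the paper.

```python
# Hedged sketch of the two analysis goals contrasted in the abstract.
# The toy data and threshold classifier are illustrative assumptions.
import random
import statistics

random.seed(0)

# Toy data: one measured variable in two groups (e.g., controls vs patients).
controls = [random.gauss(0.0, 1.0) for _ in range(100)]
patients = [random.gauss(0.5, 1.0) for _ in range(100)]

# --- Classical inference: certify an effect in a chosen variable ---
# Welch's t statistic for the group difference; significance would then be
# judged against a t distribution (only the statistic is computed here).
def welch_t(a, b):
    var_a, var_b = statistics.variance(a), statistics.variance(b)
    standard_error = (var_a / len(a) + var_b / len(b)) ** 0.5
    return (statistics.mean(b) - statistics.mean(a)) / standard_error

t_stat = welch_t(controls, patients)

# --- Predictive modeling: score a model on unseen observations ---
# Fit a simple threshold rule on a training split, then measure how
# accurately it forecasts group membership on a held-out test split.
data = [(x, 0) for x in controls] + [(x, 1) for x in patients]
random.shuffle(data)
train, test = data[:150], data[150:]
threshold = statistics.mean(x for x, _ in train)  # midpoint decision rule
accuracy = sum((x > threshold) == bool(y) for x, y in test) / len(test)
```

The point of the sketch is that the two numbers answer different questions: `t_stat` speaks to whether an effect exists in a carefully chosen variable, while `accuracy` speaks to how well the fitted rule forecasts future outcomes.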
Keywords: big-data analytics; black box models; data science; deep learning; precision medicine; reproducibility
Year: 2019 PMID: 30808574 DOI: 10.1016/j.tins.2019.02.001
Source DB: PubMed Journal: Trends Neurosci ISSN: 0166-2236 Impact factor: 13.837