
On the interpretation of weight vectors of linear models in multivariate neuroimaging.

Stefan Haufe, Frank Meinecke, Kai Görgen, Sven Dähne, John-Dylan Haynes, Benjamin Blankertz, Felix Bießmann.

Abstract

The increase in spatiotemporal resolution of neuroimaging devices is accompanied by a trend towards more powerful multivariate analysis methods. Often it is desired to interpret the outcome of these methods with respect to the cognitive processes under study. Here we discuss which methods allow for such interpretations, and provide guidelines for choosing an appropriate analysis for a given experimental goal: For a surgeon who needs to decide where to remove brain tissue it is most important to determine the origin of cognitive functions and associated neural processes. In contrast, when communicating with paralyzed or comatose patients via brain-computer interfaces, it is most important to accurately extract the neural processes specific to a certain mental state. These equally important but complementary objectives require different analysis methods.

Determining the origin of neural processes in time or space from the parameters of a data-driven model requires what we call a forward model of the data; such a model explains how the measured data were generated from the neural sources. Examples are general linear models (GLMs). Methods for the extraction of neural information from data can be considered as backward models, as they attempt to reverse the data generating process. Examples are multivariate classifiers.

Here we demonstrate that the parameters of forward models are neurophysiologically interpretable in the sense that significant nonzero weights are only observed at channels the activity of which is related to the brain process under study. In contrast, the interpretation of backward model parameters can lead to wrong conclusions regarding the spatial or temporal origin of the neural signals of interest, since significant nonzero weights may also be observed at channels the activity of which is statistically independent of the brain process under study. As a remedy for the linear case, we propose a procedure for transforming backward models into forward models. This procedure enables the neurophysiological interpretation of the parameters of linear backward models. We hope that this work raises awareness for an often encountered problem and provides a theoretical basis for conducting better interpretable multivariate neuroimaging analyses.
Copyright © 2013 The Authors. Published by Elsevier Inc. All rights reserved.
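
The pitfall described in the abstract, and the proposed backward-to-forward transformation, can be illustrated with a toy simulation. This is a minimal sketch, not code from the paper: the two-channel mixing, the filter weights, and the variable names are all assumptions made up for illustration; the pattern formula used (activation pattern proportional to the data covariance times the filter, scaled by the variance of the extracted signal) is the linear-case transformation the abstract refers to.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: one source of interest s, one distractor d
# (independent of s). Channel 1 records s + d, channel 2 records d alone.
n = 10_000
s = rng.standard_normal(n)           # signal of interest
d = rng.standard_normal(n)           # distractor, independent of s
X = np.column_stack([s + d, d])      # forward (generative) model of the data

# Backward model: the linear filter w = (1, -1) recovers s exactly,
# s_hat = X @ w. Note its large nonzero weight on channel 2, even though
# channel 2 carries no signal-related activity -- interpreting filter
# weights spatially would mislocalize the source.
w = np.array([1.0, -1.0])
s_hat = X @ w

# Transformation to a forward model (activation pattern) for a single
# filter: a = cov(X) @ w / var(s_hat)
a = np.cov(X.T) @ w / np.var(s_hat)
print(a)  # approx. [1, 0]: only channel 1 contains signal-related activity
```

The recovered pattern matches the generative coefficients of s (channel 1 only), while the filter itself does not, which is exactly the distinction the paper draws between extraction filters and activation patterns.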

Keywords:  Activation patterns; Decoding; EEG; Encoding; Extraction filters; Forward/backward models; Generative/discriminative models; Interpretability; Multivariate; Neuroimaging; Regularization; Sparsity; Univariate; fMRI

Year:  2013        PMID: 24239590     DOI: 10.1016/j.neuroimage.2013.10.067

Source DB:  PubMed          Journal:  Neuroimage        ISSN: 1053-8119            Impact factor:   6.556


Related articles: 292 in total

1.  Resolving Ambiguities of MVPA Using Explicit Models of Representation.

Authors:  Thomas Naselaris; Kendrick N Kay
Journal:  Trends Cogn Sci       Date:  2015-10       Impact factor: 20.229

2.  Modulatory effects of ketamine, risperidone and lamotrigine on resting brain perfusion in healthy human subjects.

Authors:  Sergey Shcherbinin; Orla Doyle; Fernando O Zelaya; Sara de Simoni; Mitul A Mehta; Adam J Schwarz
Journal:  Psychopharmacology (Berl)       Date:  2015-07-31       Impact factor: 4.530

3.  A Distributed Neural Code in the Dentate Gyrus and in CA1.

Authors:  Fabio Stefanini; Lyudmila Kushnir; Jessica C Jimenez; Joshua H Jennings; Nicholas I Woods; Garret D Stuber; Mazen A Kheirbek; René Hen; Stefano Fusi
Journal:  Neuron       Date:  2020-06-09       Impact factor: 17.173

4.  Relevant feature set estimation with a knock-out strategy and random forests.

Authors:  Melanie Ganz; Douglas N Greve; Bruce Fischl; Ender Konukoglu
Journal:  Neuroimage       Date:  2015-08-10       Impact factor: 6.556

5.  Role of deep learning in infant brain MRI analysis. (Review)

Authors:  Mahmoud Mostapha; Martin Styner
Journal:  Magn Reson Imaging       Date:  2019-06-20       Impact factor: 2.546

6.  Neural portraits of perception: reconstructing face images from evoked brain activity.

Authors:  Alan S Cowen; Marvin M Chun; Brice A Kuhl
Journal:  Neuroimage       Date:  2014-03-17       Impact factor: 6.556

7.  From Local Explanations to Global Understanding with Explainable AI for Trees.

Authors:  Scott M Lundberg; Gabriel Erion; Hugh Chen; Alex DeGrave; Jordan M Prutkin; Bala Nair; Ronit Katz; Jonathan Himmelfarb; Nisha Bansal; Su-In Lee
Journal:  Nat Mach Intell       Date:  2020-01-17

8.  Neuroimage-Based Consciousness Evaluation of Patients with Secondary Doubtful Hydrocephalus Before and After Lumbar Drainage.

Authors:  Jiayu Huo; Zengxin Qi; Sen Chen; Qian Wang; Xuehai Wu; Di Zang; Tanikawa Hiromi; Jiaxing Tan; Lichi Zhang; Weijun Tang; Dinggang Shen
Journal:  Neurosci Bull       Date:  2020-07-01       Impact factor: 5.203

9.  Individual prediction of chronic motor outcome in the acute post-stroke stage: Behavioral parameters versus functional imaging.

Authors:  Anne K Rehme; Lukas J Volz; Delia-Lisa Feis; Simon B Eickhoff; Gereon R Fink; Christian Grefkes
Journal:  Hum Brain Mapp       Date:  2015-08-19       Impact factor: 5.038

10.  Linking signal detection theory and encoding models to reveal independent neural representations from neuroimaging data.

Authors:  Fabian A Soto; Lauren E Vucovich; F Gregory Ashby
Journal:  PLoS Comput Biol       Date:  2018-10-01       Impact factor: 4.475

