| Literature DB >> 28082857 |
Antonio Kolossa, Bruno Kopp.
Abstract
The aim of this study was to analyze how measurement error affects the validity of modeling studies in computational neuroscience. A synthetic validity test was created using simulated P300 event-related potentials as an example. The model space comprised four computational models of single-trial P300 amplitude fluctuations which differed in terms of complexity and dependency. The single-trial fluctuation of simulated P300 amplitudes was computed on the basis of one of the models, at various levels of measurement error and at various numbers of data points. Bayesian model selection was performed based on exceedance probabilities. At very low numbers of data points, the least complex model generally outperformed the data-generating model. Invalid model identification also occurred at low levels of data quality and under low numbers of data points if the winning model's predictors were closely correlated with the predictors from the data-generating model. Given sufficient data quality and numbers of data points, the data-generating model could be correctly identified, even against models which were very similar to the data-generating model. Thus, a number of variables affects the validity of computational modeling studies, and data quality and numbers of data points are among the main factors relevant to the issue. Further, the nature of the model space (i.e., model complexity, model dependency) should not be neglected. This study provided quantitative results which show the importance of ensuring the validity of computational modeling via adequately prepared studies. The accomplishment of synthetic validity tests is recommended for future applications. Beyond that, we propose to render the demonstration of sufficient validity via adequate simulations mandatory to computational modeling studies.Entities:
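One cell of such a synthetic validity test can be illustrated with a minimal Python sketch. The predictor matrices, the regression-style observation model, the SNR-based noise scaling, and the BIC approximation to the log model evidence below are illustrative assumptions, not the authors' implementation:

    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_and_score(X_gen, candidate_Xs, beta, snr_db, n_trials):
        """Simulate single-trial amplitudes from the data-generating model
        at a given SNR and return a BIC score for each candidate model."""
        signal = X_gen[:n_trials] @ beta
        # scale Gaussian noise so that 10*log10(var(signal)/var(noise)) = snr_db
        noise_sd = signal.std() / (10 ** (snr_db / 20))
        y = signal + rng.normal(0.0, noise_sd, size=n_trials)

        bics = []
        for X in candidate_Xs:
            X = X[:n_trials]
            coef, *_ = np.linalg.lstsq(X, y, rcond=None)
            rss = ((y - X @ coef) ** 2).sum()
            k = X.shape[1]
            bics.append(n_trials * np.log(rss / n_trials) + k * np.log(n_trials))
        return np.array(bics)  # lower BIC ~ higher approximate log model evidence

Repeating this over many simulated data sets, SNR levels, and numbers of data points, and passing the resulting approximate model evidences to a random-effects Bayesian model selection routine, yields exceedance probabilities of the kind analyzed in the study.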
Keywords: computational modeling; design optimization; event-related potentials; functional brain imaging; model identifiability; signal-to-noise ratio; validity
Year: 2016 PMID: 28082857 PMCID: PMC5186787 DOI: 10.3389/fnins.2016.00573
Source DB: PubMed Journal: Front Neurosci ISSN: 1662-453X Impact factor: 4.677
Figure 1. Spearman correlations ρ (Eq. 5) between predictors from the data-generating DIF model and predictors from the SQU and the MAR model, respectively. They are shown as a function of the number of data points N, separately for the noise conditions SNR [dB] ∈ {10, 8, 6, 4, 2, 0}.
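Predictor correlations of this kind can be computed with a standard rank-correlation call; the predictor vectors below are placeholders, not the actual DIF/SQU/MAR predictors:

    import numpy as np
    from scipy.stats import spearmanr

    N = 100
    rng = np.random.default_rng(1)
    dif_pred = rng.normal(size=N)                    # placeholder DIF-model predictor
    squ_pred = dif_pred + 0.5 * rng.normal(size=N)   # placeholder correlated SQU-model predictor
    rho, p_value = spearmanr(dif_pred, squ_pred)     # Spearman rho between the two predictors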
Figure 2. Explained variance.
Figure 3. Exceedance probabilities φ for the DIF, SQU, MAR, and NUL models as a function of the number of data points N. The green areas depict the range of valid inference (i.e., where the maximum exceedance probability is assigned to the data-generating DIF model), separately for each SNR condition. The range of valid inference shrinks with decreasing SNR, such that no valid inference remains possible at the lowest level of data quality (0 dB) within the given numbers of data points.
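Exceedance probabilities of this kind can be estimated by Monte Carlo sampling from a Dirichlet posterior over model frequencies, as in standard random-effects Bayesian model selection; the Dirichlet parameters below are illustrative values, not results from the study:

    import numpy as np

    def exceedance_probabilities(alpha, n_samples=100_000, seed=0):
        """Estimate phi_k = P(r_k > r_j for all j != k) under a
        Dirichlet(alpha) posterior over model frequencies r by sampling."""
        rng = np.random.default_rng(seed)
        r = rng.dirichlet(alpha, size=n_samples)   # samples of model frequencies
        winners = np.argmax(r, axis=1)             # most frequent model per sample
        return np.bincount(winners, minlength=len(alpha)) / n_samples

    # illustrative Dirichlet counts for the DIF, SQU, MAR, and NUL models
    phi = exceedance_probabilities(np.array([8.0, 3.0, 2.0, 1.0]))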