
Design and Analysis of Monte Carlo Experiments: Attacking the Conventional Wisdom.

A. Skrondal.

Abstract

The design and analysis of Monte Carlo experiments, with special reference to structural equation modelling, is discussed in this article. These topics merit consideration, since the validity of the conclusions drawn from a Monte Carlo study clearly hinges on these features. It is argued that comprehensive Monte Carlo experiments can be implemented on a PC if the experiments are adequately designed. This is especially important when investigating modern computer-intensive methodologies such as resampling and Markov chain Monte Carlo methods. We are faced with three fundamental challenges in Monte Carlo experimentation. The first is statistical precision, which concerns the reliability of the obtained results. External validity, on the other hand, depends on the number of experimental conditions and is crucial for the prospects of generalising the results beyond the specific experiment. Finally, we face the constraint of limited computer resources. The conventional wisdom in designing and analysing Monte Carlo experiments embodies no explicit specification of a meta-model for analysing the output of the experiment, the use of case studies or full factorial designs as experimental plans, no use of variance reduction techniques, a large number of replications, and "eyeballing" of the results. A critical examination of the conventional wisdom is presented in this article. We suggest that the following alternative procedures should be considered. First, we argue that it is profitable to specify explicit meta-models relating the chosen performance statistics to the experimental conditions. Regarding the experimental plan, we recommend the use of incomplete designs, which will often result in considerable savings. We also consider the use of common random numbers in the simulation phase, since this may enhance the precision in estimating meta-models. Finally, using fewer replications per trial, which enables an increased number of experimental conditions to be investigated, should be considered in order to improve external validity at the cost of the conventionally excessive precision.
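As a concrete illustration of the recommendations summarised in the abstract, the minimal Python sketch below combines an incomplete (half-fraction) factorial plan, common random numbers shared across conditions, and an explicit meta-model fitted to the simulation output. The factors, levels, replication count, and performance statistic (squared error of the sample mean under contamination) are illustrative assumptions for this sketch only, not the designs analysed in the article.

    # Minimal sketch (not from the article): a tiny Monte Carlo experiment using
    # a 2^(3-1) half-fraction plan, common random numbers, and a regression
    # meta-model for the output. All factor choices and the performance
    # statistic are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(2000)
    REPS = 200  # deliberately modest; precision is traded for more conditions

    # Coded two-level factors: A = sample size, B = error scale, C = contamination rate.
    levels = {"A": {-1: 25, 1: 100}, "B": {-1: 1.0, 1: 3.0}, "C": {-1: 0.0, 1: 0.10}}

    # Half-fraction of the 2^3 design: keep the 4 runs satisfying A*B*C = +1 (I = ABC).
    design = [(a, b, a * b) for a in (-1, 1) for b in (-1, 1)]

    # Common random numbers: the same base draws are reused in every condition,
    # then rescaled/truncated, so conditions differ only through the factors.
    base_z = rng.standard_normal((REPS, 100))  # base errors, N(0, 1)
    base_u = rng.random((REPS, 100))           # base uniforms for contamination

    rows, y = [], []
    for a, b, c in design:
        n, sigma, p_out = levels["A"][a], levels["B"][b], levels["C"][c]
        errors = np.empty(REPS)
        for r in range(REPS):
            x = sigma * base_z[r, :n]
            x = np.where(base_u[r, :n] < p_out, x + 10.0, x)  # shift a fraction of points
            errors[r] = x.mean() ** 2  # squared error of the mean about the target value 0
        mse = errors.mean()
        rows.append([1.0, a, b, c])   # intercept plus coded main effects
        y.append(np.log(mse))

    # Meta-model: log(MSE) regressed on the coded factors (main effects only).
    X = np.array(rows)
    coef, *_ = np.linalg.lstsq(X, np.array(y), rcond=None)
    print("meta-model coefficients (intercept, A, B, C):", np.round(coef, 3))

With only four runs this main-effects meta-model is saturated, so in practice one would add runs or replicate the fraction before interpreting the coefficients; the sketch is meant only to show the mechanics of pairing common random numbers and an incomplete design with a regression meta-model.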

Year:  2000        PMID: 26754081     DOI: 10.1207/S15327906MBR3502_1

Source DB:  PubMed          Journal:  Multivariate Behav Res        ISSN: 0027-3171            Impact factor:   5.923


Related articles (10 in total)

1.  Robustness of parameter and standard error estimates against ignoring a contextual effect of a subject-level covariate in cluster-randomized trials.

Authors:  Elly J H Korendijk; Joop J Hox; Mirjam Moerbeek; Cora J M Maas
Journal:  Behav Res Methods       Date:  2011-12

2.  Ability and Prior Distribution Mismatch: An Exploration of Common-Item Linking Methods.

Authors:  Brandon LeBeau
Journal:  Appl Psychol Meas       Date:  2017-05-18

3.  Collinear Latent Variables in Multilevel Confirmatory Factor Analysis: A Comparison of Maximum Likelihood and Bayesian Estimations.

Authors:  Seda Can; Rens van de Schoot; Joop Hox
Journal:  Educ Psychol Meas       Date:  2014-08-29       Impact factor: 2.821

4. [Review]  A systematic review of the quality of reporting of simulation studies about methods for the analysis of complex longitudinal patient-reported outcomes data.

Authors:  Aynslie M Hinds; Tolulope T Sajobi; Véronique Sebille; Richard Sawatzky; Lisa M Lix
Journal:  Qual Life Res       Date:  2018-04-20       Impact factor: 4.147

5.  ClusterBootstrap: An R package for the analysis of hierarchical data using generalized linear models with the cluster bootstrap.

Authors:  Mathijs Deen; Mark de Rooij
Journal:  Behav Res Methods       Date:  2020-04

6. [Review]  A review of fMRI simulation studies.

Authors:  Marijke Welvaert; Yves Rosseel
Journal:  PLoS One       Date:  2014-07-21       Impact factor: 3.240

7.  Using simulation studies to evaluate statistical methods.

Authors:  Tim P Morris; Ian R White; Michael J Crowther
Journal:  Stat Med       Date:  2019-01-16       Impact factor: 2.497

8.  What are the consequences of ignoring cross-loadings in bifactor models? A simulation study assessing parameter recovery and sensitivity of goodness-of-fit indices.

Authors:  Carmen Ximénez; Javier Revuelta; Raúl Castañeda
Journal:  Front Psychol       Date:  2022-08-18

9.  Required sample size to detect mediation in 3-level implementation studies.

Authors:  Nathaniel J Williams; Kristopher J Preacher; Paul D Allison; David S Mandell; Steven C Marcus
Journal:  Implement Sci       Date:  2022-10-01       Impact factor: 7.960

10.  A comparison of machine learning methods for classification using simulation with multiple real data examples from mental health studies.

Authors:  Mizanur Khondoker; Richard Dobson; Caroline Skirrow; Andrew Simmons; Daniel Stahl
Journal:  Stat Methods Med Res       Date:  2013-09-18       Impact factor: 3.021

