Michael Cullan, Scott Lidgard, Beckett Sterner.
Abstract
The Akaike Information Criterion (AIC) and related information criteria are powerful and increasingly popular tools for comparing multiple, non-nested models without the specification of a null model. However, existing procedures for information-theoretic model selection do not provide explicit and uniform control over error rates for the choice between models, a key feature of classical hypothesis testing. We show how to extend notions of Type-I and Type-II error to more than two models without requiring a null. We then present the Error Control for Information Criteria (ECIC) method, a bootstrap approach to controlling Type-I error using Difference of Goodness of Fit (DGOF) distributions. We apply ECIC to empirical and simulated data in time series and regression contexts to illustrate its value for parametric Neyman-Pearson classification. An R package implementing the bootstrap method is publicly available.
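The bootstrap idea summarized in the abstract can be illustrated with a minimal sketch: fit two non-nested candidate models, compute the observed DGOF (here taken as a difference of AIC values), then parametrically bootstrap under the currently favoured model to obtain a DGOF reference distribution and an alpha-level critical value. This is an illustrative Python sketch, not the authors' R implementation; the model pair, the `ecic_threshold` name, and all numerical settings are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def aic(rss, n, k):
    # Gaussian-likelihood AIC up to an additive constant.
    return n * np.log(rss / n) + 2 * k

def fit_rss(X, y):
    # Ordinary least squares fit; return coefficients and residual sum of squares.
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return beta, float(resid @ resid)

def design_A(x):
    # Model A: linear in x.
    return np.column_stack([np.ones_like(x), x])

def design_B(x):
    # Model B: logarithmic predictor (non-nested with A).
    return np.column_stack([np.ones_like(x), np.log(x)])

def dgof(x, y):
    # Difference of goodness of fit: AIC_B - AIC_A (positive favours A).
    n = len(y)
    _, rss_a = fit_rss(design_A(x), y)
    _, rss_b = fit_rss(design_B(x), y)
    return aic(rss_b, n, 2) - aic(rss_a, n, 2)

def ecic_threshold(x, y, alpha=0.05, B=500, rng=rng):
    # Parametric bootstrap under model A: simulate responses from the
    # fitted model, recompute the DGOF each time, and return the
    # alpha-quantile as the critical value for abandoning A in favour of B.
    XA = design_A(x)
    beta, rss = fit_rss(XA, y)
    sigma = np.sqrt(rss / len(y))
    draws = np.array([dgof(x, XA @ beta + rng.normal(0, sigma, len(y)))
                      for _ in range(B)])
    return np.quantile(draws, alpha)

# Toy usage: data actually generated from model A.
x = np.linspace(1, 10, 80)
y = 1.0 + 0.5 * x + rng.normal(0, 0.3, x.size)
observed = dgof(x, y)
crit = ecic_threshold(x, y)
print("select model B:", observed < crit)
```

The decision rule selects model B only when the observed DGOF falls below the bootstrap critical value, so the probability of wrongly abandoning model A when it is true is held near the chosen alpha, which is the Type-I error control the abstract describes.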
Keywords: Error statistics; Neyman–Pearson classification; bootstrap; hypothesis testing; non-nested models
Year: 2019 PMID: 35707440 PMCID: PMC9041880 DOI: 10.1080/02664763.2019.1701636
Source DB: PubMed Journal: J Appl Stat ISSN: 0266-4763 Impact factor: 1.416