Quentin F. Gronau, Alexandra Sarafoglou, Dora Matzke, Alexander Ly, Udo Boehm, Maarten Marsman, David S. Leslie, Jonathan J. Forster, Eric-Jan Wagenmakers, Helen Steingroever.
Abstract
The marginal likelihood plays an important role in many areas of Bayesian statistics such as parameter estimation, model comparison, and model averaging. In most applications, however, the marginal likelihood is not analytically tractable and must be approximated using numerical methods. Here we provide a tutorial on bridge sampling (Bennett, 1976; Meng & Wong, 1996), a reliable and relatively straightforward sampling method that allows researchers to obtain the marginal likelihood for models of varying complexity. First, we introduce bridge sampling and three related sampling methods using the beta-binomial model as a running example. We then apply bridge sampling to estimate the marginal likelihood for the Expectancy Valence (EV) model, a popular model for reinforcement learning. Our results indicate that bridge sampling provides accurate estimates for both a single participant and a hierarchical version of the EV model. We conclude that bridge sampling is an attractive method for mathematical psychologists who typically aim to approximate the marginal likelihood for a limited set of possibly high-dimensional models.
Keywords: Bayes factor; Hierarchical model; Marginal likelihood; Normalizing constant; Predictive accuracy; Reinforcement learning
Year: 2017 PMID: 29200501 PMCID: PMC5699790 DOI: 10.1016/j.jmp.2017.09.005
Source DB: PubMed Journal: J Math Psychol ISSN: 0022-2496 Impact factor: 2.223
Fig. 1. Prior and posterior distribution for the rate parameter from the beta-binomial model. The prior on the rate parameter is represented by the dotted line; the posterior distribution is represented by the solid line and was obtained after having observed 2 correct responses out of 10 trials. Available at https://tinyurl.com/yc8bw98v under CC license https://creativecommons.org/licenses/by/2.0/.
Fig. 2. Illustration of the naive Monte Carlo estimator for the beta-binomial example. The dotted line represents the prior distribution and the solid line represents the posterior distribution that was obtained after having observed 2 correct responses out of 10 trials. The gray dots represent the 12 samples randomly drawn from the prior distribution. Available at https://tinyurl.com/y8uf6t8f under CC license https://creativecommons.org/licenses/by/2.0/.
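The naive Monte Carlo estimator illustrated in Fig. 2 simply averages the likelihood over draws from the prior. A minimal Python sketch for the running example (2 correct responses out of 10 trials, assuming a uniform Beta(1,1) prior on the rate parameter, under which the true marginal likelihood is exactly 1/(n+1) = 1/11):

```python
import math
import random

random.seed(1)

n, k = 10, 2        # 2 correct responses out of 10 trials
N = 100_000         # prior draws (Fig. 2 uses only 12 for illustration)

def likelihood(theta):
    """Binomial likelihood of k successes in n trials at rate theta."""
    return math.comb(n, k) * theta**k * (1 - theta)**(n - k)

# Naive Monte Carlo: average the likelihood over draws from the Beta(1,1) prior
estimate = sum(likelihood(random.random()) for _ in range(N)) / N

print(estimate)     # close to 1/11 ≈ 0.0909
```

Because the prior places most of its mass where the likelihood is small, many draws contribute little, which is why the tutorial moves on to more efficient estimators.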
Fig. 3. Illustration of the importance sampling estimator for the beta-binomial model. The dashed line represents our beta mixture importance density and the solid gray line represents the posterior distribution that was obtained after having observed 2 correct responses out of 10 trials. The gray dots represent the 12 samples randomly drawn from our beta mixture importance density. Available at https://tinyurl.com/yc7ho7hr under CC license https://creativecommons.org/licenses/by/2.0/.
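The importance sampling estimator of Fig. 3 instead averages the unnormalized posterior divided by an importance density, over draws from that density. A Python sketch, assuming a uniform prior; the 50/50 beta mixture below is a hypothetical stand-in for the paper's mixture importance density (Beta(3,9) is the exact posterior under a uniform prior):

```python
import math
import random

random.seed(2)

n, k = 10, 2
N = 100_000

def beta_pdf(x, a, b):
    """Density of the Beta(a, b) distribution."""
    return x**(a - 1) * (1 - x)**(b - 1) * math.gamma(a + b) / (math.gamma(a) * math.gamma(b))

def unnorm_post(theta):
    """Unnormalized posterior: binomial likelihood times uniform prior."""
    return math.comb(n, k) * theta**k * (1 - theta)**(n - k)

# Hypothetical importance density: 50/50 mixture of Beta(1,1) and Beta(3,9)
def g_pdf(x):
    return 0.5 * beta_pdf(x, 1, 1) + 0.5 * beta_pdf(x, 3, 9)

def g_draw():
    return random.random() if random.random() < 0.5 else random.betavariate(3, 9)

# Importance sampling: average the unnormalized posterior over the importance density
draws = [g_draw() for _ in range(N)]
estimate = sum(unnorm_post(t) / g_pdf(t) for t in draws) / N
print(estimate)   # close to 1/11
```

Mixing in the uniform component keeps the importance density's tails fatter than the posterior's, which bounds the importance weights.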
Fig. 4. Illustration of the generalized harmonic mean estimator for the beta-binomial model. The solid line represents the probit-transformed posterior distribution that was obtained after having observed 2 correct responses out of 10 trials, and the dashed line represents the importance density. The gray dots represent the 12 probit-transformed samples randomly drawn from the posterior distribution. Available at https://tinyurl.com/yazgk8kj under CC license https://creativecommons.org/licenses/by/2.0/.
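The generalized harmonic mean estimator of Fig. 4 works the other way around: it uses posterior draws and takes the reciprocal of the posterior average of the importance density over the unnormalized posterior; here the importance density must have thinner tails than the posterior. A Python sketch assuming a uniform prior; the Beta(4,12) importance density is a hypothetical choice satisfying the thin-tails requirement, and the paper's probit transformation is omitted for simplicity:

```python
import math
import random

random.seed(3)

n, k = 10, 2
N = 100_000

def beta_pdf(x, a, b):
    return x**(a - 1) * (1 - x)**(b - 1) * math.gamma(a + b) / (math.gamma(a) * math.gamma(b))

def unnorm_post(theta):
    # binomial likelihood times uniform prior
    return math.comb(n, k) * theta**k * (1 - theta)**(n - k)

# Posterior under the uniform prior is Beta(3, 9); draw from it directly
post_draws = [random.betavariate(k + 1, n - k + 1) for _ in range(N)]

# Hypothetical importance density with thinner tails than the posterior
def g(x):
    return beta_pdf(x, 4, 12)

# Generalized harmonic mean: reciprocal of the posterior average of g / (likelihood * prior)
estimate = 1.0 / (sum(g(t) / unnorm_post(t) for t in post_draws) / N)
print(estimate)   # close to 1/11
```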
Fig. 5. Schematic illustration of the steps involved in obtaining the bridge sampling estimate of the marginal likelihood. Available at https://tinyurl.com/y7b2kze7 under CC license https://creativecommons.org/licenses/by/2.0/.
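The scheme in Fig. 5 can be sketched as the iterative bridge sampling estimator of Meng and Wong (1996) with the optimal bridge function, which combines posterior draws and proposal draws. A Python sketch for the beta-binomial example under a uniform prior; the Beta(2,8) proposal roughly matched to the posterior is a hypothetical simplification of the paper's proposal construction:

```python
import math
import random

random.seed(4)

n, k = 10, 2
N1 = N2 = 20_000                 # posterior and proposal sample sizes

def unnorm_post(theta):
    # q(theta): binomial likelihood times uniform prior
    return math.comb(n, k) * theta**k * (1 - theta)**(n - k)

def beta_pdf(x, a, b):
    return x**(a - 1) * (1 - x)**(b - 1) * math.gamma(a + b) / (math.gamma(a) * math.gamma(b))

# Posterior draws (exactly Beta(3, 9) under the uniform prior) and proposal draws
post = [random.betavariate(k + 1, n - k + 1) for _ in range(N1)]
prop = [random.betavariate(2, 8) for _ in range(N2)]

def g(x):
    # proposal density, roughly matched to the posterior (hypothetical choice)
    return beta_pdf(x, 2, 8)

# Iterative bridge sampling update with the optimal bridge function
s1, s2 = N1 / (N1 + N2), N2 / (N1 + N2)
p_hat = 0.1                      # crude starting value
for _ in range(15):
    num = sum(unnorm_post(t) / (s1 * unnorm_post(t) + s2 * p_hat * g(t)) for t in prop) / N2
    den = sum(g(t) / (s1 * unnorm_post(t) + s2 * p_hat * g(t)) for t in post) / N1
    p_hat = num / den
print(p_hat)   # close to 1/11
```

The fixed-point iteration typically converges in a handful of steps; the optimal bridge function keeps the estimator stable even when proposal and posterior overlap only partially.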
Summary of the Payoff Scheme of the Traditional IGT as Developed by Bechara et al. (1994).
| | Deck A | Deck B | Deck C | Deck D |
|---|---|---|---|---|
| | Bad deck with frequent losses | Bad deck with infrequent losses | Good deck with frequent losses | Good deck with infrequent losses |
| Reward/trial | 100 | 100 | 50 | 50 |
| Number of losses/10 cards | 5 | 1 | 5 | 1 |
| Loss/10 cards | -1250 | -1250 | -250 | -250 |
| Net outcome/10 cards | -250 | -250 | 250 | 250 |
Fig. 6. Comparison of the log marginal likelihoods obtained with bridge sampling and the importance sampling estimates reported by Steingroever et al. (2016), one estimator per axis. The main diagonal indicates perfect correspondence between the two methods. Available at https://tinyurl.com/yac3o8qs under CC license https://creativecommons.org/licenses/by/2.0/.
Bayes factors comparing the full EV model to the restricted EV models, log marginal likelihoods, and coefficient of variation (with respect to the marginal likelihood) expressed as a percentage.
| Model | Bayes Factor | Log Marginal Likelihood | CV (%) |
|---|---|---|---|
| Full model | – | 10.13 | |
| Restricted at | 1.202 | 16.44 | |
| Restricted at | 1.052 | 9.71 | |
| Restricted at | 1.068 | 12.03 |
Fig. 7. Prior and posterior distribution of the group-level mean in the Busemeyer and Stout (2002) data set. The figure shows the posterior distribution (solid line) and the prior distribution (dotted line). The gray dot indicates the intersection of the prior and the posterior distributions, for which the Savage–Dickey Bayes factor equals . Available at https://tinyurl.com/y7cyxclq under CC license https://creativecommons.org/licenses/by/2.0/.
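The Savage–Dickey density ratio referenced in the caption equals the posterior ordinate divided by the prior ordinate at the restricted parameter value. A Python sketch for the beta-binomial running example (not the EV model), with a hypothetical test value of 0.5 under a uniform prior:

```python
import math

n, k = 10, 2
theta0 = 0.5                     # hypothetical restricted value

def beta_pdf(x, a, b):
    return x**(a - 1) * (1 - x)**(b - 1) * math.gamma(a + b) / (math.gamma(a) * math.gamma(b))

# Savage-Dickey: BF01 is the posterior ordinate over the prior ordinate at theta0
prior_ord = beta_pdf(theta0, 1, 1)              # uniform Beta(1,1) prior
post_ord = beta_pdf(theta0, k + 1, n - k + 1)   # Beta(3,9) posterior
bf01 = post_ord / prior_ord
print(bf01)   # ≈ 0.48, modest evidence against theta = 0.5
```

Because both densities are available in closed form here, no sampling is needed; for the EV model the ordinates must be estimated from MCMC output.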
Summary of the bridge sampling estimator for the marginal likelihood and its special cases: the naive Monte Carlo, importance sampling, and generalized harmonic mean estimators.
| Method | Estimator | Samples | Bridge Function |
|---|---|---|---|
| Bridge sampling | $\hat{p}(y)=\dfrac{\frac{1}{N_2}\sum_{j=1}^{N_2} p(y\mid\tilde{\theta}_j)\,p(\tilde{\theta}_j)\,h(\tilde{\theta}_j)}{\frac{1}{N_1}\sum_{i=1}^{N_1} h(\theta_i^{*})\,g(\theta_i^{*})}$ | $\theta_i^{*}\sim p(\theta\mid y)$, $\tilde{\theta}_j\sim g(\theta)$ | $h(\theta)$ |
| Naive Monte Carlo | $\hat{p}(y)=\frac{1}{N}\sum_{i=1}^{N} p(y\mid\tilde{\theta}_i)$ | $\tilde{\theta}_i\sim p(\theta)$ | $h(\theta)=\frac{C}{p(\theta)}$, with $g(\theta)=p(\theta)$ |
| Importance sampling | $\hat{p}(y)=\frac{1}{N}\sum_{i=1}^{N}\frac{p(y\mid\tilde{\theta}_i)\,p(\tilde{\theta}_i)}{g_{\text{IS}}(\tilde{\theta}_i)}$ | $\tilde{\theta}_i\sim g_{\text{IS}}(\theta)$ | $h(\theta)=\frac{C}{g_{\text{IS}}(\theta)}$ |
| Generalized harmonic mean | $\hat{p}(y)=\left[\frac{1}{N}\sum_{i=1}^{N}\frac{g_{\text{IS}}(\theta_i^{*})}{p(y\mid\theta_i^{*})\,p(\theta_i^{*})}\right]^{-1}$ | $\theta_i^{*}\sim p(\theta\mid y)$ | $h(\theta)=\frac{C}{p(y\mid\theta)\,p(\theta)}$ |
Note. $p(\theta)$ is the prior distribution, $g_{\text{IS}}(\theta)$ is the importance density, $p(\theta\mid y)$ is the posterior distribution, $g(\theta)$ is the proposal distribution, $h(\theta)$ is the bridge function, and $C$ is a constant. The last column shows the bridge function needed to obtain the special cases.