Bayesian workflow: Prior determination, predictive checks and sensitivity analyses

This post presents a code-through of a Bayesian workflow in R, which can be reproduced using the materials at https://osf.io/gt5uf. The content is closely based on Bernabeu (2022), which was in turn based on many other references. In addition to those, you may wish to consider Nicenboim et al. (2023), a book in preparation that is already available online (https://vasishth.github.io/bayescogsci/book).

    In Bernabeu (2022), a Bayesian analysis was performed to complement the estimates that had been obtained in the frequentist analysis. Whereas the goal of the frequentist analysis had been hypothesis testing, for which \(p\) values were used, the goal of the Bayesian analysis was parameter estimation. Accordingly, we estimated the posterior distribution of every effect, without calculating Bayes factors (for other examples of the same estimation approach, see Milek et al., 2018; Pregla et al., 2021; Rodríguez-Ferreiro et al., 2020; for comparisons between estimation and hypothesis testing, see Cumming, 2014; Kruschke & Liddell, 2018; Rouder et al., 2018; Schmalz et al., 2021; Tendeiro & Kiers, 2019, in press; van Ravenzwaaij & Wagenmakers, 2021). In the estimation approach, the estimates are interpreted by considering the position of their credible intervals in relation to the expected effect size. That is, the closer an interval is to an effect size of 0, the smaller the effect of that predictor. For instance, an interval that is symmetrically centred on 0 indicates a very small effect, whereas—in comparison—an interval that does not include 0 at all indicates a far larger effect.

    This analysis served two purposes: first, to ascertain the interpretation of the smaller effects, which had been identified as unreliable in the power analyses; and second, to complement the estimates obtained in the frequentist analysis. The latter purpose was pertinent because the frequentist models presented convergence warnings, although it must be noted that a previous study found that frequentist and Bayesian estimates were similar despite convergence warnings in the frequentist analysis (Rodríguez-Ferreiro et al., 2020). The complementary analysis was also pertinent because the frequentist models presented residual errors that deviated from normality, even though mixed-effects models are fairly robust to such deviations (Knief & Forstmeier, 2021; Schielzeth et al., 2020). Owing to these precedents, we expected to find broadly similar estimates in the frequentist analyses and in the Bayesian ones. Across studies, each frequentist model had a Bayesian counterpart, with the exception of the secondary analysis performed in Study 2.1 (semantic priming) that included vision-based similarity as a predictor. The R package ‘brms’, Version 2.17.0, was used for the Bayesian analysis (Bürkner, 2018; Bürkner et al., 2022).
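    Once a model has been fitted with brms, the posterior estimates and their 95% credible intervals can be inspected directly. Below is a minimal sketch; the model object `final_fit` is hypothetical.

```r
library(brms)

# Posterior means, estimation errors and 95% credible intervals for
# the population-level (fixed) effects of a fitted model.
# 'final_fit' is a hypothetical brmsfit object.
fixef(final_fit)

# Full summary, including group-level (random) effects:
summary(final_fit)
```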

    Priors

    Priors are one of the hardest nuts to crack in Bayesian statistics. First, it is useful to inspect which priors can be set in the model. Second, a reasonable set of priors should be visualised, based on the available literature or any other relevant sources. Third, just before fitting the model, the adequacy of a range of priors should be assessed using prior predictive checks. Fourth, after fitting the model, posterior predictive checks should be performed to assess the consistency between the observed data and new data predicted from the posterior distributions. Fifth, the influence of the priors on the results should be assessed through a prior sensitivity analysis (Lee & Wagenmakers, 2014; van de Schoot et al., 2021; also see Bernabeu, 2022; Pregla et al., 2021; Rodríguez-Ferreiro et al., 2020; Stone et al., 2020; Stone et al., 2021).

    1. Checking what priors can be set

    The brms::get_prior function can be used to check which effects in the model can be assigned a prior. The output (see the example below) includes the current (possibly default) prior on each effect.
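    As an illustration, here is a minimal sketch with hypothetical variables: response times (RT) modelled as a function of prime-target relatedness, with by-participant and by-item random intercepts.

```r
library(brms)

# List the parameters that can be assigned a prior in this model,
# along with the priors currently set on them (defaults included).
get_prior(
  RT ~ relatedness + (1 | participant) + (1 | item),
  data   = priming_data,   # hypothetical data set
  family = gaussian()
)
# The output lists every parameter class (Intercept, b, sd, sigma, ...)
# and the prior currently attached to each.
```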

    3. Prior predictive checks

    Models with a Gaussian distribution

    The figures below show the prior predictive checks for the Gaussian models. These plots show the maximum, mean and minimum values of the observed data (\(y\)) and of the predicted distribution (\(y_{rep}\), which stands for replications of the outcome). The plots are interpreted by comparing the observed data to the predicted distribution: in each panel, the ideal scenario would show the observed statistic (\(y\)) overlapping with the corresponding statistic of the predicted distribution (\(y_{rep}\)), namely, the maximum in the upper plot, the mean in the middle plot and the minimum in the lower plot. While the overlap need not be absolute, the closer the observed and the predicted values are on the X axis, the better. As such, the three predictive checks below, corresponding to models that used the default Gaussian distribution, show that the priors fitted the data acceptably but not very well.
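    Checks of this kind can be produced along the following lines. This is a minimal sketch: the variables and the priors are hypothetical, and setting sample_prior = "only" makes brms ignore the likelihood and sample from the priors alone.

```r
library(brms)

# Prior-only model: the likelihood is ignored, so the predictions
# reflect the priors alone.
prior_fit <- brm(
  RT ~ relatedness + (1 | participant) + (1 | item),
  data   = priming_data,
  family = gaussian(),
  prior  = c(prior(normal(0, 1), class = Intercept),
             prior(normal(0, 0.5), class = b)),
  sample_prior = "only"
)

# Compare the maximum, mean and minimum of the observed data (y)
# with those of the prior predictive distribution (y_rep):
pp_check(prior_fit, type = "stat", stat = "max")
pp_check(prior_fit, type = "stat", stat = "mean")
pp_check(prior_fit, type = "stat", stat = "min")
```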

    Figure (plot available on GitHub): Prior predictive checks for the Gaussian, informative prior model from the semantic priming study. \(y\) = observed data; \(y_{rep}\) = predicted data.

    Figure (plot available on GitHub): Prior predictive checks for the Gaussian, weakly-informative prior model from the semantic priming study. \(y\) = observed data; \(y_{rep}\) = predicted data.

    Figure (plot available on GitHub): Prior predictive checks for the Gaussian, diffuse prior model from the semantic priming study. \(y\) = observed data; \(y_{rep}\) = predicted data.

    Models with an exponentially modified Gaussian (i.e., ex-Gaussian) distribution

    In contrast to the above results, the figures below show that, when an ex-Gaussian distribution was used, the priors fitted the data far better. This result converges with a similar comparison performed by Rodríguez-Ferreiro et al. (2020; see the supplementary materials of that study).

    Figure (plot available on GitHub): Prior predictive checks for the ex-Gaussian, informative prior model from the semantic priming study. \(y\) = observed data; \(y_{rep}\) = predicted data.

    Figure (plot available on GitHub): Prior predictive checks for the ex-Gaussian, weakly-informative prior model from the semantic priming study. \(y\) = observed data; \(y_{rep}\) = predicted data.

    Figure (plot available on GitHub): Prior predictive checks for the ex-Gaussian, diffuse prior model from the semantic priming study. \(y\) = observed data; \(y_{rep}\) = predicted data.
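    For reference, the ex-Gaussian prior predictive models can be specified by simply swapping the family in the earlier sketch, using the exgaussian() family in brms (variables and priors remain hypothetical).

```r
# Prior-only model with an ex-Gaussian response distribution:
prior_fit_exg <- brm(
  RT ~ relatedness + (1 | participant) + (1 | item),
  data   = priming_data,
  family = exgaussian(),
  prior  = c(prior(normal(0, 1), class = Intercept),
             prior(normal(0, 0.5), class = b)),
  sample_prior = "only"
)

# Same statistic-based checks as before, e.g.:
pp_check(prior_fit_exg, type = "stat", stat = "mean")
```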


    4. Posterior predictive checks

    Based on the results of the prior predictive checks, the ex-Gaussian distribution was used in the final models. Next, posterior predictive checks were performed to assess the consistency between the observed data and new data predicted from the posterior distributions (van de Schoot et al., 2021). The figure below presents the posterior predictive checks for these models. The interpretation of these plots is simple: the distributions of the observed data (\(y\)) and the predicted data (\(y_{rep}\)) should be as similar as possible. As such, the plots below suggest that the results are trustworthy.
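    Such checks can be produced along these lines. Again, this is a minimal sketch with hypothetical names and priors: the model is now fitted on the actual data (i.e., without sample_prior = "only"), and the observed distribution is overlaid on draws from the posterior predictive distribution.

```r
# Final ex-Gaussian model, fitted on the data:
final_fit <- brm(
  RT ~ relatedness + (1 | participant) + (1 | item),
  data   = priming_data,
  family = exgaussian(),
  prior  = c(prior(normal(0, 1), class = Intercept),
             prior(normal(0, 0.5), class = b))
)

# Overlay the observed distribution (y) on 100 draws from the
# posterior predictive distribution (y_rep):
pp_check(final_fit, type = "dens_overlay", ndraws = 100)
```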