This is a screencast of my UseR! 2015 presentation: Tiny Data, Approximate Bayesian Computation and the Socks of Karl Broman. Based on the original blog post, it is a quick’n’dirty introduction to approximate Bayesian computation (and, in a sense, an introduction to Bayesian statistics in general). Here it is, if you have 15 minutes to spare:
The video is short and makes a lot of simplifications/omissions, some of which are:
- There is not just one but many different algorithms for doing approximate Bayesian computation; the algorithm outlined in the video is called ABC rejection sampling. What makes these approximate Bayesian computational methods (and not just Bayesian computational methods) is that they require what I have called a generative model and an acceptance criterion. What I call standard Bayesian computation in the video (but which is normally just called Bayesian computation) instead requires a function that calculates the likelihood given the data and some fixed parameter values.
- I mention in the video that approximate Bayesian computation is the slowest way you can fit a statistical model, and for many common statistical models this is the case. However, for some models it can be very expensive to evaluate the likelihood, and in that case approximate Bayesian computation can actually be faster. As usual, it all depends on the context…
- I mention “drawing random parameter values from the prior”, or something similar, in the video. Invoking “randomness” always makes me a bit uneasy, and I just want to mention that the purpose of “drawing random parameters” is just to get a vector/list of parameter values that is a good enough representation of the prior probability distribution. It just happens to be the case that random number generators (like rbeta) are a convenient way of creating such representative samples.
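To make the recipe in the first point concrete, here is a minimal sketch of ABC rejection sampling on a toy coin-flip problem (not the socks model from the talk). It is in Python rather than R, with numpy’s beta sampler standing in for rbeta, and all the numbers are made up for illustration: draw a parameter from the prior, simulate data from the generative model, and keep the draw only if the simulation matches the observed data.

```python
# Toy ABC rejection sampling: infer a coin's heads-probability p.
# Illustrative numbers only; the socks example in the talk follows
# the same generate-and-accept recipe with a different model.
import numpy as np

rng = np.random.default_rng(42)

observed_heads = 6  # observed data: 6 heads out of 10 flips (made up)
n_flips = 10

accepted = []
for _ in range(100_000):
    p = rng.beta(2, 2)                    # 1. draw a parameter from the prior
    sim_heads = rng.binomial(n_flips, p)  # 2. simulate data (generative model)
    if sim_heads == observed_heads:       # 3. acceptance criterion
        accepted.append(p)                # 4. accepted draws represent the posterior

posterior = np.array(accepted)
print(len(posterior), posterior.mean())
```

Because the acceptance criterion here is an exact match, the accepted draws are samples from the exact posterior (a Beta(8, 6), whose mean is 8/14 ≈ 0.57); with richer data an exact match becomes vanishingly rare, which is why practical ABC accepts “close enough” simulations instead.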
For a Slower’n’Cleaner introduction to approximate Bayesian computation I would actually recommend the Wikipedia page, which is pretty good!