[This article was first published on **Publishable Stuff**, and kindly contributed to R-bloggers].

This is a screencast of my UseR! 2015 presentation: Tiny Data, Approximate Bayesian Computation and the Socks of Karl Broman. Based on the original blog post, it is a *quick’n’dirty* introduction to approximate Bayesian computation (and is also, in a sense, an introduction to Bayesian statistics in general). Here it is, if you have 15 minutes to spare:

## Regarding Quick’n’Dirty

The video is short and makes a lot of simplifications/omissions, some of which are:

- There is not just one, but many many different algorithms for doing approximate Bayesian computation; the algorithm outlined in the video is called ABC rejection sampling. What makes these **approximate** Bayesian computational methods (and not just *Bayesian computational methods*) is that they require, what I have called, a generative model and an acceptance criterion. What I call *Bayesian computation* in the video (but which is normally just called **standard** Bayesian computation) instead requires a function that calculates *the likelihood* given the data and some fixed parameter values.
- I mention in the video that approximate Bayesian computation is the slowest way you can fit a statistical model, and for many common statistical models this is the case. However, for some models it might be very expensive to evaluate the likelihood, and in that case approximate Bayesian computation can actually be faster. As usual, it all depends on the context…
- I mention “drawing random parameter values from the prior”, or something similar, in the video. Invoking “randomness” always makes me a bit uneasy, and I just want to mention that the purpose of “drawing random parameters” is just to get a vector/list of parameter values that is a good enough representation of the prior probability distribution. It just happens to be the case that random number generators (like `rnbinom` and `rbeta`) are a convenient way of creating such representative distributions.
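To make the two ingredients above concrete, here is a minimal sketch of ABC rejection sampling for the socks problem in R. This is my own reconstruction, not the exact code from the talk: the priors (`rnbinom` with `mu = 30`, `size = 4`; `rbeta(15, 2)`) and all the names are illustrative assumptions, and the data is just the observation that 11 picked socks were all unique.

```r
set.seed(1234)
n_picked <- 11  # Karl Broman picked 11 socks, all of them unique

sim_one <- function() {
  # Draw parameter values from the (assumed) priors.
  n_socks    <- rnbinom(1, mu = 30, size = 4)        # total number of socks
  prop_pairs <- rbeta(1, shape1 = 15, shape2 = 2)    # proportion in pairs
  n_pairs <- round(floor(n_socks / 2) * prop_pairs)
  n_odd   <- n_socks - n_pairs * 2
  # Generative model: label each sock (pairs share a label), pick 11 at random.
  socks  <- rep(seq_len(n_pairs + n_odd), rep(c(2, 1), c(n_pairs, n_odd)))
  picked <- sample(socks, size = min(n_picked, n_socks))
  counts <- table(picked)
  c(unique = sum(counts == 1), pairs = sum(counts == 2), n_socks = n_socks)
}

sims <- t(replicate(20000, sim_one()))
# Acceptance criterion: keep only draws that reproduce the data exactly
# (11 unique socks, no pairs). The accepted n_socks values approximate
# the posterior distribution over the total number of socks.
post <- sims[sims[, "unique"] == 11 & sims[, "pairs"] == 0, , drop = FALSE]
median(post[, "n_socks"])
```

Note how slow this is: most simulated draws are thrown away, which is the price of never having to write down a likelihood function.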

For a Slower’n’Cleaner introduction to approximate Bayesian computation I would actually recommend the Wikipedia page, which is pretty good!

