The Basics of Bayesian Statistics


Bayesian Inference is a way of combining information from data with things we think we already know. For example, if we wanted to get an estimate of the mean height of people, we could use our prior knowledge that people are generally between 5 and 6 feet tall to inform the results from the data we collect. If our prior is informative and we don't have much data, this will help us to get a better estimate. If we have a lot of data, even if the prior is wrong (say, our population is NBA players), the prior won't change the estimate much. You might say that including such “subjective” information in a statistical model isn't right, but there's subjectivity in the selection of any statistical model. Bayesian Inference makes that subjectivity explicit.
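To make the height example concrete, here is a minimal sketch in base R of the conjugate normal-normal update, assuming the population standard deviation is known; the prior parameters and observations below are made-up numbers for illustration.

    # Prior belief: mean height around 66 inches, but we're fairly unsure
    prior_mean <- 66
    prior_sd   <- 6
    sigma      <- 4   # assumed known standard deviation of individual heights

    heights <- c(70, 72, 69, 71, 73)   # made-up sample of observed heights
    n <- length(heights)

    # Posterior precision is the sum of the prior and data precisions
    post_var  <- 1 / (1 / prior_sd^2 + n / sigma^2)
    post_mean <- post_var * (prior_mean / prior_sd^2 + sum(heights) / sigma^2)

    c(posterior_mean = post_mean, posterior_sd = sqrt(post_var))
    # With only a few observations the prior pulls the estimate toward 66;
    # as n grows, the data mean dominates and the prior matters less.

Notice that the posterior mean is a precision-weighted average of the prior mean and the sample mean, which is exactly the "lots of data overwhelms the prior" behavior described above.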

Bayesian Inference can seem complicated, but as Brandon Rohrer explains, it's based on straightforward principles of conditional probability. Watch his video below for an elegant explanation of the basics.
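If you'd like to follow the arithmetic of conditional probability yourself, here is Bayes' rule applied to a classic diagnostic-test toy problem; the sensitivity, specificity, and prevalence figures are assumptions chosen for illustration, not taken from the video.

    # Bayes' rule: P(H | D) = P(D | H) * P(H) / P(D)
    # Toy numbers: a test that is 95% sensitive and 90% specific,
    # for a condition with 1% prevalence (all assumed for illustration)
    p_h  <- 0.01                            # prior P(H)
    p_dh <- 0.95                            # likelihood P(D | H)
    p_d  <- p_dh * p_h + 0.10 * (1 - p_h)   # total probability P(D)
    p_hd <- p_dh * p_h / p_d                # posterior P(H | D)
    p_hd
    # About 0.088: even after a positive test, H remains fairly unlikely,
    # because the prior probability was so low.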

If you'd like to try out some Bayesian statistics yourself, R has many packages for Bayesian inference.
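As one example among many (a sketch, not a canonical recipe), the rstanarm package wraps the Stan sampler behind R's familiar formula interface; the snippet below fits an intercept-only model to the toy height data with the informative prior from above, assuming rstanarm is installed.

    # A minimal sketch with rstanarm (assumed installed): estimate a mean
    # height with an informative prior, via MCMC rather than the closed form
    library(rstanarm)

    df <- data.frame(height = c(70, 72, 69, 71, 73))   # same toy data as above
    fit <- stan_glm(height ~ 1, data = df,
                    prior_intercept = normal(66, 6),   # prior: mean near 66 in.
                    refresh = 0)                       # silence sampler output
    summary(fit)

The posterior summary for the intercept should land in the same neighborhood as the conjugate calculation above, though the numbers won't match exactly, since rstanarm also estimates the residual standard deviation rather than treating it as known.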

Data Science and Robots Blog: How Bayesian inference works
