Blog Archives

Announcing pingr: The R Package that Sounds as it is Called

January 26, 2014

pingr is an R package that contains one function, ping(), with one purpose: To go ping on whatever platform you are on (thanks to the audio package). It is intended to be useful, for example, if you are running a long analysis in the background and want to know when it is ready. It’s also useful if...
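A minimal sketch of the use case described above, assuming only that the package exports ping() as stated (the stand-in analysis is made up):

    # Make a sound when a long-running job finishes.
    library(pingr)

    result <- replicate(1000, mean(rnorm(1e5)))  # stand-in for a long analysis
    ping()                                       # play a short "ping" sound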

Read more »

Bayesian First Aid: Binomial Test

January 20, 2014

The binomial test is arguably the conceptually simplest of all statistical tests: it has only one parameter and an easy-to-understand distribution for the data. It is puzzling that, when null hypothesis significance testing is introduced, the binomial test is not the first example of a test but is sometimes introduced long after the t-test and the ANOVA...
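For reference, this is the classical test the post takes as its starting point, using base R's binom.test() with made-up numbers (the post's Bayesian alternative is not shown, since its interface isn't part of the excerpt):

    # One parameter (the success probability) and binomially distributed data.
    binom.test(x = 9, n = 12, p = 0.5)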

Read more »

Bayesian First Aid

January 10, 2014

So I have a secret project. Come closer. I’m developing an R package that implements Bayesian alternatives to the most commonly used statistical tests. Yes, you heard me: soon your t.testing days might be over! The package aims to be as easy as possible to pick up and use, especially if you are already used to the classical .test...
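To make the idea concrete, here is the kind of classical call the package aims to offer an alternative to, using R's built-in sleep data; the commented-out Bayesian counterpart is hypothetical, since the excerpt doesn't name the package's functions:

    # The classical test:
    t.test(extra ~ group, data = sleep)

    # A hypothetical drop-in Bayesian counterpart, in the spirit the excerpt
    # describes (function name is an assumption, not confirmed here):
    # bayes.t.test(extra ~ group, data = sleep)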

Read more »

An Animation of the Construction of a Confidence Interval

December 30, 2013

I’m playing blog ping-pong with John Kruschke’s Doing Bayesian Data Analysis blog, as he was partly inspired by my silly post on Bayesian mascots when writing a nice piece on Icons for the essence of Bayesian and frequentist data analysis. That piece, in turn, inspired me, resulting in the following wobbly backhand. The confidence interval...

Read more »

The Mascots of Bayesian Statistics

December 25, 2013

Why would Bayesian statistics need a mascot/symbol/logo? Well, why not? I don’t know of any other branch of statistics that has a mascot, but many programming languages do. R has an “R”, Python has a python snake, and Go has an adorable little gopher. While Bayesian statistics isn’t a programming language, it could be characterized as...

Read more »

An Animation of the t Distribution as a Mixture of Normals

December 7, 2013

You’ve probably heard about the t distribution. One good use for this distribution is as an alternative to the normal distribution that is more robust against outliers. But where does the t distribution come from? One intuitive characterization of the t is as a mixture of normal distributions. More specifically, as a mixture of an infinite number of...
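A small illustration of this characterization (my own sketch, not code from the post): draw a variance for each observation from a scaled inverse chi-square distribution, then draw a normal with that variance; the resulting draws follow a t distribution.

    set.seed(42)
    nu <- 3      # degrees of freedom
    n  <- 1e5

    # Mixture construction: normal draws whose variances are themselves random.
    sigma2 <- nu / rchisq(n, df = nu)           # scaled inverse chi-square
    x      <- rnorm(n, mean = 0, sd = sqrt(sigma2))

    # Compare with direct draws from the t distribution.
    qqplot(x, rt(n, df = nu)); abline(0, 1)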

Read more »

Shaping up Laplace Approximation using Importance Sampling

December 2, 2013

In the last post I showed how to use Laplace approximation to quickly (but dirtily) approximate the posterior distribution of a Bayesian model coded in R. This is just a short follow up where I show how to use importance sampling as an easy method to shape up the Laplace approximation in order to approximate the true...
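A rough sketch of the idea on a toy problem where the exact posterior is known (a binomial likelihood with a uniform prior); this is my paraphrase of the general recipe, not the post's code: use the Laplace (normal) approximation as a proposal, reweight draws by the ratio of the true posterior to the proposal, and resample.

    # Unnormalized log posterior for a single proportion.
    log_post <- function(p) dbinom(7, 10, p, log = TRUE) + dunif(p, log = TRUE)

    # Laplace approximation: the mode and the curvature at the mode.
    fit  <- optim(0.5, log_post, method = "L-BFGS-B",
                  lower = 1e-4, upper = 1 - 1e-4,
                  control = list(fnscale = -1), hessian = TRUE)
    mu   <- fit$par
    sdev <- sqrt(-1 / fit$hessian[1, 1])

    # Importance sampling: draw from the normal proposal, reweight, resample.
    draws <- rnorm(1e5, mu, sdev)
    lw    <- log_post(draws) - dnorm(draws, mu, sdev, log = TRUE)
    lw[!is.finite(lw)] <- -Inf                # draws outside (0, 1) get weight zero
    w     <- exp(lw - max(lw))
    post  <- sample(draws, 1e4, replace = TRUE, prob = w)

    mean(post)   # close to the exact posterior mean, (7 + 1) / (10 + 2)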

Read more »

Easy Laplace Approximation of Bayesian Models in R

November 21, 2013

Thank you for tuning in! In this post, a continuation of Three Ways to Run Bayesian Models in R, I will handwave an explanation of the Laplace Approximation, a fast and (hopefully not too) dirty method to approximate the posterior of a Bayesian model, and show that it is super easy to do Laplace approximation in R, basically four...
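The recipe, as I read it from the excerpt (a paraphrase, not the post's exact code): maximize the log posterior with optim(), take the curvature at the mode as defining a normal approximation, and sample from that normal. Here on a made-up normal model:

    library(MASS)   # for mvrnorm()

    y <- c(4.2, 5.1, 6.3, 4.8, 5.9, 5.4)   # some made-up data

    # Unnormalized log posterior with flat (improper) priors on the mean and
    # the log standard deviation.
    log_post <- function(par, y) {
      sum(dnorm(y, mean = par[1], sd = exp(par[2]), log = TRUE))
    }

    fit       <- optim(c(0, 0), log_post, y = y,
                       control = list(fnscale = -1), hessian = TRUE)
    post_mode <- fit$par
    covmat    <- solve(-fit$hessian)             # covariance of the normal approximation
    samples   <- mvrnorm(10000, post_mode, covmat)  # approximate posterior draws

    colMeans(samples)   # approximate posterior means of mu and log(sigma)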

Read more »

How Do You Write Your Model Definitions?

October 20, 2013

I’m often irritated that when a statistical method, such as linear regression, is explained, it is characterized by how it can be calculated rather than by what model is assumed and fitted. A typical example of this is that linear regression is often described as a method that uses ordinary least squares to calculate the best...
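As a small illustration of the distinction (my example, not the post's): the same simple regression written first as a calculation and then as the model it assumes.

    # 1. As a calculation: ordinary least squares via lm().
    fit <- lm(mpg ~ wt, data = mtcars)

    # 2. As an assumed model:  mpg[i] ~ Normal(alpha + beta * wt[i], sigma)
    #    Made concrete by simulating new data from the fitted model.
    alpha   <- coef(fit)[1]
    beta    <- coef(fit)[2]
    sigma   <- sigma(fit)
    mpg_sim <- rnorm(nrow(mtcars), alpha + beta * mtcars$wt, sigma)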

Read more »

A Bayesian Twist on Tukey’s Flogs

September 30, 2013

In the last post I described flogs, a useful transform for proportions data introduced by John Tukey in his Exploratory Data Analysis. Flogging a proportion (such as two out of three computers were Macs) consisted of two steps: first we “started” the proportion by adding 1/6 to each of the counts and then we “folded” it...
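A sketch of the transform as the excerpt describes it: “start” both counts by adding 1/6, then take the folded log of the resulting proportion. The one-half folding constant follows the usual folded-log convention and is my assumption, not something stated in the excerpt.

    flog <- function(successes, n) {
      p_started <- (successes + 1/6) / (n + 1/3)        # "start" the proportion
      0.5 * log(p_started) - 0.5 * log(1 - p_started)   # "fold" it
    }

    flog(2, 3)   # e.g. two out of three computers were Macs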

Read more »