
I am starting to familiarize myself with Bayesian statistics. In this post I’ll show some insights I had concerning *Bayes factors* (BF).

## What are Bayes factors?

Bayes factors provide a numerical value that quantifies how well a hypothesis predicts the empirical data relative to a competing hypothesis. For example, a BF of 4 means: “The empirical data are 4 times more probable if H₁ were true than if H₀ were true.” Hence, the evidence points towards H₁. A BF of 1 means the data are equally likely to occur under both hypotheses; in that case it would be impossible to decide between them.

More formally, the BF can be written as:

\(BF_{10} = \frac{p(D|H_1)}{p(D|H_0)}\)

where D is the data.

(for more formal introductions to BFs, see Wikipedia, Bayesian-Inference, or the classic paper by Kass and Raftery, 1995).

Hence, the BF is a ratio of probabilities, and it belongs to the larger class of likelihood-ratio tests.

Although many authors agree about the many theoretical advantages of BFs, until recently it was complicated and unclear how to compute a BF even for the simplest standard designs (Rouder, Morey, Speckman, & Province, 2012). Fortunately, in recent years *default Bayes factors* for several standard designs have been developed (Rouder et al., 2012; Rouder, Speckman, Sun, Morey, & Iverson, 2009; Morey & Rouder, 2011). For example, for a two-sample *t* test, a BF can be derived simply by plugging the *t* value and the sample sizes into a formula. The BF is easy to compute with the R package BayesFactor (Morey & Rouder, 2013), or with online calculators [1][2].
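To make the “plug in *t* and *n*” idea concrete, here is a minimal base-R sketch of the one-sample JZS Bayes factor of Rouder et al. (2009, Eq. 1), assuming the default Cauchy prior scale r = 1 (the function name `jzs_bf01` is my own):

```r
# Sketch of the one-sample JZS Bayes factor (Rouder et al., 2009, Eq. 1),
# assuming the default prior scale r = 1. The result is BF01 (H0 over H1).
jzs_bf01 <- function(t, N) {
  nu <- N - 1
  # marginal likelihood under H0 (up to a constant shared with H1)
  numerator <- (1 + t^2 / nu)^(-(nu + 1) / 2)
  # marginal likelihood under H1: integrate over g, weighted by its
  # inverse-chi-square(1) prior density
  integrand <- function(g) {
    ifelse(g <= 0, 0,
      (1 + N * g)^(-1/2) *
      (1 + t^2 / ((1 + N * g) * nu))^(-(nu + 1) / 2) *
      (2 * pi)^(-1/2) * g^(-3/2) * exp(-1 / (2 * g)))
  }
  denominator <- integrate(integrand, 0, Inf)$value
  numerator / denominator
}

jzs_bf01(t = 2.51, N = 100)  # approx. 0.614
```

This reproduces (up to numerical precision) the value the online calculator returns for t = 2.51, n = 100; the packaged functions are of course the preferred route in practice.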

## Flavors of Bayes factors

When I started to familiarize myself with BFs, I was sometimes confused, as the same number seemed to mean different things in different publications. And indeed, **four types of Bayes factors** can be distinguished. “Under the hood”, all four types convey the same information, but you have to be aware which type has been employed in a specific case.

The first distinction is whether the BF indicates “\(H_0\) over \(H_1\)” (=\(BF_{01}\)), or “\(H_1\) over \(H_0\)” (=\(BF_{10}\)). A \(BF_{01}\) of 2 means “Data are 2 times more likely to occur under \(H_0\) than under \(H_1\)”, while the same situation would be a \(BF_{10}\) of 0.5 (i.e., the reciprocal value, 1/2). Intuitively, I prefer larger values to be “better”, and as I usually would like to have evidence for \(H_1\) instead of \(H_0\), I prefer the \(BF_{10}\).

The second distinction is whether one reports the raw BF or the natural logarithm of the BF. The logarithm has the advantage that the scale above 1 (evidence for \(H_1\)) is identical to the scale below 1 (evidence for \(H_0\)). In the previous example, a \(BF_{10}\) of 2 is equivalent to a \(BF_{01}\) of 0.5. Taking the log of both values leads to \(log(BF_{10})\) = 0.69 and \(log(BF_{01})\) = -0.69: same value, reversed sign. This makes the log(BF) ideal for visualization, as the scale is linear in both directions. The following graphic shows the relationship between raw and log BF:
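This symmetry is easy to verify directly in R:

```r
bf10 <- 2          # evidence for H1
bf01 <- 1 / bf10   # the same evidence, expressed as H0 over H1
log(bf10)          # 0.6931472
log(bf01)          # -0.6931472: same magnitude, reversed sign
```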

As you can see in the Table of Figure 1, different authors use different flavors. Notably, the website calculator gives the “raw-” BF, while the BayesFactor package gives the “log+” BF. (That caused me some headache … To be clear: the help page of ?ttest.tstat clearly states that it computes the “log(e) Bayes factor (against the point null)”. Once you understand the system, everything falls into place; but it took me some time to figure out how to understand and convert the different metrics.)

Figure 2 shows conversion paths of the different BF flavors:

For example, if you enter t = 2.51, n = 100 into the website, you get a JZS Bayes Factor = 0.6140779. The same numbers in the BayesFactor package give a BF of 0.4876335:

```r
> ttest.tstat(t=2.51, n1=100, r=1)
$bf
[1] 0.4876335

# now apply the conversion formula 1/exp(BF)
> 1/exp(0.4876335)
[1] 0.6140779
```

As you can see, after applying the conversion the values are exactly the same.
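These conversions can be wrapped into a small helper that maps any of the four flavors back to a raw \(BF_{10}\) (a sketch; the function and argument names are my own):

```r
# Convert any BF flavor to a raw BF10.
# log_scale: is the input on the log scale ("log+" or "log-")?
# favors_h0: does the input express H0 over H1 (a BF01)?
to_raw_bf10 <- function(bf, log_scale = FALSE, favors_h0 = FALSE) {
  if (log_scale) bf <- exp(bf)
  if (favors_h0) bf <- 1 / bf
  bf
}

to_raw_bf10(0.4876335, log_scale = TRUE)  # BayesFactor output -> approx. 1.6285
to_raw_bf10(0.6140779, favors_h0 = TRUE)  # website output     -> approx. 1.6285
```

Both calls recover the same raw \(BF_{10}\), which is just another way of stating the conversion shown above.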

Related posts: Exploring the robustness of Bayes Factors: A convenient plotting function

### References

Morey, R. D., & Rouder, J. N. (2011). Bayes factor approaches for testing interval null hypotheses. Psychological Methods, 16(4), 406–419. PMID: 21787084. doi:10.1037/a0024377

Morey, R. D., & Rouder, J. N. (2013). BayesFactor: Computation of Bayes factors for common designs. R package version 0.9.4. Retrieved from http://CRAN.R-project.org/package=BayesFactor

Rouder, J. N., Morey, R. D., Speckman, P. L., & Province, J. M. (2012). Default Bayes factors for ANOVA designs. Journal of Mathematical Psychology, 56(5), 356–374. doi:10.1016/j.jmp.2012.08.001

Rouder, J. N., Speckman, P. L., Sun, D., Morey, R. D., & Iverson, G. (2009). Bayesian t tests for accepting and rejecting the null hypothesis. Psychonomic Bulletin & Review, 16(2), 225–237.
