[This article was first published on **The 20% Statistician**, and kindly contributed to R-bloggers.]


*In a previous post, I compared equivalence tests to Bayes factors, and pointed out several benefits of equivalence tests. But a much more logical comparison, and one I have not given enough attention so far, is with the ROPE procedure using Bayesian estimation. I’d like to thank John Kruschke for feedback on a draft of this blog post. Check out his own recent blog comparing ROPE to Bayes factors here.*

When we perform a study, we would like to conclude there is an effect when there is an effect. But it is just as important to be able to conclude there is no effect when there is no effect. I’ve recently published a paper that makes Frequentist equivalence tests (using the two one-sided tests, or TOST, approach) as easy as possible (Lakens, 2017). Equivalence tests allow you to reject the presence of any effect you care about. In Bayesian estimation, one way to argue for the absence of a meaningful effect is the Region of Practical Equivalence (ROPE) procedure (Kruschke, 2014, chapter 12), which is “somewhat analogous to frequentist equivalence testing” (Kruschke & Liddell, 2017).

In the ROPE procedure, the decision rule is: “*if the 95% HDI falls entirely inside the ROPE then we decide to accept the ROPE’d value for practical purposes*” (Kruschke, 2014). Note that the same HDI can also be used to reject the null hypothesis, whereas in Frequentist statistics, even though the confidence interval plays a similar role, you would still perform both a traditional *t*-test and the TOST procedure.
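The quoted decision rule can be sketched in a few lines of code. The original post works in R (where the posterior would come from a model such as Kruschke’s BEST); here is a language-neutral Python sketch in which the posterior samples, their mean and spread, and the ROPE bounds are all illustrative assumptions:

```python
import numpy as np

def hdi(samples, mass=0.95):
    """Shortest interval containing `mass` of the sorted samples."""
    s = np.sort(samples)
    k = int(np.ceil(mass * len(s)))                 # points the interval must cover
    widths = s[k - 1:] - s[:len(s) - k + 1]         # width of every candidate interval
    i = int(np.argmin(widths))                      # index of the narrowest one
    return s[i], s[i + k - 1]

rng = np.random.default_rng(1)
# Stand-in posterior for the mean difference (illustrative, not from a real model)
posterior = rng.normal(0.15, 0.13, 10_000)
rope = (-0.5, 0.5)                                  # region of practical equivalence

lo, hi = hdi(posterior)
# Kruschke's rule: accept the ROPE'd value if the 95% HDI lies entirely in the ROPE
inside_rope = rope[0] < lo and hi < rope[1]
```

The HDI is the *shortest* interval containing 95% of the posterior mass, which is why the sketch scans all candidate intervals rather than simply taking the 2.5th and 97.5th percentiles.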

In the code below, I generate random normally distributed data for two groups (with means of 0 and an SD of 1) and perform both the ROPE procedure and the TOST. The 95% HDI ranges from -0.10 to 0.42, and the 95% CI from -0.11 to 0.41, with mean differences of 0.17 and 0.15, respectively.
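The Frequentist half of that comparison can be sketched as follows. The original analysis used R; this Python version uses an assumed sample size of 100 per group and equivalence bounds of ±0.5, which are illustrative choices rather than the post’s exact settings:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 100
x = rng.normal(0, 1, n)   # both groups drawn from N(0, 1): no true effect
y = rng.normal(0, 1, n)

diff = x.mean() - y.mean()
se = np.sqrt(x.var(ddof=1) / n + y.var(ddof=1) / n)
df = 2 * n - 2            # simplification: equal n and equal variances here

# Traditional two-sided t-test: is the effect statistically different from 0?
p_null = 2 * stats.t.sf(abs(diff / se), df)

# TOST against equivalence bounds of +/- 0.5: two one-sided tests at alpha = .05
low, high = -0.5, 0.5
p_lower = stats.t.sf((diff - low) / se, df)    # H0: effect <= low
p_upper = stats.t.cdf((diff - high) / se, df)  # H0: effect >= high
p_tost = max(p_lower, p_upper)                 # equivalence needs both rejections

# 90% CI: equivalence is declared when it falls entirely inside the bounds
ci90 = (diff - stats.t.ppf(0.95, df) * se,
        diff + stats.t.ppf(0.95, df) * se)
```

By construction, `p_tost < .05` and “the 90% CI falls inside the bounds” are the same decision, mirroring how the HDI-in-ROPE check works on the Bayesian side.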

**95% HDI vs 90% CI**

Kruschke (2014) writes: “*How should we define “reasonably credible”? One way is by saying that any points within the 95% HDI are reasonably credible.*” There is no strong justification for the use of a 95% HDI over a 96% or 93% HDI, except that it mirrors the familiar use of a 95% CI in Frequentist statistics. In Frequentist statistics, the 95% confidence interval is directly related to the 5% alpha level that is commonly deemed acceptable as a maximum Type 1 error rate (even though this alpha level is itself a convention without strong justification).

The TOST procedure, however, relies on a 90% CI: the two one-sided tests are each performed at a 5% alpha level, so the equivalence decision corresponds to the 90% CI falling entirely within the equivalence bounds. The 95% CI instead matches the traditional two-sided *t*-test used to examine whether the observed effect is statistically different from 0, while maintaining a 5% error rate (see also Senn, 2007, section 22.2.4).
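The correspondence between the two one-sided tests and the 90% interval comes down to a shared critical value, which a two-line check makes explicit (the degrees of freedom here are an arbitrary illustrative value):

```python
from scipy import stats

df = 198          # illustrative degrees of freedom
alpha = 0.05

# Cutoff used by each one-sided test in the TOST procedure
crit_one_sided = stats.t.ppf(1 - alpha, df)
# Half-width multiplier of a two-sided (1 - 2*alpha) = 90% CI
crit_90ci = stats.t.ppf(1 - (2 * alpha) / 2, df)
```

Both quantities are the 95th percentile of the *t* distribution, which is why a 90% CI, not a 95% CI, delivers the 5% error rate for TOST.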

**Power analysis**

`powerTOSTtwo.raw(alpha=0.025, statistical_power=0.875, low_eqbound=-0.5, high_eqbound=0.5, sdpooled=1)`
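A rough sketch of the sample-size calculation behind that TOSTER call, using the normal approximation for a two-sample TOST with a true effect of 0 (so this approximates, rather than reproduces, `powerTOSTtwo.raw`’s exact output):

```python
import math
from scipy import stats

alpha, power = 0.025, 0.875
high_eqbound, sd = 0.5, 1.0          # symmetric bounds of +/- 0.5, pooled SD of 1

beta = 1 - power
z_a = stats.norm.ppf(1 - alpha)      # critical z for each one-sided test
z_b = stats.norm.ppf(1 - beta / 2)   # beta split over the two one-sided tests

# Per-group n for a two-sample design, true effect 0, symmetric bounds
n_per_group = math.ceil(2 * (z_a + z_b) ** 2 * sd**2 / high_eqbound**2)
```

With these inputs the approximation lands at 98 participants per group; the exact *t*-based answer from TOSTER can differ by a participant or two.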

**Use of prior information**

**Conclusion**

If you are trained in Frequentist statistics, the TOST procedure will feel familiar, since it builds on the same *t*-test, but ROPE might be a great way to dip your toes in Bayesian waters and explore the many more things you can do with Bayesian posterior distributions.

**References**

Kruschke, J. K. (2013). Bayesian estimation supersedes the *t* test. *Journal of Experimental Psychology: General*, *142*(2), 573–603. https://doi.org/10.1037/a0029146

Kruschke, J. K. (2014). *Doing Bayesian Data Analysis, Second Edition: A Tutorial with R, JAGS, and Stan* (2nd ed.). Boston: Academic Press.

Kruschke, J. K., & Liddell, T. M. (2017). The Bayesian New Statistics: Hypothesis testing, estimation, meta-analysis, and power analysis from a Bayesian perspective. *Psychonomic Bulletin & Review*. https://doi.org/10.3758/s13423-016-1221-4

Lakens, D. (2014). Performing high-powered studies efficiently with sequential analyses. *European Journal of Social Psychology*, *44*(7), 701–710. https://doi.org/10.1002/ejsp.2023

Lakens, D. (2017). Equivalence tests: A practical primer for *t*-tests, correlations, and meta-analyses. *Social Psychological and Personality Science*.

Senn, S. (2007). *Statistical Issues in Drug Development* (2nd ed.). Chichester, England; Hoboken, NJ: John Wiley & Sons.
