My Statistical Power and Significance Testing Visualization now lets you vary effect size, sample size, power, and significance level. There's also a new feature to rescale the plot, and you can pan the visualization by clicking and dragging.

I often get asked how to fit different multilevel models (also known as individual growth models, hierarchical linear models, or linear mixed models) in R. In this guide I have compiled some of the more common and/or useful models (at least common in clinical psychology) and show how to fit them using nlme::lme() and lme4::lmer(). I will cover the common two-level random...
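
To give a flavor of the kind of model the guide covers, here is a minimal sketch of a two-level model with a random intercept and slope, fit with both packages. It uses lme4's built-in sleepstudy data purely for illustration; the dataset and variable names are an assumption, not the guide's own example.

```r
# Sketch: two-level growth model, repeated measures (Days) nested
# within subjects, using lme4's bundled sleepstudy data.
library(lme4)
library(nlme)

# lme4: random intercept and slope for Days, grouped by Subject
fit_lmer <- lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)

# nlme: the same model in lme()'s syntax
fit_lme <- lme(Reaction ~ Days, random = ~ Days | Subject,
               data = sleepstudy)

summary(fit_lmer)
```

The two fits should give essentially the same fixed effects; the packages differ mainly in how they specify the random-effects structure and in what extensions (e.g., residual correlation structures in nlme) they support.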

The double-blinded, placebo-controlled randomized trial has long been held as the gold standard in pharmacological research. Unfortunately, this design is impossible to mimic in clinical psychology. Even if we could, at best, keep the participants blinded to their treatment allocation, it would be rather hard to blind therapists to which therapy they are giving. The...

Here is a new visualization created in the same manner as my Cohen's d visualization. This new visualization is an interactive display of classical null hypothesis significance testing and statistical power. The visualization should work on mobile phones and tablets, but it requires a modern browser that supports SVG. Check...

Earlier this week I read the article “Why Publishing Everything Is More Effective than Selective Publishing of Statistically Significant Results” by Mercal et al. (2014). The authors simulated different meta-analytic scenarios and concluded that publishing everything is more effective for the scientific collective. This got me thinking about...

In this post I will use the theoretical and empirical sampling distribution of Cohen's d to show the expected overestimation due to selective publishing. I will look at the overestimation for various sample sizes when the population effect is 0, 0.2, 0.5, and 0.8. The conclusion is that you should be wary of effect sizes from small samples, and...
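
The mechanism can be sketched with a small simulation (my own illustration, not the post's actual code; the effect size and sample size are assumptions): simulate many two-group studies, compute Cohen's d in each, and compare the average d across all studies with the average among only the "published" (significant) ones.

```r
# Sketch: overestimation of Cohen's d under selective publishing.
set.seed(1)
delta <- 0.2   # true population effect
n     <- 20    # per-group sample size

sims <- replicate(5000, {
  x <- rnorm(n, mean = delta)
  y <- rnorm(n)
  p <- t.test(x, y)$p.value
  # Cohen's d with the pooled SD (equal group sizes)
  d <- (mean(x) - mean(y)) / sqrt((var(x) + var(y)) / 2)
  c(p = p, d = d)
})

mean(sims["d", ])                     # all studies: close to delta
mean(sims["d", sims["p", ] < 0.05])   # significant only: much larger
```

With a small n, only studies whose sample d happens to be large reach significance, so conditioning publication on p < 0.05 inflates the average reported effect well above the true 0.2.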
