# Blog Archives

## Introducing the p-hacker app: Train your expert p-hacking skills

June 21, 2016

Start the p-hacker app! My dear fellow scientists! “If you torture the data long enough, it will confess.” This aphorism, attributed to Ronald Coase, has sometimes been used in a derogatory manner, as if it were wrong to do creative data analysis. In fact, the art of …
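
To make the idea concrete, here is a minimal, hypothetical simulation (not taken from the post or the app) of one common p-hacking strategy: measuring several dependent variables under the null and reporting only the smallest p-value, which inflates the false-positive rate well beyond the nominal 5%. All numbers below are made-up assumptions.

```r
# Hypothetical sketch: false-positive rate when only the smallest of several
# p-values is reported (all effects are truly null, nominal alpha = .05).
set.seed(1)
n_sims <- 5000   # number of simulated "studies"
n      <- 20     # participants per group
n_dvs  <- 5      # dependent variables tested per study

min_p <- replicate(n_sims, {
  ps <- replicate(n_dvs, t.test(rnorm(n), rnorm(n))$p.value)
  min(ps)
})

mean(min_p < .05)  # empirical false-positive rate: well above the nominal .05
```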

## Optional stopping does not bias parameter estimates (if done correctly)

April 15, 2016

tl;dr: Optional stopping does not bias parameter estimates from a frequentist point of view if all studies are reported (i.e., no publication bias exists) and effect sizes are appropriately meta-analytically weighted. Several recent discussions on the Psychological Methods Facebook group have centered on the question of whether an optional stopping procedure leads to biased effect size estimates (see …
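
As a rough illustration of the tl;dr (a hedged sketch, not the post's own simulation): generate studies with a simple optional stopping rule, keep all of them, and weight the estimates by sample size as a crude stand-in for inverse-variance weighting. The true effect, stopping rule, and sample sizes below are illustrative assumptions.

```r
# Hypothetical sketch: optional stopping with full reporting and n-weighting.
set.seed(1)
true_d <- 0.4
one_study <- function(n_min = 20, n_max = 100, step = 10) {
  x <- rnorm(n_min, mean = true_d)           # one-sample design, true effect = 0.4
  while (t.test(x)$p.value >= .05 && length(x) < n_max) {
    x <- c(x, rnorm(step, mean = true_d))    # stop once p < .05, else keep adding data
  }
  c(d = mean(x) / sd(x), n = length(x))
}
studies <- t(replicate(2000, one_study()))
weighted.mean(studies[, "d"], w = studies[, "n"])  # close to 0.4 when all studies count
```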

## What’s the probability that a significant p-value indicates a true effect?

November 3, 2015

If the p-value is < .05, then the probability of falsely rejecting the null hypothesis is < 5%, right? That means that at most 5% of all significant results are false positives (that’s what we control with the α rate). Well, no. As you will see in a minute, the “false discovery rate” (a.k.a. false-positive rate) …
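
The point can be made with two lines of arithmetic: the false discovery rate depends not only on α but also on power and on the prior probability that a tested effect is real. The power and prior values below are illustrative assumptions, not figures from the post.

```r
# P(H0 true | p < .05): the share of significant results that are false positives.
alpha      <- .05   # significance level
power      <- .35   # assumed average power
prior_true <- .30   # assumed prior probability that the tested effect exists

fdr <- (alpha * (1 - prior_true)) /
       (alpha * (1 - prior_true) + power * prior_true)
fdr  # 0.25: under these assumptions, one in four significant results is a false positive
```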

## A Compendium of Clean Graphs in R

March 12, 2015

Every data analyst knows that a good graph is worth a thousand words, and perhaps a hundred tables. But how should one create a good, clean graph? In R, this task is anything but …
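
As a flavor of what a “clean” base-R graph involves (a minimal sketch in the spirit of the compendium, not code taken from it): strip the default box, use open axes, soften the points, and label directly. The variable names and data are made up.

```r
# Minimal sketch of a cleaner base-R scatterplot (illustrative only).
set.seed(1)
x <- rnorm(50)
y <- 0.6 * x + rnorm(50)

par(mar = c(4, 4, 1, 1))                  # trim unused outer whitespace
plot(x, y,
     pch  = 19, col = rgb(0, 0, 0, 0.5),  # semi-transparent filled points
     axes = FALSE, xlab = "Predictor", ylab = "Outcome")
axis(1); axis(2, las = 1)                 # open axes instead of the default box
abline(lm(y ~ x), lwd = 2)                # add the regression line
```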

## What does a Bayes factor feel like?

January 29, 2015

A Bayes factor (BF) is a statistical index that quantifies the evidence for a hypothesis relative to an alternative hypothesis (for introductions to Bayes factors, see here, here, or here). Although the BF is a continuous measure of evidence, humans love verbal labels, categories, and benchmarks. Labels give interpretations of the objective index – and …
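
For reference, one widely cited set of verbal benchmarks (Jeffreys-style labels as popularized by Lee & Wagenmakers) can be coded up directly. These thresholds are the conventional ones from that literature, not labels defined in the post itself.

```r
# Conventional verbal labels for a Bayes factor BF10 (Jeffreys-style benchmarks).
bf_label <- function(bf) {
  cut(bf,
      breaks = c(0, 1/100, 1/30, 1/10, 1/3, 1, 3, 10, 30, 100, Inf),
      labels = c("extreme for H0", "very strong for H0", "strong for H0",
                 "moderate for H0", "anecdotal for H0", "anecdotal for H1",
                 "moderate for H1", "strong for H1", "very strong for H1",
                 "extreme for H1"))
}
bf_label(c(0.2, 2.5, 14, 250))
# moderate for H0, anecdotal for H1, strong for H1, extreme for H1
```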

## Reanalyzing the Schnall/Johnson “cleanliness” data sets: New insights from Bayesian and robust approaches

June 2, 2014

I want to present a re-analysis of the raw data from two studies that investigated whether physical cleanliness reduces the severity of moral judgments – from the original study (n = 40; Schnall, Benton, & Harvey, 2008) and from a direct replication (n = 208; Johnson, Cheung, & Donnellan, 2014). Both data sets are provided …

## A comment on “We cannot afford to study effect size in the lab” from the DataColada blog

May 6, 2014

In a recent post on the DataColada blog, Uri Simonsohn wrote about “We cannot afford to study effect size in the lab”. The central message is: if we want accurate effect size (ES) estimates, we need large sample sizes (he suggests four-digit n’s). As this is hardly possible in the lab, we have to use …
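
A quick back-of-the-envelope computation shows why four-digit samples come up: the 95% confidence interval of a correlation (via the Fisher z transformation) shrinks only with the square root of n. The observed correlation and the sample sizes below are illustrative, not values from either post.

```r
# Width of the 95% CI for an observed correlation of r = .30 at various sample sizes.
ci_width <- function(n, r) {
  z  <- atanh(r)                       # Fisher z transform of r
  se <- 1 / sqrt(n - 3)                # standard error of z
  diff(tanh(z + c(-1, 1) * qnorm(.975) * se))
}
sapply(c(50, 200, 1000, 4000), ci_width, r = .30)
# approx. 0.51, 0.25, 0.11, 0.06 – halving the CI width needs roughly 4x the sample size
```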

## Interactive exploration of a prior’s impact

February 21, 2014

Probably the most frequent criticism of Bayesian statistics goes something like this: “It’s all subjective – with the ‘right’ prior, you can get any result you want.” To address this criticism, it has been suggested to perform a sensitivity analysis (or robustness analysis) that demonstrates how the choice of priors affects the conclusions drawn …
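
One simple way to run such a sensitivity analysis (a generic sketch, not the post's interactive demo, and with made-up data): compute the same posterior quantity under several priors and compare. Here, a Beta-Binomial example with three priors chosen purely for illustration.

```r
# Sensitivity analysis sketch: posterior for a proportion under several Beta priors.
# Data are made up: 18 successes in 25 trials.
k <- 18; n <- 25
priors <- list(flat       = c(a = 1, b = 1),
               sceptical  = c(a = 5, b = 5),
               optimistic = c(a = 8, b = 2))
sapply(priors, function(p) {
  a_post <- p["a"] + k
  b_post <- p["b"] + n - k
  c(post_mean = unname(a_post / (a_post + b_post)),
    lower95   = unname(qbeta(.025, a_post, b_post)),
    upper95   = unname(qbeta(.975, a_post, b_post)))
})
# The columns differ somewhat, but all three priors support similar conclusions here.
```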

## A short taxonomy of Bayes factors

January 21, 2014

I am starting to familiarize myself with Bayesian statistics. In this post, I’ll share some insights I had concerning Bayes factors (BF). What are Bayes factors? Bayes factors provide a numerical value that quantifies how well a hypothesis predicts the empirical data relative to a competing hypothesis. For example, if the BF is 4, this …
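
A BF10 of 4, for instance, means the data are four times more likely under H1 than under H0. To make the “how well does each hypothesis predict the data” reading concrete, here is a hypothetical example (not one from the post): for 7 heads in 10 coin flips, compare a point null (θ = .5) against an alternative that spreads its predictions over all values of θ.

```r
# Bayes factor sketch: 7 heads in 10 flips, H0: theta = .5 vs. H1: theta ~ Uniform(0, 1).
k <- 7; n <- 10
m_H0 <- dbinom(k, n, prob = 0.5)                                    # likelihood under H0
m_H1 <- integrate(function(theta) dbinom(k, n, theta), 0, 1)$value  # marginal likelihood under H1
BF10 <- m_H1 / m_H0
BF10  # about 0.78: these data are predicted slightly better by H0 than by H1
```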

## New robust statistical functions in WRS package – Guest post by Rand Wilcox

September 16, 2013

Today a new version (0.23.1) of the WRS package (Wilcox’s Robust Statistics) has been released. This package is the companion to his comprehensive book on robust statistics, “Introduction to Robust Estimation and Hypothesis Testing” (Amazon link de/us). For a fail-safe installation of the package, follow these instructions. In a guest post, Rand Wilcox describes …
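
The installation instructions the post links to are not reproduced here. Since WRS is not distributed via CRAN, one commonly documented route (an assumption on my part; check it against the linked instructions) is to install it from R-Forge.

```r
# One commonly documented installation route for WRS (an assumption, not the
# post's linked instructions): WRS has been distributed through R-Forge rather than CRAN.
install.packages("WRS", repos = "http://R-Forge.R-project.org", type = "source")
library(WRS)  # load the package after installation
```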