Blog Archives

Introducing the p-hacker app: Train your expert p-hacking skills

June 21, 2016

Start the p-hacker app! My dear fellow scientists! “If you torture the data long enough, it will confess.” This aphorism, attributed to Ronald Coase, has sometimes been used in a derogatory manner, as if it were wrong to do creative data analysis. In fact, the art of

Read more »

Optional stopping does not bias parameter estimates (if done correctly)

April 15, 2016

tl;dr: Optional stopping does not bias parameter estimates from a frequentist point of view if all studies are reported (i.e., no publication bias exists) and effect sizes are appropriately meta-analytically weighted. Several recent discussions on the Psychological Methods Facebook group centered on the question of whether an optional stopping procedure leads to biased effect size estimates (see

Read more »
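
The claim in this teaser can be checked with a few lines of R. The following is a minimal sketch, not taken from the post; the true effect, the look schedule, and the stopping rule are illustrative assumptions. It simulates many two-group studies with optional stopping and compares the plain average of the study effect sizes with a sample-size-weighted average.

# Minimal sketch (not from the original post): simulate optional stopping and
# compare a naive mean of study effect sizes with an n-weighted mean.
# True effect, look schedule, and stopping rule are illustrative assumptions.
set.seed(1)

true_d <- 0.3                        # assumed true standardized mean difference
looks  <- seq(20, 100, by = 20)      # interim looks at these per-group sizes
n_sims <- 2000

one_study <- function() {
  x <- rnorm(max(looks), mean = true_d)   # treatment group (control mean fixed at 0)
  y <- rnorm(max(looks), mean = 0)
  for (n in looks) {
    p <- t.test(x[1:n], y[1:n])$p.value
    if (p < .05 || n == max(looks)) {      # stop at significance or at the final look
      d <- (mean(x[1:n]) - mean(y[1:n])) /
           sqrt((var(x[1:n]) + var(y[1:n])) / 2)
      return(c(d = d, n = n))
    }
  }
}

res <- t(replicate(n_sims, one_study()))

mean(res[, "d"])                           # naive mean across studies: tends to overshoot true_d
weighted.mean(res[, "d"], w = res[, "n"])  # n-weighted mean: should land much closer to true_d

Weighting by sample size here is a crude stand-in for inverse-variance meta-analytic weighting; the post itself makes the argument more carefully.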

What’s the probability that a significant p-value indicates a true effect?

November 3, 2015

If the p-value is < .05, then the probability of falsely rejecting the null hypothesis is < 5%, right? That means at most 5% of all significant results are false positives (that’s what we control with the α rate). Well, no. As you will see in a minute, the “false discovery rate” (a.k.a. false-positive rate),

Read more »
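
To make the teaser’s point concrete, here is a back-of-the-envelope calculation in R. The prior probability of a true effect and the average power are illustrative assumptions, not numbers from the post.

# Minimal sketch (numbers are illustrative assumptions, not from the post):
# the false discovery rate among significant results depends on the prior
# probability of a true effect and on power, not only on alpha.
alpha      <- .05   # significance level
power      <- .35   # assumed average power of the studies
prior_true <- .30   # assumed proportion of tested hypotheses that are true

false_pos <- alpha * (1 - prior_true)   # significant results obtained when H0 is true
true_pos  <- power * prior_true         # significant results obtained when H1 is true

fdr <- false_pos / (false_pos + true_pos)
fdr   # = 0.25 under these assumptions: far more than 5% of significant results are false positives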

A Compendium of Clean Graphs in R

March 12, 2015

Every data analyst knows that a good graph is worth a thousand words, and perhaps a hundred tables. But how should one create a good, clean graph? In R, this task is anything but

Read more »
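
As a flavour of what such advice can look like, here is a small base-R sketch of a pared-down scatterplot. The specific style choices (no box, light ticks, transparent points) are my own illustration, not taken from the compendium.

# A minimal base-R sketch in the spirit of a "clean" graph (illustrative only).
set.seed(42)
x <- rnorm(50)
y <- 0.6 * x + rnorm(50, sd = 0.8)

par(mar = c(4.5, 4.5, 1, 1))                 # trim whitespace around the plot
plot(x, y,
     pch  = 19, col = rgb(0, 0, 0, 0.6),     # filled, slightly transparent points
     axes = FALSE, xlab = "Predictor", ylab = "Outcome")
axis(1, lwd = 0, lwd.ticks = 1)              # light axes: ticks without a heavy box
axis(2, lwd = 0, lwd.ticks = 1, las = 1)     # horizontal y-axis labels
abline(lm(y ~ x), lwd = 2)                   # add the regression line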

What does a Bayes factor feel like?

January 29, 2015

A Bayes factor (BF) is a statistical index that quantifies the evidence for a hypothesis, compared to an alternative hypothesis (for introductions to Bayes factors, see here, here or here). Although the BF is a continuous measure of evidence, humans love verbal labels, categories, and benchmarks. Labels give interpretations of the objective index – and

Read more »
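
For illustration, one widely used benchmark scheme (anecdotal / moderate / strong / very strong / extreme, with cut-offs at BF = 3, 10, 30, and 100) can be wrapped in a small R helper. These cut-offs are that common convention, not necessarily the labels discussed in the post.

# Illustrative helper: map a Bayes factor (evidence for H1 over H0) to verbal
# labels using one conventional set of cut-offs (shown only as an example).
bf_label <- function(bf) {
  stopifnot(bf > 0)
  if (bf < 1) return(paste(bf_label(1 / bf), "in favour of H0"))
  labels <- c("anecdotal", "moderate", "strong", "very strong", "extreme")
  labels[findInterval(bf, c(1, 3, 10, 30, 100))]
}

bf_label(4)    # "moderate"
bf_label(0.2)  # "moderate in favour of H0" (since 1 / 0.2 = 5)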

Reanalyzing the Schnall/Johnson “cleanliness” data sets: New insights from Bayesian and robust approaches

June 2, 2014

I want to present a re-analysis of the raw data from two studies that investigated whether physical cleanliness reduces the severity of moral judgments – from the original study (n = 40; Schnall, Benton, & Harvey, 2008), and from a direct replication (n = 208; Johnson, Cheung, & Donnellan, 2014). Both data sets are provided

Read more »

A comment on “We cannot afford to study effect size in the lab” from the DataColada blog

May 6, 2014

In a recent post on the DataColada blog, Uri Simonsohn wrote about “We cannot afford to study effect size in the lab”. The central message is: if we want accurate effect size (ES) estimates, we need large sample sizes (he suggests four-digit n’s). As this is hardly possible in the lab, we have to use

Read more »
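
A quick way to see why four-digit samples come up is to look at how slowly the confidence interval for a correlation narrows with n. The sketch below uses the Fisher-z approximation; the target correlation of r = .30 is an illustrative assumption, not a number from either post.

# Sketch: approximate 95% CI width for a correlation of r = .30 at different
# sample sizes (Fisher-z approximation); numbers are illustrative.
ci_width_r <- function(r, n) {
  z  <- atanh(r)                       # Fisher z-transform
  se <- 1 / sqrt(n - 3)
  tanh(z + 1.96 * se) - tanh(z - 1.96 * se)
}

n <- c(50, 100, 250, 1000, 4000)
round(data.frame(n = n, ci_width = ci_width_r(.30, n)), 2)
# At n = 1000 the interval is still about .11 wide; it takes roughly n = 4000
# to shrink it to about .06 (i.e., r plus or minus roughly .03).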

Interactive exploration of a prior’s impact

February 21, 2014

Probably the most frequent criticism of Bayesian statistics sounds something like: “It’s all subjective – with the ‘right’ prior, you can get any result you want.” To address this criticism, it has been suggested to perform a sensitivity analysis (or robustness analysis) that demonstrates how the choice of priors affects the conclusions drawn

Read more »
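
A sensitivity analysis can be sketched in a few lines of R for the simplest conjugate case. The data and the three priors below are illustrative assumptions, chosen only to show the idea of varying the prior and watching the posterior.

# Minimal sensitivity-analysis sketch (beta-binomial; data and priors are
# illustrative assumptions): how much does the choice of prior move the posterior?
k <- 14; n <- 20                       # assumed data: 14 successes in 20 trials

priors <- data.frame(
  label = c("flat Beta(1,1)", "sceptical Beta(10,10)", "optimistic Beta(8,2)"),
  a = c(1, 10, 8),
  b = c(1, 10, 2)
)

priors$posterior_mean <- (priors$a + k) / (priors$a + priors$b + n)
priors
# The posterior means differ less than the prior means do, and the gap shrinks
# further as n grows; plotting the full posteriors makes this visual.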

A short taxonomy of Bayes factors

January 21, 2014

I am starting to familiarize myself with Bayesian statistics. In this post I’ll show some insights I had concerning Bayes factors (BF). What are Bayes factors? Bayes factors provide a numerical value that quantifies how well a hypothesis predicts the empirical data relative to a competing hypothesis. For example, if the BF is 4, this

Read more »
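
To make the “how well does each hypothesis predict the data” reading concrete, here is a toy R calculation of a Bayes factor for binomial data under two point hypotheses. The data and the hypothesised probabilities are illustrative assumptions; real applications usually compare composite hypotheses by integrating over a prior.

# Illustrative sketch: a Bayes factor as the ratio of how well two (here: point)
# hypotheses predict the observed data. Data and hypothesised values are assumptions.
k <- 7; n <- 10                       # observed: 7 successes in 10 trials

p_H1 <- dbinom(k, n, prob = 0.7)      # H1: success probability is .7
p_H0 <- dbinom(k, n, prob = 0.5)      # H0: success probability is .5

bf_10 <- p_H1 / p_H0
bf_10   # about 2.3: the data are roughly 2.3 times more likely under H1 than under H0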

New robust statistical functions in WRS package – Guest post by Rand Wilcox

September 16, 2013

Today a new version (0.23.1) of the WRS package (Wilcox’ Robust Statistics) has been released. This package is the companion to his rather exhaustive book on robust statistics, “Introduction to Robust Estimation and Hypothesis Testing” (Amazon Link de/us). For a fail-safe installation of the package, follow these instructions. In this guest post, Rand Wilcox describes

Read more »

