# Articles by Daniel Lakens

### ROPE and Equivalence Testing: Practically Equivalent?

February 12, 2017 |

In a previous post, I compared equivalence tests to Bayes factors, and pointed out several benefits of equivalence tests. But a much more logical comparison, and one I have not given enough attention so far, is the ROPE procedure using Bayesian estimation. I’d like to thank John Kruschke ...

### Why Type 1 errors are more important than Type 2 errors (if you care about evidence)

December 18, 2016 |

### TOST equivalence testing R package (TOSTER) and spreadsheet

December 9, 2016 |

### Why Within-Subject Designs Require Fewer Participants than Between-Subject Designs

November 12, 2016 |

### Dance of the Bayes factors

July 18, 2016 |

You might have seen the ‘Dance of the p-values’ video by Geoff Cumming (if not, watch it here). Because p-values and the default Bayes factors (Rouder, Speckman, Sun, Morey, & Iverson, 2009) are both calculated directly from t-values and sample sizes, we might expect there is also a Dance of the Bayes ...

### Absence of evidence is not evidence of absence: Testing for equivalence

May 20, 2016 |

See the follow up post where I introduce my R package and spreadsheet TOSTER to perform TOST equivalence tests, and link to a practical primer on this topic. When you find p > 0.05, you did not observe surprising data, assuming there is no true effect. You can often read in the ...
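
The TOST logic itself is compact: pick equivalence bounds, run two one-sided tests against them, and take the larger of the two p-values. A minimal one-sample sketch in Python (a hypothetical illustration with made-up data, not the TOSTER code, which is in R):

```python
import numpy as np
from scipy import stats

def tost_one_sample(x, low, high):
    """Two one-sided tests: can we reject means outside (low, high)?"""
    n = len(x)
    m = np.mean(x)
    se = np.std(x, ddof=1) / np.sqrt(n)
    # H0: mean <= low, rejected when the mean is clearly above the lower bound
    p_lower = stats.t.sf((m - low) / se, df=n - 1)
    # H0: mean >= high, rejected when the mean is clearly below the upper bound
    p_upper = stats.t.cdf((m - high) / se, df=n - 1)
    # Equivalence is only declared if BOTH one-sided tests reject
    return max(p_lower, p_upper)

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, 100)        # simulated data with a true effect of ~0
p = tost_one_sample(x, -0.5, 0.5)    # hypothetical equivalence bounds
```

With a true effect near zero and n = 100, both one-sided tests reject and the TOST p-value is small: the data are statistically equivalent to zero within the chosen bounds, which is the positive claim a plain p > 0.05 cannot make.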

### One-sided F-tests and halving p-values

April 7, 2016 |

After my previous post about one-sided tests, some people wondered about two-sided F-tests. And then Dr R recently tweeted: “No, there is no such thing as a one-tailed p-value for an F-test. reported F(1,40)=3.72, p=.03; correct p=.06 use t-test for one-tailed.” — R-Index (@R__INDEX) April 5, 2016. I thought it would be ...
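
Since an F-test with one numerator degree of freedom is the square of a t-test, the relation between the p-values can be checked directly with the tweet’s own numbers (a quick scipy sketch, not the post’s code):

```python
from math import sqrt, isclose
from scipy import stats

F, df1, df2 = 3.72, 1, 40

# The F-test p-value corresponds to a two-sided t-test
p_f = stats.f.sf(F, df1, df2)            # ~.06

t = sqrt(F)                              # t = sqrt(F) when df1 = 1
p_t_two = 2 * stats.t.sf(t, df2)         # identical to p_f
p_t_one = stats.t.sf(t, df2)             # ~.03, the directional test
```

The F-based p-value and the two-sided t-based p-value are mathematically identical; only the t-test offers a directional (one-sided) version, which is exactly the tweet’s point.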

### The difference between a confidence interval and a capture percentage

March 1, 2016 |

I was reworking a lecture on confidence intervals I’ll be teaching, when I came across a perfect real life example of a common error people make when interpreting confidence intervals. I hope everyone (Harvard Professors, Science editors, my bachelo...
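
The distinction simulates nicely (a hypothetical sketch, not the lecture’s code): in the long run, 95% of confidence intervals contain the true mean, but a single 95% CI captures the mean of a future replication sample only about 83% of the time.

```python
import numpy as np

rng = np.random.default_rng(42)
n, sims = 50, 50_000
mu, sigma = 100.0, 15.0                  # hypothetical population values
half = 1.96 * sigma / np.sqrt(n)         # half-width of a known-sigma 95% CI

m1 = rng.normal(mu, sigma, (sims, n)).mean(axis=1)   # original sample means
m2 = rng.normal(mu, sigma, (sims, n)).mean(axis=1)   # replication sample means

coverage = np.mean(np.abs(m1 - mu) < half)   # ~0.95: CIs containing mu
capture = np.mean(np.abs(m2 - m1) < half)    # ~0.83: CIs capturing a new mean
```

The capture percentage is lower because a future sample mean varies too: the difference of two sample means has twice the variance of one, so the probability is P(|Z| < 1.96/√2) ≈ 0.834, not 0.95.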

### The correlation between original and replication effect sizes might be spurious

January 29, 2016 |

In the reproducibility project, original effect sizes correlated r = 0.51 with the effect sizes of replications. Some researchers find this hopeful. “Less-popularised findings from the ‘estimating the reproducibility’ paper @Eli_Finkel #SPSP2016 pic.twitter.com/8CFJMbRhi8” — Jessie Sun (@JessieSunPsych) January 28, 2016. I don’t think we should be interpreting this correlation at ...

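
One way a sizable correlation can arise without telling us much about individual studies (a hypothetical simulation, not the post’s analysis): when the literature is a mixture of true and null effects, original and replication estimates correlate simply because both track the same underlying mixture.

```python
import numpy as np

rng = np.random.default_rng(0)
studies, n = 5_000, 50                     # hypothetical: n per group
true_d = rng.choice([0.0, 0.5], studies)   # mix of null and real effects
se = np.sqrt(2 / n)                        # approximate SE of Cohen's d

# Original and replication both estimate the same true effect, with noise
d_orig = true_d + rng.normal(0, se, studies)
d_rep = true_d + rng.normal(0, se, studies)

r_all = np.corrcoef(d_orig, d_rep)[0, 1]   # sizable, driven by the mixture
null_only = true_d == 0
r_null = np.corrcoef(d_orig[null_only], d_rep[null_only])[0, 1]  # ~0
```

Across the mixed set the correlation is substantial, yet within the null-effect subset it is essentially zero: the overall correlation reflects between-study heterogeneity, not a relationship between any original result and its replication.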
### Power analysis for default Bayesian t-tests

January 14, 2016 |

One important benefit of Bayesian statistics is that you can provide relative support for the null hypothesis. When the null hypothesis is true, p-values will forever randomly wander between 0 and 1, but a Bayes factor has consistency (Rouder, Speckman, Sun, Morey, & Iverson, 2009), which means that as the sample size increases, the ...
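
That consistency can be seen numerically with the JZS Bayes factor integral from Rouder et al. (2009). A sketch for a one-sample t-test with a Cauchy(0, 1) prior on the effect size (note this scale differs from the √2/2 default in the BayesFactor R package; purely illustrative):

```python
import numpy as np
from scipy import integrate

def jzs_bf01(t, n):
    """BF01 for a one-sample t-test, Cauchy(0, 1) prior on delta (JZS)."""
    v = n - 1  # degrees of freedom
    # Marginal likelihood under the point null
    null_like = (1 + t**2 / v) ** (-(v + 1) / 2)

    def integrand(g):
        # Likelihood under H1, averaged over g ~ InverseGamma(1/2, 1/2)
        a = 1 + n * g
        return (a ** -0.5
                * (1 + t**2 / (a * v)) ** (-(v + 1) / 2)
                * (2 * np.pi) ** -0.5 * g ** -1.5 * np.exp(-1 / (2 * g)))

    alt_like, _ = integrate.quad(integrand, 0, np.inf)
    return null_like / alt_like

bf_small = jzs_bf01(t=0.0, n=20)    # support for the null with n = 20
bf_large = jzs_bf01(t=0.0, n=200)   # same t, larger n: stronger support
```

With t = 0, BF01 exceeds 1 and keeps growing as n increases: identical "no effect" data become ever stronger relative evidence for the null in larger samples, which is exactly the behavior p-values lack.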

### Error Control in Exploratory ANOVA’s: The How and the Why

January 1, 2016 |

In a 2X2X2 design, there are three main effects, three two-way interactions, and one three-way interaction to test. That’s 7 statistical tests. The probability of making at least one Type 1 error in a single ANOVA is 1 − (0.95)^7 ≈ 30%. There are earlier blog posts on this, but my eyes were not ...
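
The arithmetic behind that 30% (a minimal check, assuming the seven tests are independent):

```python
alpha, n_tests = 0.05, 7   # 3 main effects + 3 two-way + 1 three-way

# Probability of at least one Type 1 error across the 7 tests:
# the complement of making no error in any of them
familywise = 1 - (1 - alpha) ** n_tests   # ≈ 0.30
```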

### Plotting Scopus article level citation data in R

December 13, 2015 |