# Type I error rates in two-sample t-test by simulation

[This article was first published on **Heuristic Andrew**, and kindly contributed to R-bloggers.]


What do you do when analyzing data is fun, but you don’t have any new data? You make it up.

This simulation tests the type I error rate of the two-sample t-test in R and SAS. It demonstrates efficient methods for simulation, and it reminds the reader not to take the result of any single hypothesis test as gospel truth: there is always a risk of a false positive (or false negative), so determining the truth requires more than one research study.

A type I error is a false positive. That is, it happens when a hypothesis test rejects the null hypothesis when in fact the null hypothesis is true. In this simulation the null hypothesis is true by design, though in the real world we cannot be sure the null hypothesis is true. This is why we write that we “fail to reject the null hypothesis” rather than “we accept it.” If the hypothesis tests in this simulation made no errors, we would never reject the null hypothesis; but by design we expect to reject it at a rate close to *alpha*, the significance level. The de facto standard for alpha is 0.05.
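To make that expectation concrete: when the null is always true, each test rejects independently with probability alpha, so the count of false positives across many tests follows a binomial distribution. A minimal sketch (the 1,000-test figure matches the simulation size used below; variable names here are mine):

```r
alpha <- 0.05    # significance level
n.tests <- 1000  # number of independent hypothesis tests

# Expected number of false positives when the null is always true
expected <- n.tests * alpha   # 50

# A central 95% range for the false-positive count, from the
# Binomial(n.tests, alpha) distribution
likely.range <- qbinom(c(0.025, 0.975), size = n.tests, prob = alpha)
```

So even a perfectly calibrated test should produce about 50 false positives in 1,000 runs, and counts within roughly ±14 of 50 are unremarkable.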

## R

First, we run a simulation in R by repeatedly comparing randomly-generated sets of normally-distributed values using the two-sample t-test. Notice the simulation is vectorized: there are no “for” loops that clutter the code and slow the simulation.

```r
# type I error
alpha.p <- 0.05
# number of simulations
n.simulations <- 1000
# number of observations in each simulation
n.obs <- 100
# a vector of test results
type.one.error <- replicate(n.simulations,
  t.test(rnorm(n.obs), rnorm(n.obs), var.equal = TRUE)$p.value)
```

## SAS

Likewise, here is the equivalent code in SAS. Notice the simulation is not implemented as a slow SAS macro loop; instead, it uses the BY statement in PROC TTEST to run all of the tests in a single call.

```sas
/* Create a data set with 1000 simulations.
   Each simulation has 100 observations in each of two groups. */
data normal;
    length simulation 4 i 3; /* save space and time */
    do simulation = 1 to 1000;
        do i = 1 to 100;
            group = 'A';
            /* The values are normally distributed */
            x = rand('normal');
            output;
            group = 'B';
            x = rand('normal');
            output;
        end;
    end;
run;

/* Run the two-sample t-test once for each simulation,
   and output the results to a data set called ttests. */
ods _all_ close;
ods output ttests=ttests;
proc ttest plots=none data=normal;
    by simulation;
    class group;
    var x;
run;

data ttests;
    set ttests;
    /* Limit the rows to the equal-variance test */
    if variances = 'Equal';
    /* Define the error as a boolean */
    type_one_error = probt < 0.05;
run;
```

## Sawtooth

Did you notice the sawtooth pattern in the error rate? A false positive is relatively rare, and each one causes an upward spike in the cumulative error rate. Then, for every subsequent simulation without a false positive, the rate decays steadily, because the numerator stays fixed while the denominator (the running count of simulations) keeps growing.
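The sawtooth plot itself is not shown here, but it can be reproduced by tracking the cumulative error rate. This is a sketch under the same setup as the R simulation above (the cumulative-rate calculation and variable names are mine, not necessarily the original plotting code):

```r
alpha.p <- 0.05
n.simulations <- 1000
n.obs <- 100

# TRUE wherever the test falsely rejects the true null
type.one.error <- replicate(n.simulations,
  t.test(rnorm(n.obs), rnorm(n.obs), var.equal = TRUE)$p.value < alpha.p)

# Cumulative error rate after each simulation: the numerator jumps by 1
# on each false positive (the spike), then the rate decays as the
# integer denominator grows (the downward slope of each tooth).
cumulative.rate <- cumsum(type.one.error) / seq_len(n.simulations)

plot(cumulative.rate, type = "l",
     xlab = "simulation", ylab = "cumulative type I error rate")
abline(h = alpha.p, lty = 2)
```

As the number of simulations grows, the teeth shrink and the cumulative rate settles near the dashed alpha line.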

## Conclusion

This article was developed on Ubuntu 16.04 with R 3.4 and Windows 7 with SAS 9.4.

See also the related article: “Type I error rates in test of normality by simulation.”


