[This article was first published on Heuristic Andrew, and kindly contributed to R-bloggers].

What do you do when analyzing data is fun, but you don’t have any new data? You make it up.

This simulation tests the type I error rate of the two-sample t-test in R and SAS. It demonstrates efficient methods for simulation, and it reminds the reader not to take the result of any single hypothesis test as gospel truth. That is, there is always a risk of a false positive (or false negative), so determining truth requires more than one research study.

A type I error is a false positive. That is, it happens when a hypothesis test rejects the null hypothesis when the null hypothesis is in fact true. In this simulation the null hypothesis is true by design, though in the real world we cannot be sure the null hypothesis is true. This is why we write that we “fail to reject the null hypothesis” rather than “we accept it.” If the hypothesis tests in this simulation made no errors, we would never reject the null hypothesis, but by design we expect to reject it at a rate of alpha, the significance level. The de facto standard for alpha is 0.05.
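To make alpha concrete, here is a quick back-of-the-envelope sketch (not from the original article): with a true null hypothesis and alpha = 0.05, a batch of 1,000 independent tests is expected to produce about 50 false positives, give or take a binomial standard deviation of about 7.

```r
# Expected number of type I errors when the null hypothesis is true
alpha <- 0.05
n.tests <- 1000

expected.errors <- alpha * n.tests               # 50
sd.errors <- sqrt(n.tests * alpha * (1 - alpha)) # about 6.9
```

So observing, say, 40 to 60 rejections out of 1,000 tests of a true null would be unsurprising.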

## R

First, we run a simulation in R by repeatedly comparing randomly-generated sets of normally-distributed values using the two-sample t-test. Notice the simulation is vectorized: there are no “for” loops that clutter the code and slow the simulation.
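For contrast, the loop-based version that `replicate` avoids might look like this (a sketch for comparison, not code from the article):

```r
# Hypothetical "for" loop version: same result, but with the clutter of
# pre-allocating a result vector and indexing into it
n.simulations <- 1000
n.obs <- 100

type.one.error <- numeric(n.simulations)
for (i in seq_len(n.simulations)) {
  type.one.error[i] <- t.test(rnorm(n.obs), rnorm(n.obs),
                              var.equal = TRUE)$p.value
}
```

The vectorized `replicate` call below collapses all of this into a single expression.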

# type I error
alpha.p <- 0.05

# number of simulations
n.simulations <- 1000

# number of observations in each simulation
n.obs <- 100

# a vector of test results
type.one.error <- replicate(n.simulations, t.test(rnorm(n.obs), rnorm(n.obs),
  var.equal=TRUE)$p.value)

## SAS

Likewise, here is the equivalent code to do the same in SAS. Notice the simulation is not implemented as a slow SAS macro. Instead, it uses the BY statement in PROC TTEST.

/*
Create a data set with 1000 simulations. Each simulation
has 100 observations in each of two groups.
*/
data normal;
length simulation 4 i 3; /* save space and time */
do simulation = 1 to 1000;
  do i = 1 to 100;
    group = 'A';
    /* The values are normally distributed */
    x = rand('normal');
    output;
    group = 'B';
    x = rand('normal');
    output;
  end;
end;
run;

/*
Run two-sample t-test once for each simulation, and output to
a data set called ttests.
*/
ods _all_ close;
ods output ttests=ttests;
proc ttest plots=none data=normal;
by simulation;
class group;
var x;
run;

data ttests;
set ttests;

/* Limit the rows */
if variances='Equal';

/* Define the error as a boolean */
type_one_error = (probt < 0.05);
run;

## Sawtooth

Did you notice the sawtooth pattern in the error rate? A false positive is relatively rare, and when one happens, there is a spike in the cumulative error rate. Then, for each subsequent simulation without a false positive, the rate declines steadily because the count of simulations (the denominator) grows by one each time.

## Conclusion

This article was developed on Ubuntu 16.04 with R 3.4 and Windows 7 with SAS 9.4. See also the article Type I error rates in test of normality by simulation. For more posts like this, see Heuristic Andrew.
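As a footnote, the sawtooth plot can be reproduced in R from the vector of p-values. This sketch (reusing the simulation settings from the R section above) computes the running error rate whose denominator grows by one per simulation:

```r
# Running (cumulative) type I error rate: a hypothetical reconstruction
alpha.p <- 0.05
n.simulations <- 1000
n.obs <- 100

type.one.error <- replicate(n.simulations,
  t.test(rnorm(n.obs), rnorm(n.obs), var.equal = TRUE)$p.value)

# TRUE wherever the test falsely rejected the true null
rejected <- type.one.error < alpha.p

# Spikes at each rejection, steady decay between them: the sawtooth
running.rate <- cumsum(rejected) / seq_along(rejected)

plot(running.rate, type = "l",
     xlab = "simulation", ylab = "cumulative type I error rate")
abline(h = alpha.p, lty = 2)
```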

