What do you do when analyzing data is fun, but you don’t have any new data? You make it up.
This simulation tests the type I error rate of the two-sample t-test in R and SAS. It demonstrates efficient methods for simulation, and it reminds the reader not to take the result of any single hypothesis test as gospel truth: there is always a risk of a false positive (or a false negative), so establishing truth requires more than one research study.
A type I error is a false positive: it happens when a hypothesis test rejects the null hypothesis when the null hypothesis is in fact true. In this simulation the null hypothesis is true by design, though in the real world we can never be certain the null hypothesis is true. This is why we write that we “fail to reject the null hypothesis” rather than that we “accept” it. If the hypothesis tests in this simulation made no errors, we would never reject the null hypothesis, but by design we expect to reject it in a proportion of tests roughly equal to alpha, the significance level. The de facto standard for alpha is 0.05.
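Before running anything, it helps to know what to expect. With the null hypothesis true in every run, the count of false positives among n independent tests follows a Binomial(n, alpha) distribution. A back-of-the-envelope check in R (a sketch for intuition, separate from the simulation below):

```r
# expected false positives among 1000 independent tests at alpha = 0.05
n <- 1000
alpha <- 0.05
n * alpha                      # expected count: 50
sqrt(n * alpha * (1 - alpha))  # standard deviation: about 6.9
# so an observed overall error rate within roughly +/- 1.4 percentage
# points of 5% (about 3.6% to 6.4%, two standard deviations) is unsurprising
```

This is why the simulated error rate below hovers near, but rarely lands exactly on, 0.05.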
First, we run a simulation in R by repeatedly comparing randomly generated sets of normally distributed values using the two-sample t-test. Notice the simulation contains no explicit “for” loops, which would clutter the code and slow the simulation.
# type I error
alpha.p <- 0.05
# number of simulations
n.simulations <- 1000
# number of observations in each simulation
n.obs <- 100
# a vector of test results: TRUE when the test falsely rejects the null
type.one.error <- replicate(n.simulations,
	t.test(rnorm(n.obs), rnorm(n.obs), var.equal = TRUE)$p.value) < alpha.p
# type I error for the whole simulation
mean(type.one.error)
# Store cumulative results in data frame for plotting
sim <- data.frame(
	n.simulations = 1:n.simulations,
	type.one.error.rate = cumsum(type.one.error) / seq_along(type.one.error))
# plot the cumulative error rate using ggplot2
library(ggplot2)
ggplot(sim, aes(x = n.simulations, y = type.one.error.rate)) +
	geom_line() +
	xlab('Number of simulations') +
	ylab('Cumulative type I error rate') +
	ggtitle('Simulation of type I error in t-test') +
	geom_abline(intercept = alpha.p, slope = 0, col = 'red')
Likewise, here is the equivalent code in SAS. Notice the simulation is not implemented as a slow SAS macro loop; instead, it uses the BY statement in PROC TTEST to run all the t-tests in a single step.
/* Create a data set with 1000 simulations. Each simulation
   has 100 observations in each of two groups. */
data normal;
length simulation 4 i 3; /* save space and time */
do simulation = 1 to 1000;
	do i = 1 to 100;
		/* The values are normally distributed */
		group = 'a';
		x = rand('normal');
		output;
		group = 'b';
		x = rand('normal');
		output;
	end;
end;
run;
/* Run the two-sample t-test once for each simulation, and output
   the results to a data set called ttests. */
ods _all_ close;
ods output ttests=ttests;
proc ttest plots=none data=normal;
	by simulation;
	class group;
	var x;
run;
data ttests;
	set ttests;
	/* Limit the rows to the pooled (equal-variance) test */
	where method = 'Pooled';
	/* Define the error as a boolean */
	type_one_error = probt < 0.05;
	/* cumulative error */
	format cumulative_error_rate percent10.2;
	label cumulative_error_rate = 'Cumulative error rate';
	if simulation eq 1 then cumulative_error_count = 0;
	cumulative_error_count + type_one_error;
	cumulative_error_rate = cumulative_error_count / simulation;
run;
/* Summarize the type I error rates for this simulation */
proc freq data=ttests;
	tables type_one_error;
run;
/* Draw a line plot of the cumulative error rate */
proc sgplot data=ttests;
	series x=simulation y=cumulative_error_rate;
	refline 0.05 / axis=y lineattrs=(color=red);
run;
Did you notice the sawtooth pattern in the error rate? False positives are relatively rare, and each one produces a spike in the cumulative error rate. Then, for each simulation without a false positive, the rate decays gradually, because the numerator (the count of false positives) stays fixed while the denominator (the count of simulations) keeps growing.
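The decay between spikes is hyperbolic rather than linear, which a little arithmetic makes concrete. A tiny R sketch (the counts here are illustrative, not taken from the simulation output):

```r
# suppose 5 false positives have accumulated by simulation 100
false.positives <- 5
simulations <- 100:110  # eleven simulations with no new false positive
round(false.positives / simulations, 4)
# the rate slides from 0.0500 down to 0.0455, each step smaller than the last
```

Each new spike then resets the pattern, producing the sawtooth shape in the plots above.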
This article was developed on Ubuntu 16.04 with R 3.4 and Windows 7 with SAS 9.4.
See also the article: Type I error rates in test of normality by simulation.