This simulation tests the type I error rates of the Shapiro-Wilk test of normality in R and SAS.
First, we run a simulation in R. Notice the simulation is vectorized: there are no “for” loops that clutter the code and slow the simulation.
```r
# type I error
alpha <- 0.05

# number of simulations
n.simulations <- 10000

# number of observations in each simulation
n.obs <- 100

# a vector of p-values, one per simulated sample
type.one.error <- replicate(n.simulations, shapiro.test(rnorm(n.obs))$p.value)

# proportion of simulations that falsely reject normality
mean(type.one.error < alpha)
```
As the number of simulations increases, the type I error rate approaches alpha. Try it in R with any value of alpha and any number of observations per simulation.
It's elegant that the whole simulation can be condensed to 60 characters:

```r
mean(replicate(10000,shapiro.test(rnorm(100))$p.value)<0.05)
```
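The same loop-free check can be sketched outside R as well. Here is a minimal Python version, assuming NumPy and SciPy are installed; `scipy.stats.shapiro` plays the role of R's `shapiro.test`:

```python
# Sketch of the R simulation in Python, assuming scipy is available.
import numpy as np
from scipy.stats import shapiro

rng = np.random.default_rng(1)
alpha = 0.05
n_simulations = 1000   # fewer than the R run, just to keep it quick
n_obs = 100

rejections = 0
for _ in range(n_simulations):
    w, p = shapiro(rng.normal(size=n_obs))  # test a normal sample
    if p < alpha:
        rejections += 1  # a rejection here is a type I error

type_one_error = rejections / n_simulations
print(type_one_error)  # should land near alpha = 0.05
```

With only 1,000 simulations the estimate is noisier than the R run, but it should still land close to 0.05.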
Likewise, we now run a similar simulation of the Shapiro-Wilk test in SAS. The only issue is that, without capturing ODS output, I do not see how to get the p-value for the Shapiro-Wilk test, so if you want to change the number of observations in each simulation, you must adjust the critical value for the test statistic W. Also notice there are no macro loops: the simulation is simpler and faster using a BY statement.

```sas
data normal;
   length simulation 4 i 3; /* save space and time */
   do simulation = 1 to 10000;
      do i = 1 to 100;
         x = rand('normal');
         output;
      end;
   end;
run;

proc univariate data=normal noprint;
   by simulation;
   var x;
   output out=univariate n=n mean=mean std=std NormalTest=NormalTest;
run;

data univariate;
   set univariate;
   /* SAS outputs the W statistic, but it will not give the p-value. */
   /* For n=100 and alpha=0.05, the critical value of W is about 0.9746. */
   type_one_error = normaltest < 0.9746;
run;

/* Summarize the type I error rate for this simulation */
proc freq data=univariate;
   table type_one_error / nocum;
run;
```
In my SAS simulation the type I error rate was 5.21%.
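That 5.21% is well within Monte Carlo noise of the nominal 5%. Treating each simulation as a Bernoulli(alpha) trial, a quick sketch of the binomial standard error for 10,000 simulations:

```python
# Standard error of an estimated rejection rate over 10,000 simulations,
# modeling each simulation as an independent Bernoulli(alpha) trial.
import math

alpha = 0.05
n_simulations = 10000
se = math.sqrt(alpha * (1 - alpha) / n_simulations)
print(round(se, 5))                      # about 0.00218
print(round((0.0521 - alpha) / se, 2))   # about 0.96: under 1 SE above 0.05
```

So estimates between roughly 4.57% and 5.43% are expected about 95% of the time, and 5.21% raises no alarm.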
Tested with R 3.0.2 and SAS 9.3 on Windows 7.