# Example 2014.1: "Power" for a binomial probability, plus: News!

January 14, 2014

Hello, folks! I’m pleased to report that Nick and I have turned in the manuscript for the second edition of SAS and R: Data Management, Statistical Analysis, and Graphics. It should be available this summer. New material includes some of our more popular blog posts, plus reproducible analysis, RStudio, and more.

To celebrate, here’s a new example. Parenthetically, I was fortunate to be able to present my course, R Boot Camp for SAS Users, at Boston University last week. One attendee cornered me after the course. She said: “Ken, R looks great, but you use SAS for all your real work, don’t you?” Today’s example might help a SAS diehard see why it might be helpful to know R.

OK, the example: a colleague contacted me with a typical “5-minute” question. She needed to write a convincing power calculation for the sensitivity (the probability that a test returns a positive result when the disease is present) for a fixed number of cases with the disease. I don’t know how well this has been explored in the peer-reviewed literature, but I suggested the following process:
1. Guess at the true underlying sensitivity
2. Name a lower bound (less than the truth) which we would like the observed CI to exclude
3. Use basic probability results to report the probability of exclusion, marginally across the unknown number of observed positive tests.

This is not actually a power calculation, of course, but it provides some information about the kinds of statements that it’s likely to be possible to make.

## R

In R, this is almost trivial. We can get the probability of observing each possible number of positive tests simply, using the dbinom() function applied to a vector of numerators and the fixed denominator. Finding the confidence limits is a little trickier. Well, finding them is easy, using lapply() on binom.test(), but extracting them requires using sapply() on the results from lapply(). Then it’s trivial to generate a logical vector indicating whether the value we want to exclude falls inside each CI, and the sum of the probabilities of the outcomes whose CI includes that value is the desired result.

```r
> truesense = .9
> exclude = .6
> npos = 20
> probobs = dbinom(0:npos, npos, truesense)
> cis = t(sapply(lapply(0:npos, binom.test, n=npos),
                 function(bt) return(bt$conf.int)))
> included = cis[,1] < exclude & cis[,2] > exclude
> myprob = sum(probobs*included)
> myprob
[1] 0.1329533
```

(Note that I calculated the inclusion probability, not the exclusion probability.)
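Since a power-style statement is usually about exclusion, the quantity of interest is just the complement. A quick sketch (my addition, repeating the settings above so it stands alone):

```r
# Same settings as the example above: true sensitivity .9, bound .6, 20 cases
truesense <- .9
exclude <- .6
npos <- 20
# Probability of each possible count of positive tests
probobs <- dbinom(0:npos, npos, truesense)
# Exact CI for each possible count, one row per count
cis <- t(sapply(lapply(0:npos, binom.test, n = npos),
                function(bt) bt$conf.int))
included <- cis[, 1] < exclude & cis[, 2] > exclude
# Exclusion probability = 1 - inclusion probability
1 - sum(probobs * included)
# [1] 0.8670467
```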

Of course, the real beauty and power of R is how simple it is to turn this into a function:

```r
> probinc = function(truesense, exclude, npos) {
    probobs = dbinom(0:npos, npos, truesense)
    cis = t(sapply(lapply(0:npos, binom.test, n=npos),
                   function(bt) return(bt$conf.int)))
    included = cis[,1] < exclude & cis[,2] > exclude
    return(sum(probobs*included))
  }
> probinc(.9, .6, 20)
[1] 0.1329533
```

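And once it’s a function, mapping it over candidate sample sizes is a one-liner. A sketch (my addition, not part of the original post; the function is repeated so the snippet is self-contained):

```r
# Inclusion probability for a fixed true sensitivity and exclusion bound
probinc <- function(truesense, exclude, npos) {
  probobs <- dbinom(0:npos, npos, truesense)
  cis <- t(sapply(lapply(0:npos, binom.test, n = npos),
                  function(bt) bt$conf.int))
  included <- cis[, 1] < exclude & cis[, 2] > exclude
  sum(probobs * included)
}

# How does the inclusion probability change as the number of cases grows?
# (It should shrink: narrower CIs are less likely to reach down to .6
# when the truth is .9.)
sapply(c(10, 20, 40, 80), function(n) probinc(.9, .6, n))
```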
## SAS

My SAS process took about 4 times as long to write.
I begin by making a data set recording, for each possible outcome, the number of events (positive tests) and non-events (false negatives). These serve as weights in the proc freq I use to generate the confidence limits.

```sas
%let truesense = .9;
%let exclude = .6;
%let npos = 20;

data rej;
do i = 1 to &npos;
  w = i; event = 1; output;
  w = &npos - i; event = 0; output;
  end;
run;

ods output binomialprop = rej2;
proc freq data = rej;
by i;
tables event / binomial(level='1');
weight w;
run;
```

Note that I repeat the proc freq for each number of events using the by statement. After saving the results with the ODS system, I have to use proc transpose to make a table with one row for each number of positive tests; before this, every statistic in the output has its own row.

```sas
proc transpose data = rej2 out = rej3;
where name1 eq "XL_BIN" or name1 eq "XU_BIN";
by i;
id name1;
var nvalue1;
run;
```

In my fourth data set, I can find the probability of observing each number of events and multiply it by my logical test of whether the CI includes my target value. But here there is another twist. The proc freq approach won’t generate a CI for both the situation where there are 0 positive tests and the setting where all are positive in the same run. My solution was to omit the case with 0 positives from the do loop above, which means I now need to account for that possibility separately. I use the end= option on the set statement to detect when I’ve reached the case with all tests positive (sensitivity = 1). Because the exact interval is symmetric, reflecting those limits gives the confidence limits for the case with 0 events. Then I’m finally ready to sum up the probabilities associated with the numbers of positive tests where the CI includes the target value.

```sas
data rej4;
set rej3 end = eof;
prob = pdf('BINOMIAL', i, &truesense, &npos);
prob_include = prob * ((xl_bin < &exclude) and (xu_bin > &exclude));
output;
if eof then do;
   prob = pdf('BINOMIAL', 0, &truesense, &npos);
   prob_include = prob * (((1 - xu_bin) < &exclude) and ((1 - xl_bin) > &exclude));
   output;
   end;
run;

proc means data = rej4 sum;
var prob_include;
run;
```
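The reflection step deserves a sanity check, which (fittingly) is quickest in R. The exact (Clopper-Pearson) interval is symmetric under swapping successes and failures, so the CI for 0 of 20 positives is the reflected CI for 20 of 20 (a sketch, my addition):

```r
n <- 20
# Exact CI when every test is positive (20 of 20)
ci_all  <- binom.test(n, n)$conf.int
# Exact CI when no test is positive (0 of 20)
ci_none <- binom.test(0, n)$conf.int
# Reflecting the all-positive limits around 1 recovers the zero-positive CI
all.equal(as.numeric(ci_none), rev(1 - as.numeric(ci_all)))
# [1] TRUE
```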

Elegance is a subjective thing, I suppose, but to my eye, the R solution is simple and graceful, while the SAS solution is rather awkward. And I didn’t even make a macro out of it yet!
