[This article was first published on mind of a Markov chain » R, and kindly contributed to R-bloggers.]

Previously, I calculated a set of ad hoc power curves from GISTEMP data. Power is essentially a reframing of the p-value, used here to assess the significance of the trend lines in the global temperatures. However, power calculations are inherently very noisy, hence my ad hoc way of aggregating the data.

Another method is to bootstrap the responses from the linear model to arrive at a distribution of power at each start year (the data run from the start year to the end year, which is 2009). The `simulate` function in R generates new response data randomized according to the linear model initially fit to the data. I fit each simulated data set with another simple linear model, and the results are plotted below. The yellow dots are the original power calculations (with the same assumptions as the previous article), and the red box plots come from the simulations. As we can see, there is a lot of uncertainty in the power calculations, unsurprisingly so for low power values. Power is bounded below by the Type I error rate (0.05), and even where the data show this floor-level power, the simulations occasionally leap over the satisfactory mark (power of 0.8). Adding just one data point can change the conclusions substantially (say, moving the start year from 1997 to 1996).
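To make the mechanics concrete, here is a minimal, self-contained sketch of one parametric-bootstrap replicate on made-up toy data (not the GISTEMP series): `simulate` draws new responses from the fitted model (same design, coefficients, and residual variance), and refitting gives a new slope estimate, from which a new power value could be computed.

```
## toy illustration of simulate() on made-up data (not the GISS series)
set.seed(1)
x <- 1:20
y <- 0.02 * x + rnorm(20, sd = 0.1)   # weak trend plus noise
fit <- lm(y ~ x)
y.sim <- simulate(fit)[, 1]           # one set of simulated responses
fit.sim <- lm(y.sim ~ x)              # refit to the simulated data
coef(fit.sim)                         # slope varies around the original fit
```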

We can sort of understand why power calculations are uncertain. Given that the alternative hypothesis is true, power is the probability that the data show an extreme result for the parameter. Especially at moderate power, small changes in the data (and hence in the estimated parameter) shift the rejection boundary across the “heaviest” part of the alternative distribution (see this picture). This means that the area under the curve is most sensitive when power is moderate. Arguably, the sample sizes that yield moderate power are exactly the ones that matter most in science.
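As a rough check on this intuition, we can ask `pwr.f2.test` how much the power moves when R² shifts by a fixed 0.05 at low, moderate, and high effect sizes. The degrees of freedom u = 1, v = 8 here are just an illustrative choice (a simple regression on ten points), not taken from the GISS fits:

```
library(pwr)

## power of the slope F-test at a given R^2, using Cohen's f2 = R^2 / (1 - R^2)
f2.power <- function(r2, u = 1, v = 8, sig = 0.05) {
  pwr.f2.test(u, v, r2 / (1 - r2), sig.level = sig)$power
}

## the same +0.05 shift in R^2 moves power most in the moderate range
d <- sapply(c(0.05, 0.30, 0.70),
            function(r2) f2.power(r2 + 0.05) - f2.power(r2))
d
```

The middle difference (moderate power) dominates: at low R² the power curve is still flat near its 0.05 floor, and at high R² it has already saturated near 1.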

Alternatively, we can simply look at the confidence intervals of the parameter of interest. For the question, “is global warming happening now? (2009),” the latter graph gives a more robust result. One might answer “yes, since 1999” when looking at confidence intervals, but “yes, since 1995” when looking at power, a significant difference.

Gerard et al. (1998) argue that retrospective power calculations (like the ones I did here) are not useful because of their inherent instability. On the other hand, prospective power calculations are useful for determining a proper sample size for a study.

```
library(pwr)

## `giss` is a data frame of GISTEMP annual means, read in beforehand:
##   Year Annual_Mean X5_Year_Mean
## 1 1880       -0.28           NA
## 2 1881       -0.21           NA

giss.conf.boot <- function(from, to, reps) {
  ind0 <- which(giss[, 1] >= from & giss[, 1] <= to)
  giss.lm <- lm(Annual_Mean ~ Year, data = giss[ind0, ])
  out <- rep(0, reps)
  for (i in 1:reps) {
    ## the first replicate keeps the original fit, so out[1] is the
    ## power computed from the data itself
    if (i == 1) {
      giss.sim.lm <- giss.lm
    } else {
      giss.sim <- simulate(giss.lm)[, 1]
      giss.sim.lm <- lm(giss.sim ~ Year, data = giss[ind0, ])
    }
    s <- summary(giss.sim.lm)
    ## pwr.f2.test(u, v, f2, ...): u and v are the numerator and
    ## denominator df of the F-test; f2 = R^2 / (1 - R^2) is the effect size
    out[i] <- pwr.f2.test(s$fstatistic[2], s$fstatistic[3],
                          s$r.squared / (1 - s$r.squared),
                          sig.level = 0.05)$power
  }
  out
}

## -- parametric bootstrap
from <- 2004
to <- 2009
m <- 15
reps <- 100
out <- vector("list", m)
for (i in m:1) {
  out[[i]] <- giss.conf.boot(from - i, to, reps)
}
names(out) <- from - m:1

boxplot(values ~ as.numeric(ind), data = stack(out), col = "red",
        ylim = c(0, 1), axes = FALSE,
        main = "GISS Temp Data, End Year = 2009",
        ylab = "Power", xlab = "Start Year")
## the first replicate in each cell is the original (non-simulated) fit
points(1:m, sapply(out, function(x) x[1]), pch = 16, type = "b",
       col = "yellow", cex = 2)
points(1:m, sapply(out, function(x) x[1]), pch = 1, type = "p",
       col = "black", cex = 2)
abline(h = 0.8, lty = 1)
axis(1, 1:m, from - 1:m)
axis(2, seq(0, 1, by = 0.2))
legend("bottomright", c("from data", "bootstrap"),
       col = c("yellow", "red"), pch = c(19, 15), box.lwd = 0)

## -- now check for the confidence interval
from <- 2004
to <- 2009
m <- 15
out <- matrix(0, nrow = m, ncol = 3)
for (i in 1:m) {
  ind0 <- which(giss[, 1] >= from - i & giss[, 1] <= to)
  giss.lm <- lm(Annual_Mean ~ Year, data = giss[ind0, ])
  ## slope estimate plus its 95% confidence interval
  out[i, ] <- c(coef(giss.lm)[2], confint(giss.lm)[2, ])
}
colnames(out) <- c("slope", "lower", "upper")

matplot(1:m, out, type = "l", col = c(1, 2, 2), lty = c(1, 2, 2),
        axes = FALSE, main = "GISS Temp Data, End Year = 2009",
        ylab = "Slope", xlab = "Start Year")
axis(1, 1:m, from - 1:m)
axis(2, seq(-0.04, 0.04, len = 5))
abline(h = 0, lty = 2)
```

Refs:

1. Gerard, P., Smith, D., & Weerakkody, G. (1998). Limits of retrospective power analysis. The Journal of Wildlife Management, 62(2). DOI: 10.2307/3802357