Effect estimation is an important task in modern research; examples include identifying risk factors for disease and quantifying the benefit of medical treatments. Usually, researchers are interested in estimating a global, common effect. Since actual effects tend to differ across populations, an estimate based on a sample from a single population seldom generalizes well. When several estimates of an effect are available, combining them into a summary can improve both objectivity and precision. Meta-analysis has become a popular approach for combining such estimates into a summary.
If all studies in the analysis were equally precise, a possible approach would be to compute the mean of the effect sizes. However, usually some studies are more precise than others. For this reason, it is preferable to assign more weight to the studies that carry more information. This is what happens in a meta-analysis: rather than computing a simple mean of the effect sizes, a weighted mean is calculated, with more weight given to some studies and less weight given to others.
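As a toy numerical sketch (the two estimates and variances here are hypothetical, not part of the example below), consider one precise and one imprecise study. Weighting by inverse variance, the scheme used later in this post, pulls the summary toward the precise study:

```r
# Two hypothetical effect estimates: one precise, one imprecise
ests = c(0.2, 0.8)
vars = c(0.01, 0.25)

mean(ests)          # simple mean: 0.5
w = 1/vars          # more weight to the precise (low-variance) study
sum(ests*w)/sum(w)  # weighted mean: ~0.22, close to the precise study
```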
The question that we need to address, then, is how the weights are assigned. It turns out that this depends on what we mean by a “combined effect”. There are two models used in meta-analysis, the fixed effect model and the random effects model. The two make different assumptions about the nature of the studies, and these assumptions lead to different definitions for the combined effect, and different mechanisms for assigning weights.
Under the fixed effect model we assume that there is one true effect size which is shared by all the included studies. It follows that the combined effect is our estimate of this common effect size.
By contrast, under the random effects model we allow that the true effect could vary from study to study. For example, the effect size might be a little higher if the subjects are older, or more educated, or healthier; or if the study used a slightly more intensive or longer variant of the intervention; or if the effect was measured more reliably; and so on. The studies included in the meta-analysis are assumed to be a random sample of the relevant distribution of effects, and the combined effect estimates the mean effect in this distribution.
We illustrate both approaches using a fictional example from Borenstein where the impact of an intervention on reading scores in children is investigated.
| Study name | Effect estimate | Variance |
|------------|-----------------|----------|
| Study 1    | 0.10            | 0.03     |
| Study 2    | 0.30            | 0.03     |
| Study 3    | 0.35            | 0.05     |
| Study 4    | 0.65            | 0.01     |
| Study 5    | 0.45            | 0.05     |
| Study 6    | 0.15            | 0.02     |
We can enter these data into R as follows:
ests = c(0.10, 0.30, 0.35, 0.65, 0.45, 0.15)
vars = c(0.03, 0.03, 0.05, 0.01, 0.05, 0.02)
Fixed effect model
The fixed effect model assumes there is no between-study heterogeneity: all studies are measuring the same true effect. Each study estimate is weighted by the inverse of its reported variance.
# Weights
w = 1/vars
# Combined effect
est.fixef = sum(ests*w)/sum(w)
# Standard error of the combined effect
se.fixef = sqrt(1/sum(w))
# The Z-value
z.fixef = est.fixef/se.fixef
The standardized combined effect (z-value) allows us to calculate a confidence interval and p-values for the combined effect.
# p-values
fe_p1t = pnorm(z.fixef, lower = FALSE)   # 1-tailed p-value
fe_p2t = 2*pnorm(z.fixef, lower = FALSE) # 2-tailed p-value
# 95% confidence interval
fe.lowerconf = est.fixef + qnorm(0.025)*se.fixef
fe.upperconf = est.fixef + qnorm(0.975)*se.fixef
Applying this methodology to the example above yields a combined effect of 0.40 (SE: 0.06), which is significant (2-tailed p-value: 2e-10).
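As a self-contained check, the fixed effect calculation can be run end-to-end; this simply repeats the data and steps above in one script (the rounding to two decimals is ours):

```r
# Data from the example (effect estimates and their variances)
ests = c(0.10, 0.30, 0.35, 0.65, 0.45, 0.15)
vars = c(0.03, 0.03, 0.05, 0.01, 0.05, 0.02)

# Inverse-variance weights and the combined effect
w = 1/vars
est.fixef = sum(ests*w)/sum(w)
se.fixef = sqrt(1/sum(w))
z.fixef = est.fixef/se.fixef

# Two-tailed p-value and 95% confidence interval
fe_p2t = 2*pnorm(z.fixef, lower = FALSE)
fe.lowerconf = est.fixef + qnorm(0.025)*se.fixef
fe.upperconf = est.fixef + qnorm(0.975)*se.fixef

round(c(estimate = est.fixef, se = se.fixef,
        lower = fe.lowerconf, upper = fe.upperconf), 2)
# estimate 0.40, se 0.06, 95% CI roughly 0.27 to 0.52
```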
Random effects model