How to convert odds ratios to relative risks

January 27, 2014
(This article was first published on Robert Grant's stats blog » R, and kindly contributed to R-bloggers)

My short paper on this came out on Friday in the British Medical Journal. The aim is to help both authors and readers of research make sense of this rather confusing but unavoidable statistic, the odds ratio (OR). The fundamental problem is that quoting the odds in group A, divided by the odds in group B, confuses most people because we just don’t think in terms of odds.

The home-made video abstract on the BMJ website shows you the difference between odds and risk, and how one odds ratio can mean several different relative risks (RRs), depending on the risk in one of the groups. Unfortunately, in some situations you are stuck with an OR, notably logistic regression and retrospective case-control studies.

The bottom line is that authors should present RRs if they can, and with excellent software like margins and marginsplot in Stata, and effects in R, there’s really no excuse not to do this, even for complex models. In particular, I’m a huge fan of the plots of marginal probabilities from these packages, which help you to show the complex patterns in your data to an audience that will run scared from tables of ORs, interaction terms and confidence intervals. John Fox’s 2003 paper is still worth reading.

For readers, it can be harder, because you only have the information in the paper. Anyone who has done a systematic review will know what I mean – the baseline stats are given in Table 1 and the ORs in Table 2 or 3, and without any idea of the risk in one of the groups, you can proceed no further. However, I would suggest there is still hope. If you can get a range of plausible risks for the control group, you can work out a range of plausible relative risks. The formula is:

RR = OR / (1 – p + (p × OR))

where p is the risk in the control group. I’ve given a ready-reckoner table in the BMJ paper.
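The formula is easy to turn into code. Here is a minimal sketch in Python (the function name `or_to_rr` is my own label, not from the paper), applying the conversion above directly:

```python
def or_to_rr(odds_ratio, p):
    """Convert an odds ratio to a relative risk,
    given the risk p in the control (reference) group:
    RR = OR / (1 - p + p * OR)."""
    return odds_ratio / (1 - p + p * odds_ratio)

# Example: an OR of 2.0 with a control-group risk of 20%
# corresponds to an RR of about 1.667, not 2.
print(round(or_to_rr(2.0, 0.20), 3))
```

Note that when the outcome is rare (small p), the denominator is close to 1 and the RR is close to the OR, which is why the two are often conflated.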

[Figure: OR–RR conversion chart]

And one more subtlety, if I may. As we’ve seen, a statistical model with a single shared OR for everyone (take this pill and your odds of a heart attack go down by 10%) does not imply a shared RR for everyone. If the logistic regression included adjustment for confounders, or if the case-control study was matched, then those other factors will warp the RR for different subgroups of people. This is where the plausible range comes in handy as well.
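To see how one shared OR fans out into different RRs, you can sweep the conversion over a plausible range of control-group risks. The sketch below (the risk values are illustrative, not from any real trial) shows a fixed OR of 0.90 giving a different RR at each baseline risk:

```python
def or_to_rr(odds_ratio, p):
    """RR = OR / (1 - p + p * OR), with p the control-group risk."""
    return odds_ratio / (1 - p + p * odds_ratio)

# One shared OR ("odds go down by 10%"), several plausible baseline risks
odds_ratio = 0.90
for p in (0.05, 0.10, 0.20, 0.40):
    print(f"control risk {p:.2f} -> RR {or_to_rr(odds_ratio, p):.3f}")
```

The RR drifts toward 1 as the baseline risk rises, so subgroups with different baseline risks genuinely experience different relative risks even under one common OR.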

It sounds a bit Bayesian, but is really just a sensitivity analysis. However, a Bayesian meta-analysis could take trials reporting ORs with little or no other supporting information, alongside trials reporting RRs, and combine them to get a pooled RR, provided it included a shared prior for the control-group risks. This is one of my next projects…

