Bayesian Experimental Design Through a Drug Study Example

Before I re-branded myself as a machine learning person, I briefly did research on some “hardcore” statistics related to in vitro drug studies. A common type of drug study focuses on the dose-response relationship. Basically, in such a study, we are interested in the effective dosage of a drug. If the dosage is too small, the drug will not achieve its desired effect; if the dosage is too large, there will be side effects. The usual target is the half maximal inhibitory concentration, or IC50: the dosage at which 50% of the maximal effect is reached. IC50 is one of the most important characteristics of any drug.

Imagine a new drug study on human cancer cells harvested from patients. The actual lab experiment involves applying the drug at different dosages and measuring the outcome, e.g. the number of cells alive. The data may look like the following, where the y-axis is the number of cells alive and the x-axis is the drug dosage in some unit. The dosages are multiples of each other because the drug solution is serially diluted. At each dosage, there are several replicates to reduce error. Given the data, a “sigmoidal” curve is usually fitted to estimate IC50.
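As an illustration (not the original study's data), here is a minimal R sketch that simulates data of this kind. The four-parameter log-logistic curve and all parameter values are assumptions chosen to match the description:

```r
# A common "sigmoidal" dose-response model: four-parameter log-logistic.
# E = response at zero dose, C = response at infinite dose,
# M = slope, ic50 = dosage giving 50% of the maximal effect.
fourPL <- function(dose, E, C, ic50, M) {
  C + (E - C) / (1 + (dose / ic50)^M)
}

set.seed(1)
doses <- rep(c(0, 0.25, 0.5, 1, 2, 4) * 30, each = 4)  # 6 dosages x 4 replicates
cells <- fourPL(doses, E = 1000, C = 100, ic50 = 30, M = 1.5) +
  rnorm(length(doses), sd = 50)                         # measurement noise

plot(doses, cells, xlab = "drug dosage", ylab = "cells alive")
```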

In this scenario, the experimental design is the choice of drug dosages. To simplify things, let’s say we are doing this on one plate with 24 wells, so we use 6 different dosages with 4 replicates each. Of course, one of the dosages should be 0 as the control. The remaining 5 dosages conveniently form a doubling series: 0.25X, 0.5X, X, 2X, and 4X. The design then boils down to choosing a single value X, the median dosage.
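In code, the whole design collapses to the single value X; a tiny hypothetical helper:

```r
# All six dosages implied by a chosen median dosage X (0 is the control)
design <- function(X) X * c(0, 0.25, 0.5, 1, 2, 4)
design(30)
#> [1]   0.0   7.5  15.0  30.0  60.0 120.0
```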

Using only these 24 data points, we must be very careful in order to figure out the effective dosage of this potential cancer drug.

To recap, we are interested in IC50, which tells us how much of the drug is needed to achieve 50% effect. IC50 lies roughly in the middle of the dose-response curve, but its estimate also depends on the shape of the curve. If we choose the dosages wisely, the data points will cover the curve well and we will estimate IC50 accurately. Hence, the goal of the experimental design is to minimize the estimation error of IC50, or equivalently the variance of the model parameter estimates. Mathematically, the variance-covariance matrix of the model parameters can be calculated.
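The equations in the original post are embedded as images; as a sketch of the standard formulation, assuming the four-parameter log-logistic model above and a least-squares fit (the exact parameterization in the post may differ):

```latex
% Mean response at dosage x; \theta denotes IC50
f(x) = C + \frac{E - C}{1 + (x/\theta)^{M}}
% Asymptotic variance-covariance of the estimates \hat{\beta}, \beta = (E, C, \theta, M),
% with Jacobian F_{ij} = \partial f(x_i) / \partial \beta_j evaluated at the design points
\widehat{\mathrm{Var}}(\hat{\beta}) \approx \sigma^{2} \left( F^{\top} F \right)^{-1}
```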

As we can see right away, the variance is a function not only of X, the dosages we choose, but also of the unknown model parameters E, C, and M. Perhaps this is not at all surprising: if we knew what the curve looked like, we would be able to choose the dosages carefully to cover it. However, before doing the experiment, we have no idea what the curve looks like. How are we going to design the optimal experiment?

This is more or less where traditional/frequentist experimental design starts to fail. Often, people make assumptions about the model parameters that describe the curve and then optimize the equations above. In the literature, this approach is called locally optimal design. These assumptions, when incorrect, can easily decrease the accuracy/power of the experiment. The plot below shows the variance of the estimated IC50 as a function of the median dosage X used in the experiment. As the design changes slightly, the variance can fluctuate dramatically.

The Bayesian approach attempts to formalize these assumptions into prior distributions on the model parameters. These priors use probability to express our beliefs, before the experiment, about what the dose-response curve should look like. Based on the prior, we can hypothesize all kinds of data we could possibly get from the experiment. For each hypothetical data set, we would compute the posterior distribution that summarizes the experimental results and, more importantly, the posterior expected utility that tells us how good the results are.

Then, averaging over all possible results from all possible data, we can optimize the “overall” expected utility.
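The “double integral” referenced below is, in its standard form, an expectation over both the hypothetical data and the parameters; since the original equation appears as an image, this reconstruction is a sketch:

```latex
% Expected utility of design d: average the utility u over the posterior
% p(\theta | y, d) and over the data y the design could produce, p(y | d)
U(d) = \int_{Y} \int_{\Theta} u(d, y, \theta)\, p(\theta \mid y, d)\, p(y \mid d)\, d\theta\, dy
% The Bayesian optimal design maximizes this expected utility
d^{*} = \operatorname*{arg\,max}_{d} \; U(d)
```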

What I just described is both rigorous and elegant; however, the double integral above is impossible to compute directly in most non-trivial cases. This is the main reason Bayesian experimental design has not seen wider use. From a research perspective, the intractable double integral makes any theoretical analysis mathematically challenging. In practice, though, hard math does not prevent us from writing code to solve the problem.

For a particular experimental design, we can simulate hundreds of data sets and approximate the expected utility, then repeat this across a wide range of designs to find the best one. In the context of the drug study, suppose the true IC50 value is 30 and the curve shape is perfectly sigmoidal. Our prior belief could be that IC50 is between 10 and 60, which carries a fair amount of uncertainty; this can be expressed by a Lognormal(3.4, 0.3) distribution. After millions of iterations, the Bayesian approach tells us to use dosages with median X = 32. The Bayesian design is very close to the “true” optimal design we would pick if we knew what the curve looked like!
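As a quick sanity check (my own, not from the original post), the quantiles of that prior in R confirm it places most of its mass roughly between 10 and 60:

```r
# Median and central 95% interval of the Lognormal(3.4, 0.3) prior on IC50
qlnorm(c(0.025, 0.5, 0.975), meanlog = 3.4, sdlog = 0.3)
#> [1] 16.64 29.96 53.95
```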

Below is R code that uses Stan; the for loop can actually be parallelized. The Stan model is a bit complex, so it is not included here.
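The original code block did not survive into this copy of the post, so the following is only a sketch of what such a simulation could look like, not the author's original. The file name dose_response.stan, the priors on E, C, and M, the noise level, and the utility (negative posterior variance of log IC50) are all placeholder assumptions:

```r
library(rstan)

# Assumed mean model; "dose_response.stan" (hypothetical) encodes the same
# 4PL likelihood plus priors on E, C, ic50, and M, and samples their posterior.
fourPL <- function(dose, E, C, ic50, M) C + (E - C) / (1 + (dose / ic50)^M)
mod <- stan_model("dose_response.stan")

X_grid  <- seq(10, 60, by = 2)   # candidate median dosages to compare
n_sims  <- 200                   # hypothetical datasets per candidate design
utility <- numeric(length(X_grid))

for (i in seq_along(X_grid)) {
  doses <- rep(X_grid[i] * c(0, 0.25, 0.5, 1, 2, 4), each = 4)
  u <- numeric(n_sims)
  for (s in seq_len(n_sims)) {
    # Draw one plausible curve from the prior ...
    ic50 <- rlnorm(1, meanlog = 3.4, sdlog = 0.3)
    E    <- rnorm(1, 1000, 50)   # placeholder priors for the other parameters
    C    <- rnorm(1, 100, 25)
    M    <- rlnorm(1, 0.4, 0.2)
    # ... simulate a hypothetical dataset under that curve ...
    y <- fourPL(doses, E, C, ic50, M) + rnorm(length(doses), sd = 50)
    # ... then fit the posterior and score the result
    fit   <- sampling(mod, data = list(N = length(doses), dose = doses, y = y),
                      chains = 2, iter = 1000, refresh = 0)
    draws <- rstan::extract(fit, pars = "ic50")$ic50
    u[s]  <- -var(log(draws))    # utility: negative posterior variance of log IC50
  }
  utility[i] <- mean(u)          # Monte Carlo estimate of the expected utility
}

X_grid[which.max(utility)]       # approximately optimal median dosage X
```

The outer loop over X_grid is embarrassingly parallel, which is presumably what is meant by parallelizing the for loop; parallel::mclapply is one drop-in option.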

