What is nearly-isotonic regression?


Let’s say we have data $(x_1, y_1), \ldots, (x_n, y_n)$ such that $x_1 < x_2 < \ldots < x_n$. (We assume no ties among the $x_i$’s for simplicity.) Isotonic regression gives us a monotonic fit for the $y_i$’s by solving the problem

$$\underset{\beta}{\text{minimize}} \quad \frac{1}{2} \sum_{i=1}^n (y_i - \beta_i)^2 \quad \text{subject to} \quad \beta_1 \leq \beta_2 \leq \ldots \leq \beta_n.$$
(See this previous post for more details.) Nearly-isotonic regression, introduced by Tibshirani et al. (2011) (Reference 1), generalizes isotonic regression by solving the problem

$$\underset{\beta}{\text{minimize}} \quad \frac{1}{2} \sum_{i=1}^n (y_i - \beta_i)^2 + \lambda \sum_{i=1}^{n-1} (\beta_i - \beta_{i+1})_+,$$

where $x_+ = \max(x, 0)$ and $\lambda \geq 0$ is a user-specified hyperparameter.
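To make the penalty concrete, here is a small base R sketch of the objective (illustrative only; this is not how the paper’s path algorithm actually computes the fit):

# Nearly-isotonic objective: squared-error loss plus a penalty on downward steps.
neariso_objective <- function(beta, y, lambda) {
  loss <- 0.5 * sum((y - beta)^2)
  penalty <- sum(pmax(-diff(beta), 0))  # sum of (beta_i - beta_{i+1})_+
  loss + lambda * penalty
}

y <- c(1, 3, 2, 5)
neariso_objective(beta = y, y = y, lambda = 1)        # interpolating fit: loss 0, penalty 1
neariso_objective(beta = sort(y), y = y, lambda = 1)  # monotone fit: penalty 0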

It turns out that, due to properties of the optimization problem, the nearly-isotonic regression fit can be computed for all $\lambda$ values in $O(n \log n)$ time, making the method practical to use. See Section 3 and Algorithm 1 of Reference 1 for details. (More accurately, we can determine the nearly-isotonic regression fit for a critical set of $\lambda$ values: the fit for any other $\lambda$ value is a linear interpolation of fits from this critical set.)
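For instance, given the fits at the critical values, the fit at any intermediate $\lambda$ can be recovered coordinate by coordinate with base R’s approx(). The matrix layout below (rows = observations, columns = critical $\lambda$ values) is an assumption made for illustration:

# Linearly interpolate a path of fits to an intermediate lambda value.
# `beta` is assumed to be an n x k matrix whose j-th column is the fit
# at critical value lambdas[j].
interpolate_fit <- function(beta, lambdas, lambda_new) {
  apply(beta, 1, function(path) approx(lambdas, path, xout = lambda_new)$y)
}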

How is nearly-isotonic regression a generalization of isotonic regression? The term $(\beta_i - \beta_{i+1})_+$ is positive if and only if $\beta_i > \beta_{i+1}$, that is, if there is a monotonicity violation. The larger the violation, the larger the penalty. Instead of insisting on no violations at all, nearly-isotonic regression trades off the size of the violations against the improvement in goodness of fit to the data. Nearly-isotonic regression gives us a series of fits that range from an interpolation of the data (when $\lambda = 0$) to the isotonic regression fit (as $\lambda \to \infty$). (Actually, you get the isotonic regression fit once $\lambda$ is large enough that any reduction in the penalty can no longer be offset by the loss in goodness of fit.)
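The large-$\lambda$ endpoint can be sanity-checked against base R’s isoreg(), which computes the plain isotonic fit directly:

# At lambda = 0 the objective is minimized by beta = y (interpolation).
# For large lambda the solution coincides with the isotonic fit:
y <- c(1, 3, 2, 5, 4)
isoreg(y)$yf  # 1.0 2.5 2.5 4.5 4.5 -- monotone, pooling the two violations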

Why might you want to use nearly-isotonic regression? One possible reason is to check whether the assumption of monotonicity is reasonable for your data. To do so, run nearly-isotonic regression with cross-validation over $\lambda$ and compute the CV error for each $\lambda$ value. If the CV error achieved by the isotonic regression fit (i.e. the largest $\lambda$ value) is close to or statistically indistinguishable from the minimum, that gives some assurance that monotonicity is a reasonable assumption for your data.
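A sketch of what such a cross-validation loop might look like is below. Two caveats: it assumes neariso(y) returns a list with a vector lambda and an n x k matrix beta (one column of fitted values per critical $\lambda$), and it predicts held-out points by linear interpolation of the fitted values at neighbouring positions, which is a simplification chosen for illustration rather than a prescribed procedure.

library(neariso)

cv_neariso <- function(y, lambda_grid, n_folds = 5) {
  n <- length(y)
  folds <- sample(rep_len(seq_len(n_folds), n))  # random fold assignment
  errs <- matrix(NA, n_folds, length(lambda_grid))
  for (k in seq_len(n_folds)) {
    train <- which(folds != k)
    test  <- which(folds == k)
    fit <- neariso(y[train])  # path of fits on the training fold
    for (j in seq_along(lambda_grid)) {
      # fit at lambda_grid[j]: interpolate each coordinate's path over lambda
      beta_j <- apply(fit$beta, 1, function(path)
        approx(fit$lambda, path, xout = lambda_grid[j], rule = 2)$y)
      # predict held-out points from the fit at the training positions
      pred <- approx(train, beta_j, xout = test, rule = 2)$y
      errs[k, j] <- mean((y[test] - pred)^2)
    }
  }
  colMeans(errs)  # CV error for each value in lambda_grid
}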

You can perform nearly-isotonic regression in R with the neariso package. The neariso() function returns fits for an entire path of $\lambda$ values. The animation below shows how the fit changes as $\lambda$ gets larger and larger (code available here).
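Here is a minimal usage sketch. The return component names lambda and beta, and the claim that the last fit on the path equals the isotonic fit, are assumptions to verify against the package documentation:

# install.packages("neariso")
library(neariso)

set.seed(1)
y <- rnorm(30) + seq(0, 2, length.out = 30)  # noisy, roughly increasing data
fit <- neariso(y)  # full path of nearly-isotonic fits

fit$lambda         # critical lambda values (assumed component name)
dim(fit$beta)      # n rows, one column per critical lambda (assumed layout)

# If the path terminates at the isotonic fit, this difference should be ~0:
max(abs(fit$beta[, ncol(fit$beta)] - isoreg(y)$yf))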

Note 1: The formulation for nearly-isotonic regression above assumes that the points $x_1, \ldots, x_n$ are equally spaced. If they are not, one should replace the penalty $\lambda \sum_{i=1}^{n-1} (\beta_i - \beta_{i+1})_+$ with

$$\lambda \sum_{i=1}^{n-1} \frac{(\beta_i - \beta_{i+1})_+}{x_{i+1} - x_i}$$

to account for the different-sized gaps. The neariso package only seems to handle the case where the $x_i$’s are equally spaced.
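In base R the gap-adjusted penalty is a one-liner (beta and x are hypothetical inputs):

# Gap-adjusted violation penalty: positive parts of the negative slopes.
weighted_penalty <- function(beta, x) {
  sum(pmax(-diff(beta), 0) / diff(x))
}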

Note 2: The animation above was created by generating a separate .png file for each value of $\lambda$, then stitching them together using the magick package. My initial hope was to create an animation that would transition smoothly between the different fits using the gganimate package, but the transitions weren’t as smooth as I would have imagined them to be:

Does anyone know how this issue could be fixed? Code for the animation is below; the full code is available here.

library(ggplot2)
library(gganimate)

# Animate the path of fits: one frame per lambda value, with the fitted
# values in `df` (columns x, beta, iter) and the data points in `truth_df`.
p <- ggplot(df, aes(x = x, y = beta)) +
    geom_path(col = "blue") +                                    # current fit
    geom_point(data = truth_df, aes(x = x, y = y), shape = 4) +  # data points
    labs(title = "Nearly isotonic regression fits",
         # gganimate evaluates the glue expression once per frame
         subtitle = paste("Lambda = ", "{lambda[as.integer(closest_state)]}")) +
    transition_states(iter, transition_length = 1, state_length = 2) +
    theme_bw() +
    theme(plot.title = element_text(size = rel(1.5), face = "bold"))
animate(p, fps = 5)

References:

  1. Tibshirani, R. J., Hoefling, H., and Tibshirani, R. (2011). Nearly-isotonic regression. Technometrics, 53(1), 54-61.

