
The Tukey loss function

The Tukey loss function, also known as Tukey’s biweight function, is a loss function used in robust statistics. Tukey’s loss is similar to the Huber loss in that it is approximately quadratic near the origin. However, it is even more insensitive to outliers: the loss incurred by large residuals is constant, rather than growing linearly as it does for the Huber loss.

The loss function is defined by the formula

\begin{aligned} \ell (r) = \begin{cases} \frac{c^2}{6} \left(1 - \left[ 1 - \left( \frac{r}{c}\right)^2 \right]^3 \right) &\text{if } |r| \leq c, \\ \frac{c^2}{6} &\text{otherwise}. \end{cases} \end{aligned}

In the above, I use $r$ as the argument to the function to represent the “residual”, while $c$ is a positive tuning parameter that the user has to choose. A common choice is $c = 4.685$: Reference 1 notes that this value gives approximately 95% asymptotic statistical efficiency relative to ordinary least squares when the true errors have the standard normal distribution.

You may be wondering why the loss function has a somewhat unusual constant of $c^2 / 6$ out front: it’s because this results in a nicer expression for the derivative of the loss function:

\begin{aligned} \ell' (r) = \begin{cases} r \left[ 1 - \left( \frac{r}{c}\right)^2 \right]^2 &\text{if } |r| \leq c, \\ 0 &\text{otherwise}. \end{cases} \end{aligned}
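To see where this comes from, differentiate the first branch with the chain rule for $|r| \leq c$: the factors of $c^2$ and $6 = 3 \cdot 2$ cancel exactly, which is why the constant $c^2/6$ was chosen.

\begin{aligned} \ell'(r) = \frac{c^2}{6} \cdot \left( -3 \left[ 1 - \left( \frac{r}{c} \right)^2 \right]^2 \right) \cdot \left( -\frac{2r}{c^2} \right) = r \left[ 1 - \left( \frac{r}{c} \right)^2 \right]^2. \end{aligned}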

In the field of robust statistics, the derivative of the loss function is often of more interest than the loss function itself. In this field, it is common to denote the loss function and its derivative by the symbols $\rho$ and $\psi$ respectively.

Plots of the Tukey loss function

Here is R code that computes the Tukey loss and its derivative:

tukey_loss <- function(r, c) {
  ifelse(abs(r) <= c,
         c^2 / 6 * (1 - (1 - (r / c)^2)^3),
         c^2 / 6)
}

tukey_loss_derivative <- function(r, c) {
  ifelse(abs(r) <= c,
         r * (1 - (r / c)^2)^2,
         0)
}


Here are plots of the loss function and its derivative for a few values of the $c$ parameter:

r <- seq(-6, 6, length.out = 301)
c <- 1:3

# plot of tukey loss
library(ggplot2)
theme_set(theme_bw())
loss_df <- data.frame(
  r = rep(r, times = length(c)),
  loss = unlist(lapply(c, function(x) tukey_loss(r, x))),
  c = rep(c, each = length(r))
)

ggplot(loss_df, aes(x = r, y = loss, col = factor(c))) +
  geom_line() +
  labs(title = "Plot of Tukey loss", y = "Tukey loss",
       col = "c") +
  theme(legend.position = "bottom")


# plot of tukey loss derivative
loss_deriv_df <- data.frame(
  r = rep(r, times = length(c)),
  loss_deriv = unlist(lapply(c, function(x) tukey_loss_derivative(r, x))),
  c = rep(c, each = length(r))
)

ggplot(loss_deriv_df, aes(x = r, y = loss_deriv, col = factor(c))) +
  geom_line() +
  labs(title = "Plot of derivative of Tukey loss", y = "Derivative of Tukey loss",
       col = "c") +
  theme(legend.position = "bottom")


Some history

According to what I could find, Tukey’s loss was proposed by Beaton & Tukey (1974) (Reference 2), but not in the form I presented above. Rather, they proposed weights to be used in an iteratively reweighted least squares (IRLS) procedure.
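Those weights are what $\psi(r)/r$ gives for the Tukey $\psi$ above: $w(r) = [1 - (r/c)^2]^2$ for $|r| \leq c$ and $w(r) = 0$ otherwise, so observations with very large residuals are ignored entirely. Here is a minimal sketch (the function name `tukey_weight` is mine, not from the paper):

```r
# Biweight weights: w(r) = psi(r) / r = (1 - (r/c)^2)^2 for |r| <= c, else 0.
# Small residuals get weight near 1; residuals beyond c get weight exactly 0.
tukey_weight <- function(r, c) {
  ifelse(abs(r) <= c, (1 - (r / c)^2)^2, 0)
}

tukey_weight(c(0, 2, 5), c = 4.685)
```

In an IRLS scheme, these weights would be recomputed from the current residuals at each iteration and fed into a weighted least squares fit.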

References:

1. Belagiannis, V., et al. (2015). Robust Optimization for Deep Regression.
2. Beaton, A. E., and Tukey, J. W. (1974). The Fitting of Power Series, Meaning Polynomials, Illustrated on Band-Spectroscopic Data.