Previously in this series:
- Understanding the beta distribution
- Understanding empirical Bayes estimation
- Understanding credible intervals
- Understanding the Bayesian approach to false discovery rates
- Understanding Bayesian A/B testing
- Understanding beta binomial regression
Suppose you were a scout hiring a new baseball player, and were choosing between two who have had 100 at-bats each:

- A left-handed batter who has hit 30 hits / 100 at-bats
- A right-handed batter who has hit 30 hits / 100 at-bats
Who would you guess was the better batter?
This seems like a silly question: they both have the exact same batting record. But what if I told you that, historically, left-handed batters are slightly better hitters than right-handed ones? How could you incorporate that evidence?
In the last post, we used the method of beta-binomial regression to incorporate information (specifically the number of at-bats a player had) into a per-player prior distribution. We did this to correct a bias of the algorithm, but we could do a lot more with this method: in particular, we can include other factors that might change our prior expectations of a player.
These are particular applications of Bayesian hierarchical modeling, where the priors for each player are not fixed, but rather depend on other latent variables. In our empirical Bayesian approach to hierarchical modeling, we'll estimate this prior using beta-binomial regression, and then apply it to each batter. This strategy is useful in many applications beyond baseball: for example, if I were analyzing ad click-through rates on a website, I might notice that different countries have different click-through rates, and therefore fit a different prior for each. This could influence our Bayesian A/B tests, credible intervals, and more.
Setup
As usual, I’ll start with some code you can use to catch up if you want to follow along in R. If you want to understand what it does in more depth, check out the previous posts in this series. (As always, all the code in this post can be found here).
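Since the setup code itself isn't reproduced here, below is a rough sketch of how the career dataset used throughout this series could be assembled from the Lahman package. The pitcher filter, the threshold of 3 games pitched, and the exact column choices are assumptions based on the earlier posts, not a verbatim copy of them.

```r
library(dplyr)
library(tidyr)
library(Lahman)

# identify pitchers so we can exclude them from the batting analysis
pitchers <- Pitching %>%
  group_by(playerID) %>%
  summarize(gamesPitched = sum(G)) %>%
  filter(gamesPitched > 3)

# career totals per player, keeping the midpoint year of each career
career <- Batting %>%
  filter(AB > 0) %>%
  anti_join(pitchers, by = "playerID") %>%
  group_by(playerID) %>%
  summarize(H = sum(H), AB = sum(AB), year = mean(yearID)) %>%
  mutate(average = H / AB)

# add player names and batting hand (the bats column) from the Master table
career <- Master %>%
  select(playerID, nameFirst, nameLast, bats) %>%
  unite(name, nameFirst, nameLast, sep = " ") %>%
  inner_join(career, by = "playerID")
```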
Based on our last post, we perform beta-binomial regression using the gamlss package. This fits a model that allows the mean batting average μ to depend on the number of at-bats a player has had (through the term log10(AB)).
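The fitting call isn't shown above, so here's a sketch of what it could look like. The model name fit is illustrative, and the identity link for μ (so that coefficients are directly on the batting-average scale) is an assumption carried over from the beta-binomial regression post; the default logit link would work similarly.

```r
library(gamlss)

# beta-binomial regression: the mean mu depends on log10(AB)
# (identity link assumed so coefficients are on the batting-average scale)
fit <- gamlss(cbind(H, AB - H) ~ log10(AB),
              data = career,
              family = BB(mu.link = "identity"))
```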
The prior α₀ and β₀ can then be computed for each player based on μ and a dispersion parameter σ:

α₀ = μ / σ
β₀ = (1 - μ) / σ
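In code, that per-player computation might look like the following sketch, where fitted(fit, "mu") and fitted(fit, "sigma") pull out the fitted parameters for each row (career_eb is an illustrative name):

```r
# per-player prior parameters from the fitted mean and dispersion
career_eb <- career %>%
  mutate(mu = fitted(fit, "mu"),
         sigma = fitted(fit, "sigma"),
         alpha0 = mu / sigma,
         beta0 = (1 - mu) / sigma)
```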
Now we've corrected for one confounding factor, AB. One important aspect of this prediction is that it won't be useful when we've just hired a "rookie" player and we're wondering what his batting average will be. This observed variable is based on a player's entire career, such that a low number is evidence that a player didn't have much of a chance to bat. (If we wanted to make a prediction, we'd have to consider the distribution of possible values of AB the player could end up with and integrate over that, which is beyond the scope of this post).
But there’s some information we can use even at the start of a player’s career. Part of the philosophy of the Bayesian approach is to bring our prior expectations in mathematically. Let’s try doing that with some factors that influence batting success.
Right- and left-handed batters
It's well known in sabermetrics that left-handed batters tend to bat slightly better. (In fact, the general belief is that left-handed batters have an advantage against right-handed pitchers, but since most pitchers historically have been right-handed, this evens out to an advantage.) The Lahman dataset provides that information in the bats column: in the above code, I retained it as part of the career dataset.
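A quick count of that column shows the three codes it contains (a small sketch):

```r
# how many batters fall into each handedness category (B, L, R, or missing)?
career %>%
  count(bats)
```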
These letters represent "Both" (switch hitters), "Left", and "Right", respectively. One interesting feature is that while the ratio of righties to lefties is about 9-to-1 in the general population, in professional baseball it is only 2-to-1. Managers like to hire left-handed batters; in itself, this is some evidence of a left-handed advantage! We also see that there are a number of batters (mostly from earlier in the game's history) for whom we don't have handedness information. We'll filter them out of this analysis.
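Filtering them out is one line (a sketch; this assumes missing handedness is recorded as NA):

```r
# drop batters with unknown handedness before refitting
career <- career %>%
  filter(!is.na(bats))
```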
Incorporating this as a predictor is as simple as adding bats to the formula in the gamlss call (our beta-binomial regression):
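Here's a sketch of the updated call (fit2 is an illustrative name, and the identity link is the same assumption as before):

```r
# same beta-binomial regression, now with handedness as a predictor of mu
fit2 <- gamlss(cbind(H, AB - H) ~ log10(AB) + bats,
               data = career,
               family = BB(mu.link = "identity"))
```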
We can then look at the coefficients:
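For example, summary() on the fitted object prints the coefficient tables for both μ and σ (a sketch):

```r
# coefficient estimates, standard errors, and p-values for mu and sigma
summary(fit2)
```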
According to our beta-binomial regression, there is indeed a statistically significant advantage to being left-handed, with lefties hitting about 1% more often. This may seem like a small effect, but over the course of multiple games it could certainly make a difference. In contrast, there's apparently no detectable advantage to being able to bat with both hands. (This surprised me; does anyone know a reason this might be?)
For our empirical Bayes estimation, this means every combination of handedness and AB now has its own prior:
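As before, each player's prior comes from the fitted μ and σ, and the posterior update uses that player's own record (a sketch; career2 is an illustrative name):

```r
# priors that now depend on both AB and handedness,
# plus the usual posterior update with each player's own H and AB
career2 <- career %>%
  mutate(mu = fitted(fit2, "mu"),
         sigma = fitted(fit2, "sigma"),
         alpha0 = mu / sigma,
         beta0 = (1 - mu) / sigma,
         alpha1 = alpha0 + H,
         beta1 = beta0 + AB - H,
         estimate = alpha1 / (alpha1 + beta1))
```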
We can use these priors to improve our estimates of each player, by effectively giving a natural advantage to each left-handed batter. Note that this prior can still easily be overcome by enough evidence. For example, consider our hypothetical pair of batters from the introduction, where each has a 30% success rate, but where one is left-handed and one right-handed. If the batters had few at-bats, we'd guess that the left-handed batter was better, but the posteriors for the two will converge as AB increases:
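Here's a hedged sketch of that comparison. The helper function, the coefficient names (batsL, batsR), and the identity link for μ are assumptions about how fit2 was specified above; the 30% success rate comes from the example itself.

```r
# posterior (shrunken) estimate for a hypothetical batter who hits 30% of the time
posterior_estimate <- function(AB, bats_offset, rate = 0.30) {
  co <- coef(fit2)                          # mu coefficients (identity link assumed)
  sigma <- exp(coef(fit2, what = "sigma"))  # sigma is modeled on a log link
  mu <- co["(Intercept)"] + co["log10(AB)"] * log10(AB) + bats_offset
  H <- rate * AB
  alpha1 <- mu / sigma + H
  beta1 <- (1 - mu) / sigma + AB - H
  unname(alpha1 / (alpha1 + beta1))
}

# with few at-bats, the left-handed prior pulls the estimate up a bit...
posterior_estimate(100, coef(fit2)["batsL"])
posterior_estimate(100, coef(fit2)["batsR"])
# ...but with enough evidence the two posteriors converge
posterior_estimate(10000, coef(fit2)["batsL"])
posterior_estimate(10000, coef(fit2)["batsR"])
```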
Over time
One of the most dramatic pieces of information we've "swept under the rug" in our analysis is the time period when each player was active. It's absurd to expect that players in the 1880s would have the same range of batting averages as players today, and we should take that into account in our estimates. I thus included year = mean(yearID) in the summary of each player when I constructed this data, to summarize the time period of each player using the midpoint of their career.
Could we simply fit a linear model with respect to year (~ log10(AB) + bats + year)? Well, before we fit a model we should always look at a graph. A boxplot comparing decades is a good start (here I'm looking only at players with >= 500 AB in their careers):
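A sketch of that plot with ggplot2 (the decade computation is illustrative):

```r
library(ggplot2)

# batting average by decade, restricted to players with at least 500 career at-bats
career %>%
  filter(AB >= 500) %>%
  mutate(decade = 10 * floor(year / 10)) %>%
  ggplot(aes(factor(decade), average)) +
  geom_boxplot() +
  xlab("Decade") +
  ylab("Batting average (H / AB)")
```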
Well, there’s certainly a trend over time, but there’s nothing linear about it: batting averages have both risen and fallen across time. If you’re interested in baseball history and not just Bayesian statistics, you may notice that this graph marks the “power struggle” between offense and defense:
- The rise in the 1920s and 1930s marks the end of the dead-ball era, when hitting, especially home runs, became a more important part of the game
- The batting average "cools off" as pitchers adjust their technique, especially after the strike zone was enlarged in 1961
- Batting average rose again in the 1970s thanks to the designated hitter rule, under which pitchers (in one of the two main leagues) were no longer required to bat
- It looks like batting averages may again be drifting downward
In any case, we certainly can’t fit a simple linear trend here. We could instead fit a natural cubic spline using the ns function:^{1}
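A sketch of that fit (ns comes from the splines package; fit3 is an illustrative name):

```r
library(splines)

# let the mean batting average vary smoothly over time with a natural cubic spline
fit3 <- gamlss(cbind(H, AB - H) ~ log10(AB) + bats + ns(year, 5),
               data = career,
               family = BB(mu.link = "identity"))
```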
(If you're not familiar with splines, don't worry about them; what's important is that even in a linear model, we can include nonlinear trends.)
We now have a prior for each year, handedness, and number of at-bats. For example, here are the distributions for a hypothetical player with AB = 1000:
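One way to build such a plot is sketched below. It assumes gamlss's predictAll() can return the fitted μ and σ for a grid of new data (it may also need the original data passed through its data argument); new_batters is an illustrative name.

```r
# prior mean and 95% prior interval over time for a hypothetical AB = 1000 batter
new_batters <- crossing(year = seq(1900, 2010, 5),
                        bats = c("L", "R"),
                        AB = 1000)

params <- predictAll(fit3, newdata = new_batters, type = "response")

new_batters %>%
  mutate(alpha0 = params$mu / params$sigma,
         beta0 = (1 - params$mu) / params$sigma,
         prior_mean = params$mu,
         low = qbeta(.025, alpha0, beta0),
         high = qbeta(.975, alpha0, beta0)) %>%
  ggplot(aes(year, prior_mean, color = bats)) +
  geom_line() +
  geom_ribbon(aes(ymin = low, ymax = high, fill = bats), alpha = .1, linetype = 2)
```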
Note that those intervals don't represent uncertainty about our trend: they represent the 95% range in prior batting averages. Each combination of year and left/right-handedness is a beta distribution, of which we're seeing just one cross-section.
One of the implicit assumptions of the above model is that the effect of left-handedness hasn't changed over time. But this may not be true! We can change the formula to allow an interaction term, ns(year, 5) * bats, which lets the effect of handedness change over time:
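A sketch of that model (fit4 is an illustrative name):

```r
# allow the handedness effect itself to change over time via an interaction
fit4 <- gamlss(cbind(H, AB - H) ~ log10(AB) + ns(year, 5) * bats,
               data = career,
               family = BB(mu.link = "identity"))
```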
The priors now look like:
Interesting: we can now see that the gap between left-handed and right-handed batters has been closing since the start of the game, such that today the gap has basically disappeared. This suggests that managers and coaches may have learned how to deal with left-handed batters. Inspired by this, I might wonder whether the percentage of games started by left-handed pitchers has been going up over time, and I notice that it has:
This is one thing I like about fitting hierarchical models like these: they don't just improve your estimation; they can also give you insights into your data.
Let's go back to those two batters with a record of 30 hits out of 100 at-bats. We've now seen that this would be a different question in different years. Let's consider what it would look like in three different years, each 50 years apart:
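A sketch of that comparison, again leaning on predictAll() for the fitted μ and σ at new combinations of year and handedness (the years 1915, 1965, and 2015 come from the text; hypothetical is an illustrative name):

```r
# the same 30-for-100 record, placed in three different eras
hypothetical <- crossing(year = c(1915, 1965, 2015),
                         bats = c("L", "R"),
                         AB = 100,
                         H = 30)

params <- predictAll(fit4, newdata = hypothetical, type = "response")

hypothetical %>%
  mutate(alpha1 = params$mu / params$sigma + H,
         beta1 = (1 - params$mu) / params$sigma + AB - H,
         estimate = alpha1 / (alpha1 + beta1))
```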
How do these posterior distributions (the alpha1 and beta1 we chose) differ?
If this comparison had happened in 1915, you may have wanted to pick the left-handed batter. We wouldn't have been sure he was better (we'd need to apply one of these methods for that), but it was more likely than not. But today, there'd be basically no reason to: left- versus right-handedness provides almost no extra information.
Note: Uncertainty in hyperparameters
We've followed the philosophy of empirical Bayes so far: we fit hyperparameters (α₀, β₀, or our coefficients for time and handedness) for our model using maximum likelihood (e.g. beta-binomial regression), and then use that as the prior for each of our observations.
Empirical Bayes in a nutshell: Estimate priors like a frequentist then carry out a Bayesian analysis.
— Data Science Fact (@DataSciFact) June 10, 2016
There's a problem I've been ignoring so far with the empirical Bayesian approach, which is that there's uncertainty in these hyperparameters as well. When we come up with an alpha and beta, or with particular coefficients over time, we are treating those as fixed knowledge, as if these are the priors we "entered" the experiment with. But in fact each of these parameters was chosen from this same data, and each comes with a confidence interval that we're entirely ignoring. This is sometimes called the "double-dipping" problem among critics of empirical Bayes.
This wasn't a big deal when we were estimating just α₀ and β₀ for the overall dataset: we had so much data, and were estimating so few parameters, that we could feel good about the approach. But now that we're fitting this many parameters, we're pushing it. Actually quantifying this uncertainty, and choosing methods robust to the charge of "double-dipping", involves Bayesian methods outside the scope of this series. But I wanted to note that this post reaches the edge of what empirical Bayes can be used for.
What’s Next: Mixture models
We’ve been treating our overall distribution of batting averages as a beta distribution. But what if that weren’t a good fit? For example, what if we had a multimodal distribution?
We’ve been filtering out pitchers during this analysis. But when we leave them in, the data looks a lot less like a beta. The dashed density curve represents the beta distribution we would naively fit to this data.
We can see that the distribution of batting averages with pitchers included isn't made up of a single beta distribution; it's more like two separate ones mixed together. Imagine that you didn't know which players were pitchers,^{2} and you wanted to separate the data into two groups according to your best prediction. This is very common in practical machine learning applications, such as clustering and segmentation.
In my next post I'm going to introduce mixture models, treating the distribution of batting averages as a mixture of two beta-binomial distributions and estimating which player belongs to which group. This will also introduce the concept of an expectation-maximization algorithm, which is important in both Bayesian and frequentist statistics. We'll see that mixture models are still a good fit for the empirical Bayes framework, and show how to calculate a posterior probability for the cluster each player belongs to.
Footnotes

1. Why 5 degrees of freedom? Not very scientific; I just tried a few and picked one that roughly captured the shapes we saw in the boxplots. If you have too few degrees of freedom you can't capture the complex trend we're seeing here, but if you have too many you'll overfit to noise in your data. ↩

2. Admittedly it's not very realistic to assume we don't know which players are pitchers. But this gives us a great example of fitting a mixture model, which will be an important element of this series. ↩