(This article was first published on **Back Side Smack » R Stuff**, and kindly contributed to R-bloggers)

Here’s a post generated from my own ignorance of statistics (as opposed to just being marred by it)! In Labor Economics we walked through something called the truncated normal distribution. Truncated distributions come up a lot in the sciences because you may have a sample from a large population which is normally distributed, but the sample itself is selected only from a certain range. If you have a sample of college students you shouldn’t expect them to reflect the population of 18-24 year olds, simply because some 18 year olds choose not to attend college. Or if you are sampling apples at the grocery store, you are looking at a non-randomly selected subset of all apples in the world because the tiny ones get turned into applesauce. The problems are (to use a graduate school phrase) non-trivial. As we will see below, if you assume a truncated distribution is just a normal, well-behaved distribution, you will incorrectly estimate the mean and underestimate the variance.

*[Figure: from the Truncated Normal album]*

But I’m actually interested in a much narrower issue. There are ways around dealing with truncated distributions and they begin with estimating just how much of the distribution was cut off. If we know certain things about the original distribution and we know where the truncation point was, we can compute what the new mean and variance ought to be. What do we mean by certain things? First we want a scale-free measure of the truncation point.

α = (a − μ)/σ

Where a is the actual truncation point and μ, σ are the mean and standard deviation of the normal distribution. α then becomes the scaled (standardized, if you will) truncation point. Once we have a scale-free truncation point we can begin to work out an estimate of how much of the distribution has been cut off. In order to do this we need to compute something called the inverse Mills ratio. The inverse Mills ratio is commonly but not exclusively associated with truncated distributions. We are going to follow Heckman and denote the inverse Mills ratio with λ. For our one-sided truncation:

λ(α) = φ(α) / (1 − Φ(α))

Where φ and Φ are the pdf and CDF of the standard normal distribution, respectively. Compute this guy and the expected value of the truncated distribution is just one short step away. Specifically, E[x | x > a] = μ + σλ(α). Easy, right? Well it would have been easy had I taken good notes. But I didn’t. You see, while the formula above hints that the pdf and CDF to be computed are functions of α, I had just written λ and assumed I could work out the details later. Turns out the distinction is important. Remember that we created α as a scale-free measure of truncation, so no matter what the standard deviation of the original distribution was, the pdf and CDF should be evaluated as if the standard deviation were 1. This all makes sense in the light of day, but when you are playing around in R trying to test your understanding of truncated distributions you could be forgiven for skipping over this detail. And I didn’t have a solid way to test my theory with one data point–R has a few canned functions for truncated distributions, but because the underlying foundations are stochastic it isn’t like I could judge equality with one comparison.
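To make the distinction concrete, here is a small sketch of the correct calculation next to the mistake described above (in Python rather than R, using scipy’s `norm` for φ and Φ; the parameters in the example are made up for illustration):

```python
from scipy.stats import norm

def trunc_mean(mu, sigma, a):
    """E[x | x > a] for x ~ N(mu, sigma^2), truncated from below at a."""
    alpha = (a - mu) / sigma                          # scale-free truncation point
    lam = norm.pdf(alpha) / (1 - norm.cdf(alpha))     # inverse Mills ratio (standard normal)
    return mu + sigma * lam

def trunc_mean_wrong(mu, sigma, a):
    """The mistake: evaluating the pdf and CDF with the original sigma."""
    alpha = (a - mu) / sigma
    lam = norm.pdf(alpha, scale=sigma) / (1 - norm.cdf(alpha, scale=sigma))
    return mu + sigma * lam

# For N(0, 4^2) truncated at 0 the true conditional mean is
# 4 * sqrt(2/pi), about 3.19; the "wrong" version comes out far lower.
print(trunc_mean(0, 4, 0), trunc_mean_wrong(0, 4, 0))
```

The only difference between the two functions is the `scale` argument to the pdf and CDF, which is exactly the detail that bit me.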

So I cheated. Specifically, I bootstrapped a hundred normally distributed samples and compared the computed conditional mean to the actual conditional mean. What did this look like in practice? It’s actually kind of pretty:
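For the curious, the sampling step can be sketched like this (a Python stand-in for the R original; the sample sizes and the parameters μ = 0, σ = 4, truncation at a = 0 are my own assumptions, not necessarily the ones used for the plots):

```python
import numpy as np

rng = np.random.default_rng(42)
mu, sigma, a = 0.0, 4.0, 0.0         # assumed data-generating parameters
n_samples, n_obs = 100, 10_000

trunc_means = []
for _ in range(n_samples):
    x = rng.normal(mu, sigma, n_obs)      # a full, untruncated sample
    trunc_means.append(x[x > a].mean())   # mean of the truncated subset

emp = float(np.mean(trunc_means))
# emp should sit near the theoretical E[x | x > 0] = sigma * sqrt(2/pi)
print(emp)
```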

*[Figure: the bootstrapped samples, from the Truncated Normal album]*

Because I created each sample and each truncated subset of those samples, I know their means. I also know the parameters of the normal distribution used to generate each sample. So I can test my theory: are the functions supposed to use the original standard deviation or a standard deviation of 1? Let’s see.

*[Plot: expected and actual means of the truncated distributions, from the Truncated Normal album]*

What we see above are all of the expected and actual means of the truncated distributions. The two distributions centered around 2 are the true means and the conditional means computed with a pdf assuming a standard deviation of 1. The distribution lagging around 0.5 was created by assuming a standard deviation of 4 for the inverse Mills ratio. We have a clear winner!

Knowing the method to calculate the mean of a truncated distribution at first seems like quite a feat. But there are immediate practical problems to estimating truncated samples in the wild. In my bootstrapping example above I already knew the parameters of the data generating process for the population. In the real world we often don’t observe all of the features of the population. Perhaps for college students versus young adults we can rely on cross-sectional surveys, but for other examples we have no such out.

Imagine attempting to estimate a labor supply problem (à la Heckman). People might respond to wage changes by choosing to work more or less, but how do you deal with people who choose to work 0 hours? Students, retirees, spouses, hippies, they all work 0 hours, but assuming that their decision to work 0 hours represents a choice of 0 given some wage, rather than a choice not to work given some wage, would be the same as assuming that all the apples in the grocery store are randomly selected on the basis of size. Those people are staying out of the labor force because (in a very reductionist manner) their reservation wages are higher than the offered wage, but you have no idea what that reservation wage actually is. So you don’t know the true mean or the truncation point. There are a few methods to recover an estimate of the reservation wage and therefore compute the characteristics of the truncated distribution of earnings, but they are all more complex than a few lines of code linking the parameters of a distribution to an answer. But if it were easy they wouldn’t call it baseball.

R code is below:
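The original listing didn’t survive the repost, so as a stand-in here is the whole experiment sketched in Python (numpy/scipy; μ = 0, σ = 4 and truncation at a = 0 are illustrative assumptions, and I estimate μ and σ from each sample so the predictions spread out the way the plotted distributions do):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
mu, sigma, a = 0.0, 4.0, 0.0             # assumed data-generating process

actual, right, wrong = [], [], []
for _ in range(100):
    x = rng.normal(mu, sigma, 10_000)
    actual.append(x[x > a].mean())        # true mean of the truncated subset

    m, s = x.mean(), x.std()              # estimated parameters of this sample
    alpha = (a - m) / s                   # scale-free truncation point
    # correct: standard-normal pdf/CDF evaluated at alpha
    lam = norm.pdf(alpha) / (1 - norm.cdf(alpha))
    right.append(m + s * lam)
    # incorrect: pdf/CDF evaluated with the sample standard deviation
    lam_w = norm.pdf(alpha, scale=s) / (1 - norm.cdf(alpha, scale=s))
    wrong.append(m + s * lam_w)

# the "right" predictions track the actual truncated means;
# the "wrong" ones land well below them
print(np.mean(actual), np.mean(right), np.mean(wrong))
```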
