In my last post, I wrote about how I compiled a US Social Security Administration data set into something usable in R, and mentioned some issues with scaling it up for bigger datasets. I also mentioned the need for test data to check the accuracy of my estimates. First, I’ll show you how I prepped the dataset so that it became more scalable (for the code that got us here, see my last post):
# Popularity-weighted average year for each name
name_data_wavgpop_unisex = ddply(name_data, .(Name),
    function(x) sum(x$Rel_Pop * as.numeric(x$Year)) / sum(x$Rel_Pop))
name_data_wavgpop_unisex$V1 = round(name_data_wavgpop_unisex$V1, 0)
Above I’ve taken a different tactic for predicting expected year of birth from a name than the one I started out with in my last post. Here I’m using each name’s relative popularity in a given year as the weight for that year value; multiplying the weights by the years and summing gives a weighted average of Year, which serves as the predicted year of birth. I then round the predictions to the nearest integer and continue on my way. Also, because test data doesn’t seem to come packaged with gender info, I’ve constructed the weighted averages using all relative popularity values for each name, regardless of whether that name has been used for both sexes (e.g. “Jordan”).
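To make the weighted-average idea concrete, here’s a tiny toy example with made-up relative popularities for a hypothetical name (the counts are invented purely for illustration):

```r
# Hypothetical relative popularities of one name across three years
years   <- c(1960, 1970, 1980)
rel_pop <- c(0.1, 0.3, 0.6)

# Popularity-weighted average year, rounded to the nearest integer
pred_year <- round(sum(rel_pop * years) / sum(rel_pop))
pred_year  # 1975
```

Because the 1980 cohort carries most of the weight, the predicted birth year lands closer to 1980 than a plain average of the three years would.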
Now enter the test data. I’ve discovered that the easiest way of getting real names and ages off the internet is to look for lists of victims of some horrible tragedy. The biggest such list I could find was a list of 9/11 victims. It’s not exactly formatted for easy analysis, and I was too lazy to get the data programmatically, so I just copy-pasted the names and ages from the first four lists on the page (all from either American Airlines or United Airlines) into LibreOffice Calc, for a total of 285 observations. I then extracted the first names, and imported the first names and ages into R.
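For anyone who’d rather skip the spreadsheet step, the first-name extraction can also be done in R. This is just a sketch; the Full.Name column and the sample rows below are hypothetical stand-ins for the copy-pasted data:

```r
# Hypothetical stand-in for the copy-pasted victim list
victims <- data.frame(Full.Name = c("John Smith", "Jane Q. Doe"),
                      Age = c(45, 32),
                      stringsAsFactors = FALSE)

# Take the first whitespace-separated token as the first name
victims$Name <- sapply(strsplit(victims$Full.Name, " "), `[`, 1)
victims$Name  # "John" "Jane"
```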
worldtrade = read.csv("world trade.csv")
# Join each first name to its predicted birth year
worldtrade.ages = sqldf("SELECT a.*, b.V1 AS Year
                         FROM [worldtrade] AS a
                         LEFT JOIN name_data_wavgpop_unisex AS b
                         ON a.Name = b.Name")
worldtrade.ages$Pred.Age = 2001 - as.numeric(worldtrade.ages$Year)
As you can see, I opted to use sqldf to append the appropriate predicted birth years for each name on the list I imported. I then got the predicted ages by subtracting each predicted birth year from 2001. Finally, let’s have a look at the resulting model fit (showing how close each predicted age was to the real age of the victim at the time of death):
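For reference, the fit statistics and scatter plot can be produced with a simple linear model. This is a sketch of that check; I’m assuming the actual ages live in a column called Age alongside the Pred.Age column created above:

```r
# Regress actual age on predicted age (Age column name assumed)
fit <- lm(Age ~ Pred.Age, data = worldtrade.ages)

summary(fit)$adj.r.squared  # adjusted R-squared
summary(fit)$sigma          # residual standard error, in years

# Scatter plot of predicted vs. actual, with the ideal y = x line
plot(worldtrade.ages$Pred.Age, worldtrade.ages$Age,
     xlab = "Predicted age", ylab = "Actual age")
abline(0, 1)
```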
As you can see, it’s not a tight prediction by any means. According to the model fit statistics, the adjusted R-squared is 14.6% and the residual standard error is 15.58 years. You can also see from the scatter plot that the relationship doesn’t become reasonably linear until about age 30 onwards. Overall, I’d say it’s not too impressive, and I’d imagine it’s even worse at predicting who’s under 10 years old!
Well, this was fun (if only a little disappointing). That’s statistics for you – sometimes it confirms your ideas, and sometimes it humbles you. If you think you can use this name trending data to predict ages better than I’ve done here, feel free to show me!