Examining the Accuracy of Fantasy Football Projections with an Interactive Scatterplot in R


In prior posts, we presented the accuracy of different analysts in projecting football players’ performance, finding that the average of analysts’ projections was more accurate than any individual analyst’s.  In this post, we present an OpenCPU app for examining the accuracy of historical fantasy football projections.  The app lets you compare projections across analysts, positions, seasons, league scoring settings, and types of averaging, and it includes an interactive scatterplot.

The App

The app is located here:

http://apps.fantasyfootballanalytics.net/projections

How to Examine Historical Accuracy of Fantasy Football Projections

  1. Click the “Accuracy” tab.
  2. Click “Change Data Settings”.
  3. Select a previous season (so we know how projections compared to actual performance).
  4. Change the league settings to tailor the projected/actual points to your league settings.
  5. Choose the calculation type: average (mean), weighted average, or robust average (see the sketch after this list).  For more info on these calculation types, see here.
  6. Choose the analysts to include and, if you selected a weighted average, how much to weight each analyst in the average projections.
  7. Click “Load”.

Note: there are other settings you can modify, as well.  For a description of these settings, see here.
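To make the three calculation types concrete, here is a minimal sketch in R using hypothetical projections and weights.  The robust estimator shown (the Hodges–Lehmann pseudomedian) is an illustration; the app’s exact robust estimator may differ.

# Hypothetical projections for one player from four analysts
proj <- c(245, 260, 238, 252)
# Hypothetical weights based on historical accuracy
wts <- c(0.30, 0.25, 0.25, 0.20)

mean(proj)                    # average (mean)
weighted.mean(proj, w = wts)  # weighted average

# One possible robust average: the Hodges-Lehmann pseudomedian
# (an assumption; the app's exact robust estimator may differ)
wilcox.test(proj, conf.int = TRUE)$estimate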

Interactive Scatterplot

The page displays two scatterplots.  The top scatterplot of projected versus actual points is built with ggplot2 and displays a LOESS smoother with its confidence interval, along with the R-squared value for a linear (not LOESS) fit.
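A minimal sketch of how such a plot can be built in ggplot2, using simulated data (the app’s actual data frame and column names are assumptions):

library(ggplot2)

# Simulated projected and actual points (stand-ins for the app's data)
set.seed(1)
dat <- data.frame(projectedPts = runif(100, 0, 300))
dat$actualPts <- dat$projectedPts + rnorm(100, 0, 40)

# R-squared from the linear (not LOESS) fit
rsq <- summary(lm(actualPts ~ projectedPts, data = dat))$r.squared

ggplot(dat, aes(x = projectedPts, y = actualPts)) +
  geom_point() +
  geom_smooth(method = "loess") +  # LOESS smoother with confidence band
  annotate("text", x = 50, y = 300, label = paste0("R^2 = ", round(rsq, 2))) +
  labs(x = "Projected Points", y = "Actual Points")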

The bottom scatterplot of projected versus actual points is interactive.  You can use the legend to select which positions to display.  Hover over a dot to see how many points that player was projected to score and actually scored.  For instance, in 2014, we can see that Robert Griffin III greatly under-performed expectations, whereas DeMarco Murray exceeded expectations and Tom Brady fell close to expectations.
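The app’s interactive plot is not necessarily built this way, but here is one way to get similar hover and legend behavior with the plotly package (player names and positions are simulated):

library(plotly)

# Reuse the simulated 'dat' from the previous sketch, adding hypothetical
# player names and positions for the tooltip and legend
dat$player   <- paste("Player", seq_len(nrow(dat)))
dat$position <- sample(c("QB", "RB", "WR", "TE"), nrow(dat), replace = TRUE)

# Hovering shows projected points, actual points, and the player's name;
# clicking legend entries toggles positions on and off
plot_ly(dat, x = ~projectedPts, y = ~actualPts, color = ~position,
        text = ~player, type = "scatter", mode = "markers")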

Accuracy Table

The table summarizes the accuracy of historical projections by position using several metrics:

  • mean error (ME): closer to zero is better (positive values mean the projections are under-estimates, negative values mean the projections are over-estimates)
  • root mean squared error (RMSE): lower is better
  • mean absolute error (MAE): lower is better
  • mean percentage error (MPE): closer to zero is better (positive values mean the projections are under-estimates, negative values mean the projections are over-estimates)
  • mean absolute percentage error (MAPE): lower is better
  • mean absolute scaled error (MASE): lower is better
  • R-squared (RSQ): higher is better

R-squared is a measure of relative fit, whereas the others are measures of absolute fit.  Note: the high percentage estimates of error (MPE and MAPE) reflect the fact that many players scored very few points, which inflates percentage-based error estimates.
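Here is a minimal sketch of how these metrics can be computed in R, using simulated projections.  The MASE benchmark shown (the in-sample mean) is an assumption; the app may scale by a different naive forecast.

# Simulated actual and projected points
set.seed(2)
projected <- runif(50, 10, 300)
actual    <- projected + rnorm(50, -5, 40)  # slight over-projection built in

err <- actual - projected  # positive = under-estimate, negative = over-estimate

ME   <- mean(err)
RMSE <- sqrt(mean(err^2))
MAE  <- mean(abs(err))
MPE  <- mean(err / actual) * 100
MAPE <- mean(abs(err / actual)) * 100
MASE <- MAE / mean(abs(actual - mean(actual)))  # scaled by a naive mean forecast
RSQ  <- summary(lm(actual ~ projected))$r.squared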

Interesting Observations

  • The average of analysts was more accurate than the individual analysts, consistent with the principle of the wisdom of the crowd.  For more info, see here.
  • The weighted average was slightly more accurate than the mean or robust average.  Note, however, that the default weights were calculated based on historical accuracy, so it remains to be seen whether these weights will apply to future projections.  If the best analysts are consistently more accurate than other analysts, the weighted average will likely continue to outperform the mean.  If, on the other hand, analysts don’t reliably outperform each other, the mean might be more accurate.
  • The weighted average explained about 60% of the variation in players’ actual performance.  That means that the projections are somewhat accurate but have much room for improvement.  Nevertheless, the projections are likely more accurate than pre-season rankings.
  • Projections were more accurate for some positions than others.  Projections were most accurate for QBs and WRs.  Projections were least accurate for Team Defenses (DST) and individual defensive players (IDP).  For more info, see here.
  • Projections over-estimated players’ performance by about 5–6 points on average across most positions (based on mean error).  It will be interesting to see if this pattern holds in future seasons.  If it does, we could account for this over-estimation by adjusting players’ projections (as sketched below).  In a future post, I hope to explore the types of players for whom this over-estimation occurs.
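For illustration, a minimal sketch of such a bias correction, reusing projected and ME from the metrics sketch above (whether a constant shift is the right correction is an open question):

# ME = mean(actual - projected) is negative when projections over-estimate,
# so adding it shifts projections down toward observed performance
adjusted_projection <- projected + ME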

But don’t take my word for it. Test it out yourself and see what you find.  And let me know if you find something interesting!



