Improved vtreat documentation


Nina Zumel has donated some time to greatly improve the vtreat R package documentation (now available as pre-rendered HTML here).

[Image: Chrome Vanadium Adjustable Wrench]

vtreat is an R data.frame processor/conditioner package that helps prepare real-world data for predictive modeling in a statistically justifiable manner.

Even with modern machine learning techniques (random forests, support vector machines, neural nets, gradient boosted trees, and so on) or standard statistical methods (regression, generalized regression, generalized additive models) there are common data issues that can cause modeling to fail. vtreat deals with a number of these in a principled and automated fashion.
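
For instance, here is a minimal sketch of the standard vtreat workflow for a numeric outcome, using designTreatmentsN() and prepare() (the small data frame d below is hypothetical, chosen only to show a missing value and a categorical column):

    library(vtreat)
    # hypothetical training data: a numeric column with a missing value,
    # a categorical column, and a numeric outcome y
    d <- data.frame(x = c(1, 2, NA, 3),
                    z = c('a', 'a', 'b', 'b'),
                    y = c(1, 2, 3, 4))
    # design a treatment plan from the training data
    treatments <- designTreatmentsN(d, varlist = c('x', 'z'), outcomename = 'y')
    # apply the plan: the result is an all-numeric frame with NAs safely
    # replaced and an indicator column recording where they occurred
    dTreated <- prepare(treatments, d, pruneSig = NULL)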

In particular vtreat emphasizes a concept called “y-aware pre-processing” and implements:

  • Treatment of missing values through safe replacement plus indicator column (a simple but very powerful method when combined with downstream machine learning algorithms).
  • Treatment of novel levels (new values of a categorical variable seen during test or application, but not seen during training) through sub-models (or impact/effects coding of pooled rare events).
  • Explicit coding of categorical variable levels as new indicator variables (with optional suppression of non-significant indicators).
  • Treatment of categorical variables with very large numbers of levels through sub-models (again impact/effects coding).
  • (Optional) user-specified significance pruning of levels coded into effects/impact sub-models.
  • Correct treatment of nested models or sub-models through data split (see here) or through the generation of “cross validated” data frames (see here, and the sketch after this list); these are issues similar to what is required to build statistically efficient stacked models or super-learners.
  • Safe processing of “wide data” (data with very many variables, often driving common machine learning algorithms to over-fit) through out of sample per-variable significance estimates and user controllable pruning (something we have lectured on previously here and here).
  • Collaring/Winsorizing of unexpected out of range numeric inputs.
  • (Optional) conversion of all variables into effects (or “y-scale”) units (through the optional scale argument to vtreat::prepare(), using some of the ideas discussed here, and also shown in the sketch after this list). This allows correct/sensible application of principal component analysis pre-processing in a machine learning context.
  • Joining in additional training-distribution data (the “catP” and “catD” variables), which can be useful in analysis.
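
The “cross validated” data frame idea from the list above can be sketched with vtreat's mkCrossFrameNExperiment(), re-using the hypothetical d from the earlier example (an illustrative sketch, not the full recipe):

    # build a treatment plan and a "cross validated" training frame in one
    # step, so impact-coded variables are not naively evaluated on the same
    # data they were fit on
    cfe <- mkCrossFrameNExperiment(d, varlist = c('x', 'z'), outcomename = 'y')
    dTrain <- cfe$crossFrame       # fit the downstream model on this frame
    treatments <- cfe$treatments   # keep the plan for test/application data
    # the optional scale argument to prepare() converts variables into
    # effects ("y-scale") units
    dScaled <- prepare(treatments, d, pruneSig = NULL, scale = TRUE)

The cross frame simulates out-of-sample behavior of the derived variables, which is the same issue one faces when building stacked models or super-learners.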

The idea is: even with a sophisticated machine learning algorithm, there are many ways messy real-world data can defeat the modeling process, and vtreat helps with at least ten of them. We emphasize: these problems are already in your data; you will simply build better and more reliable models if you attempt to mitigate them. Automated processing is no substitute for actually looking at the data, but vtreat supplies efficient, reliable, documented, and tested implementations of many of the commonly needed transforms.

To help explain the methods we have prepared additional documentation, collected in the pre-rendered HTML linked above.
