Comparing Tree-Based Classification Methods via the Kaggle Otto Competition

April 27, 2015

In this post, I'm going to look at the relative performance of different tree-based classification methods in R, using the Kaggle Otto Group Product Classification Challenge as an example. The competition challenges participants to correctly classify products into 1 of 9 classes based on 93 features. I'll start with basic decision trees and move on to ensemble methods: bagging, random forests, and boosting.

Basic Decision Tree – tree package

Let’s start by looking at one of the most basic decision tree learners in R, the tree function in the “tree” package. This is a very simple model to implement and run.

# Clear environment workspace
rm(list=ls())
# Load data
train <- read.csv("/Users/vabraham24/Documents/RStudio/kaggle_otto/data/train.csv")
test <- read.csv("/Users/vabraham24/Documents/RStudio/kaggle_otto/data/test.csv")
samplesub <- read.csv("/Users/vabraham24/Documents/RStudio/kaggle_otto/data/sampleSubmission.csv")
# Remove id column so it doesn't get picked up by the current classifier
train <- train[,-1]
test <- test[,-1]
summary(train)
summary(test)
# Create disjoint sample train and test datasets for prototyping new models
# (seeded so the split is reproducible; stest rows are unseen by strain)
set.seed(11)
train.idx <- sample(nrow(train), 6000, replace=FALSE)
strain <- train[train.idx,]
stest <- train[sample(setdiff(1:nrow(train), train.idx), 2000),]

# Install tree package
install.packages('tree')
library(tree)
# Set a seed so you get the same results every time you run the model below;
# the number itself does not matter
set.seed(12)
# Create a decision tree model using the target field as the response and all 93 features as inputs
fit1 <- tree(as.factor(target) ~ ., data=strain)
plot(fit1)
title(main="tree")
text(fit1)
# Test the tree model on the holdout test dataset
fit1.pred <- predict(fit1, stest, type="class")
table(fit1.pred,stest$target)
fit1$error <- 1-(sum(fit1.pred==stest$target)/length(stest$target))
fit1$error

[Figure: plot of the fitted tree model]

I was able to achieve an error rate of 0.406 with this model, which seems alright at first glance. But you'll immediately notice the deficiencies when you look at the actual tree: it has no terminal nodes for Class_1, Class_3, Class_4, or Class_7. That's going to be a major problem when doing any further prediction. A model that cannot predict nearly half of the possible classes is badly underfit, and we can safely say it is not worth using for new predictions.
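
One way to double-check this is cross-validated pruning, which the "tree" package supports directly. Here is a minimal sketch operating on fit1 from above; if the cross-validated error is flat across tree sizes, more pruning won't help:

# Cross-validate misclassification error over a range of tree sizes
fit1.cv <- cv.tree(fit1, FUN=prune.misclass)
plot(fit1.cv$size, fit1.cv$dev, type="b",
     xlab="Tree size (terminal nodes)", ylab="CV misclassifications")
# Prune to the size with the lowest cross-validated error
fit1.pruned <- prune.misclass(fit1, best=fit1.cv$size[which.min(fit1.cv$dev)])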

Basic Decision Tree – rpart package

Let's try another recursive partitioning tree, the rpart function from the "rpart" package. I don't expect this model to behave much differently from the tree model above, but a lot of modeling is about trying different things and seeing what works.

# Install rpart package
install.packages('rpart')
library(rpart)
# Set a seed so you get the same results every time you run the model below;
# the number itself does not matter
set.seed(13)
# Fit on the same sample training set used for the tree model above
fit2 <- rpart(as.factor(target) ~ ., data=strain, method="class")
par(xpd=TRUE)
plot(fit2, compress=TRUE)
title(main="rpart")
text(fit2)
# Test the rpart model on the holdout test dataset
fit2.pred <- predict(fit2, stest, type="class")
table(fit2.pred,stest$target)
fit2$error <- 1-(sum(fit2.pred==stest$target)/length(stest$target))
fit2$error

[Figure: plot of the fitted rpart model]

This model achieved the exact same error rate as the tree model (0.406). The rpart tree partitions slightly differently than the tree model, but the terminal nodes are the same: again, none of them end in Class_1, Class_3, Class_4, or Class_7. From the last two models, I can see that a single recursively partitioned tree is not ideal when working with many features and many classification categories. Splits are chosen greedily, so every decision below a split is conditioned on it, and signal from features that would only help in a different branch is never recovered. This leads to very simple models that discard lots of useful signal. Anyway, the point of this exercise was to compare the performance of different trees. Basic decision trees have their place in that they are easy to understand, but as seen above, they lose good signal easily and have low practical performance.
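
For rpart in particular, the fitted object records how much each extra split actually helped, which makes the "very simple model" problem easy to see. A minimal sketch using rpart's built-in complexity table (nothing here changes the results above):

# xerror is the cross-validated relative error at each tree size
printcp(fit2)
plotcp(fit2)
# Prune back to the cp value with the lowest cross-validated error
best.cp <- fit2$cptable[which.min(fit2$cptable[,"xerror"]), "CP"]
fit2.pruned <- prune(fit2, cp=best.cp)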

Bagging – adabag package

Now that we understand the deficiencies plaguing the basic decision tree models, we can start to look at more complex tree-based models that aim to correct them by growing multiple trees and aggregating their predictions. Bagging, or bootstrap aggregating, reduces the variance of a single decision tree by training many trees on bootstrap resamples of the training data and, for each observation, taking the most commonly predicted response (or class, in our case). Theoretically, this should reduce the overfitting found in a single decision tree model.
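
Before running the adabag version, here is a minimal hand-rolled sketch of that idea, just to make the mechanics concrete (this reuses rpart from above and is not the adabag implementation):

# Hand-rolled bagging: grow B trees on bootstrap resamples, then majority-vote
B <- 25
votes <- matrix(NA_character_, nrow=nrow(stest), ncol=B)
for (b in 1:B) {
  boot <- sample(nrow(strain), replace=TRUE)
  tree.b <- rpart(as.factor(target) ~ ., data=strain[boot,], method="class")
  votes[,b] <- as.character(predict(tree.b, stest, type="class"))
}
# Modal predicted class per observation
bag.pred <- apply(votes, 1, function(v) names(which.max(table(v))))
1 - mean(bag.pred == stest$target)  # holdout error rate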

# Install adabag package
install.packages('adabag')
library(adabag)
# Set a seed so you get the same results every time you run the model below;
# the number itself does not matter
set.seed(14)
# Run the standard version of the bagging model.
ptm3 <- proc.time()
fit3 <- bagging(target ~ ., data=strain, mfinal=50)
fit3$time <- proc.time() - ptm3
# Test the bagging model on the holdout test dataset
fit3.pred <- predict(fit3, stest, newmfinal=50)
table(as.factor(fit3.pred$class), stest$target)
# predict.bagging computes the holdout error directly
fit3.pred$error
Confusion Matrix – bagging

         Class_1 Class_2 Class_3 Class_4 Class_5 Class_6 Class_7 Class_8 Class_9
Class_2       38     487     248      77       3      45      60      61      70
Class_5        0       5       5       0      73       0       1       1       0
Class_6        4       4       0       2       0     388       9      19       9
Class_8        8      18      24       5       0      19      15     181      22
Class_9        9       3       0       0       0       9       4      10      64

*Columns: actual classes; rows: predicted classes

Wow, this model has an error rate of 0.741, even worse than the basic decision trees. It did not classify any observations as Class_1, Class_3, Class_4, or Class_7, and it leans heavily towards Class_2. My intuitive guess as to why: since the method takes a majority vote over many similar trees and Class_2 is the most commonly occurring class in the training data, the vote became biased towards Class_2.
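
That guess is easy to check against the data loaded at the top:

# Proportion of training observations in each class; Class_2 is the largest
round(prop.table(table(train$target)), 3)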

Random Forest – randomForest package

The random forest model should be an improvement over the bagging model. Random forests also use bootstrap aggregating to make multiple predictions for each observation. The difference compared to bagging is that at each split, only a random subset of the features is considered, and the strongest feature in that subset is chosen for the split. When I reviewed the basic decision tree models above (tree and rpart), I noted that one of their weaknesses is losing signal at each split, because features that lose out early may never be considered again. Random forests address that deficiency by re-sampling the candidate features at every split, which also decorrelates the individual trees.
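
For context, randomForest's default for classification is to sample floor(sqrt(p)) features at each split, which for these 93 features works out to 9, very close to the value of 8 that tuneRF settles on below:

# Default number of candidate features per split for classification
floor(sqrt(93))  # returns 9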

# Install randomForest package
install.packages('randomForest')
library(randomForest)
# Set a seed so you get the same results every time you run the model below;
# the number itself does not matter
set.seed(16)
# Use the tuneRF function to determine an ideal value for the mtry parameter
mtry <- tuneRF(strain[,1:93], strain[,94], mtryStart=1, ntreeTry=50, stepFactor=2, improve=0.05,
trace=TRUE, plot=TRUE, doBest=FALSE)
# The ideal mtry value was found to be 8
# Create a random forest model using the target field as the response and all 93 features as inputs
ptm4 <- proc.time()
fit4 <- randomForest(as.factor(target) ~ ., data=strain, importance=TRUE, ntree=100, mtry=8)
fit4.time <- proc.time() - ptm4
# Create a dotchart of variable/feature importance as measured by a Random Forest
varImpPlot(fit4)
# Test the randomForest model on the holdout test dataset
fit4.pred <- predict(fit4, stest, type="response")
table(fit4.pred,stest$target)
fit4$error <- 1-(sum(fit4.pred==stest$target)/length(stest$target))
fit4$error
Confusion Matrix – random forest

         Class_1 Class_2 Class_3 Class_4 Class_5 Class_6 Class_7 Class_8 Class_9
Class_1       15       1       0       0       0       0       1       0       1
Class_2        6     454     154      46       1       8      24       7       8
Class_3        0      52     118      14       0       0       4       0       0
Class_4        0       1       2      20       0       0       1       0       0
Class_5        0       2       0       0      75       0       1       1       0
Class_6        8       2       0       3       0     430      11      16       6
Class_7        1       1       1       1       0       4      32       0       1
Class_8       14       3       2       0       0      12      12     244      13
Class_9       15       1       0       0       0       7       3       4     136

*Columns: actual classes; rows: predicted classes

The performance of the random forest model turned out to be pretty good. Its error rate is 0.238, the lowest we have seen so far. Looking at the confusion matrix, this model made predictions for every class, which means we can actually use it on a larger test dataset to make useful predictions.
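
Since the competition is scored on predicted class probabilities (multiclass log loss) rather than hard labels, a submission needs predict(..., type="prob"). Here is a minimal sketch reusing the samplesub frame loaded at the top; in practice you would retrain on the full train set first, and the filename is just a placeholder:

# Per-class probabilities on the real test set
fit4.prob <- predict(fit4, test, type="prob")
# Assemble in the same layout as sampleSubmission.csv
submission <- data.frame(id=samplesub$id, fit4.prob)
write.csv(submission, "rf_submission.csv", row.names=FALSE)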

Boosting – gbm package

Boosting is the last tree aggregation method I'll review. The classic formulation builds an initial classification tree and, at each iteration, gives extra weight to the misclassified observations before fitting the next tree. The gbm package implements the closely related gradient boosting approach, in which each new tree is fit to the errors of the current ensemble; either way, the goal is to reduce error on the poorly classified observations. There are many papers and websites that explain this much better than I can, so I won't go into further detail. Let's run the model and look at the results.

# Install gbm package
install.packages('gbm')
library(gbm)
# Set a seed so you get the same results every time you run the model below;
# the number itself does not matter
set.seed(17)
ptm5 <- proc.time()
fit5 <- gbm(target ~ ., data=strain, distribution="multinomial", n.trees=2000, cv.folds=2)
fit5.time <- proc.time() - ptm5
trees <- gbm.perf(fit5)
# Test the gbm model on the holdout test dataset
fit5.stest <- predict(fit5, stest, n.trees=trees, type="response")
fit5.stest <- as.data.frame(fit5.stest)
names(fit5.stest) <- c("Class_1","Class_2","Class_3","Class_4","Class_5","Class_6","Class_7","Class_8","Class_9")
# For each observation, take the class with the highest predicted probability
fit5.stest.pred <- rep(NA, nrow(stest))
for (i in 1:nrow(stest)) {
  fit5.stest.pred[i] <- colnames(fit5.stest)[which.max(fit5.stest[i,])]
}
fit5.pred <- as.factor(fit5.stest.pred)
table(fit5.pred, stest$target)
fit5$error <- 1-(sum(fit5.stest.pred==stest$target)/length(stest$target))
fit5$error
Confusion Matrix – boosting

         Class_1 Class_2 Class_3 Class_4 Class_5 Class_6 Class_7 Class_8 Class_9
Class_1        0       0       0       0       0       1       1       1       0
Class_2       31     467     224      80       3      36      42      48      46
Class_3        3      38      42       1       0       3       9       2       0
Class_5        0       4       5       0      73       0       1       1       0
Class_6        7       3       1       1       0     394      12      15       9
Class_7        0       1       2       2       0       6      16       0       1
Class_8        7       3       3       0       0      12       8     199      17
Class_9       11       1       0       0       0       9       0       6      92

*Columns: actual classes; rows: predicted classes

The overall error from this method was 0.359, an improvement over everything except the random forest model. One unfortunate thing about this model is that it didn’t predict any observations as Class_4. While Class_4 observations only make up 4% of the entire train dataset, not predicting any observations in that class means that the model can definitely be improved. Maybe by using the entire dataset and doing k-fold cross-validation, we could create a new boosting model that would be better suited for prediction in this competition.
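
A minimal sketch of that idea; the parameter values are illustrative guesses rather than tuned results, and this run would take considerably longer than the ones above:

# Illustrative only: full training data, 5-fold CV, smaller learning rate
fit6 <- gbm(target ~ ., data=train, distribution="multinomial",
            n.trees=2000, shrinkage=0.05, interaction.depth=4, cv.folds=5)
# Choose the iteration count that minimizes cross-validated deviance
best.trees <- gbm.perf(fit6, method="cv")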

Conclusion

I expected that the basic tree models probably wouldn’t perform that well. I had no idea that some of the ensemble methods would also perform so poorly. The clear winner from the data above is the random forest model. Even the boosting model in its current state can’t really be used because it doesn’t make any Class_4 predictions.

There are many factors, such as data size and type, that have a lot of bearing on model performance. It's not always the case that random forests perform this well relative to other ensemble decision tree classifiers; personally, I've had a lot of luck with boosting models in the past. The packages I used are some of the most popular for bagging, random forests, and boosting, but there are several others out there, often variations or combinations of what we used above, that will attain different results. I also didn't spend much time tweaking model parameters, which is another way to improve some of the models above. If you're interested in this competition, there are still three weeks left and I would definitely encourage you to get involved. All of the above code is combined into a single script on my GitHub for reference.
