Things to try after useR! – Part 1: Deep Learning with H2O




Annual R User Conference 2014

The useR! 2014 conference was a mind-blowing experience. Hundreds of R enthusiasts gathered at the beautiful UCLA campus, and I am really glad that I had the chance to attend! The only problem is that, after a few days of non-stop R talks, I was (and still am) completely overwhelmed by all the cool new packages and ideas.

Let me start with H2O – one of the three promising projects that John Chambers highlighted during his keynote (the other two were Rcpp/Rcpp11 and RLLVM/RLLVMCompile).

What’s H2O?

“The Open Source In-Memory, Prediction Engine for Big Data Science” – that’s how 0xdata, the creator of H2O, describes it. Joseph Rickert’s blog post is a very good introduction to H2O, so please read that if you want to find out more. I am going straight into the deep learning part.

Deep Learning in R

Deep learning tools in R are still relatively rare at the moment compared to those for other popular algorithms like Random Forest and Support Vector Machines. A nice article about deep learning can be found here. Before discovering H2O, my deep learning coding experience was mostly in Matlab with the DeepLearnToolbox. Recently, I have started using ‘deepnet’ and ‘darch’ as well as my own code for deep learning in R. I have even started developing a new package called ‘deepr’ to further streamline the procedures. Now that I have discovered the ‘h2o’ package, I may well shift the design focus of ‘deepr’ towards further integration with H2O instead!

But first, let’s play with the ‘h2o’ package and get familiar with it.

The H2O Experiment

The main purpose of this experiment is to get myself familiar with the ‘h2o’ package. There are quite a few machine learning algorithms that come with H2O (such as Random Forest and GBM). But I am only interested in the Deep Learning part and the H2O cluster configuration right now. So the following experiment was set up to investigate:
  1. How to set up and connect to a local H2O cluster from R.
  2. How to train a deep neural network model.
  3. How to use the model for predictions.
  4. Out-of-bag performance of non-regularized and regularized models.
  5. How memory usage varies over time.

Experiment 1: 

For the first experiment, I used the Wisconsin Breast Cancer Database. It is a very small dataset (699 samples of 10 features and 1 label), so I could carry out multiple runs to see the variation in prediction performance. The main purpose is to investigate the impact of model regularization by tuning the ‘Dropout’ parameter in the h2o.deeplearning(…) function (basically covering objectives 1 to 4 above).

Experiment 2: 

The next thing to investigate is memory usage (objective 5). For this purpose, I chose a bigger (but still small by today’s standards) dataset, the MNIST Handwritten Digits Database (LeCun et al.). I would like to find out whether memory usage can be capped at a defined allowance over a long period of model training.

Findings

OK, enough about the background and experiment setup. Instead of writing this blog post like a boring lab report, let’s go through what I have found out so far. (If you want to find out more, all the code is available here, so you can modify it and try it out on your clusters.)

Setting Up and Connecting to an H2O Cluster

Smoooooth! – if I have to describe it in one word. 0xdata made this really easy for R users. Below is the code to start a local cluster with a 1GB or 2GB memory allowance. However, if you want to start the local cluster from the terminal (which is also useful if you want to see the messages during model training), you can run java -Xmx1g -jar h2o.jar (see the original H2O documentation here).
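Something like the following minimal sketch should do it (argument names follow recent releases of the ‘h2o’ package, where the memory cap is set with max_mem_size; older versions used an Xmx argument instead):

```r
# Install and load the h2o package
# install.packages("h2o")
library(h2o)

# Start a local H2O cluster with a 1GB memory allowance
# (swap in "2g" for the 2GB runs)
localH2O <- h2o.init(ip = "localhost", port = 54321,
                     max_mem_size = "1g")

# Print the cluster configuration to confirm it is up
h2o.clusterInfo()
```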

By default, H2O starts a cluster using all available threads (8 in my case). The h2o.init(…) function has no argument for limiting the number of threads yet (well, sometimes you do want to leave one thread idle for other important tasks like Facebook). But it is not really a problem.

Loading Data

In order to train models with the H2O engine, I need to link the datasets to the H2O cluster first. There are many ways to do it. In this case, I linked a data frame (Breast Cancer) and imported CSVs (MNIST) using the following code.
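The snippet below is a minimal sketch of both approaches: as.h2o(…) links a data frame that is already in R, and h2o.importFile(…) reads CSV files straight into the cluster. The mlbench source for the Breast Cancer data and the MNIST file paths are illustrative assumptions, and the local cluster from the previous step is assumed to be running already.

```r
library(h2o)
library(mlbench)   # illustrative source for the Wisconsin Breast Cancer data

data(BreastCancer)

# Link an in-memory R data frame to the running H2O cluster
dat_h2o <- as.h2o(BreastCancer, destination_frame = "breast_cancer")

# Import CSV files (e.g. the MNIST train/test sets) directly into H2O
# -- the file paths below are placeholders
train_mnist <- h2o.importFile(path = "mnist_train.csv")
test_mnist  <- h2o.importFile(path = "mnist_test.csv")
```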


Training a Deep Neural Network Model

The syntax is very similar to other machine learning algorithms in R. The key difference is the x and y inputs, for which you use column numbers as identifiers.
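A hedged sketch of the call is shown below. The argument names follow recent ‘h2o’ releases (earlier versions used data = instead of training_frame =), and the hidden-layer sizes and epochs are illustrative rather than the exact settings used in the experiment.

```r
# Train a dropout-regularized deep neural network on the frame loaded above.
# In this frame column 1 is the sample Id, columns 2-10 are the features
# and column 11 holds the class label.
model_reg <- h2o.deeplearning(
  x = 2:10,                            # predictor columns
  y = 11,                              # response column
  training_frame = dat_h2o,
  activation = "TanhWithDropout",      # dropout-enabled activation
  input_dropout_ratio = 0.2,           # dropout on the input layer
  hidden_dropout_ratios = c(0.5, 0.5), # dropout on each hidden layer
  hidden = c(50, 50),                  # two hidden layers of 50 neurons
  epochs = 100
)

# The non-regularized counterpart simply uses activation = "Tanh"
# and drops the two dropout arguments.
```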


Using the Model for Prediction

Again, the code should look very familiar to R users.
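For example (a sketch that follows on from the model above; a proper train/test split is omitted here for brevity):

```r
# Score a dataset with the trained model
yhat <- h2o.predict(model_reg, newdata = dat_h2o)

# Bring the predictions back into R as a regular data frame
yhat_df <- as.data.frame(yhat)
head(yhat_df)   # predicted class plus one probability column per class

# Quick accuracy check against the true labels
# (here against the training frame itself, purely for illustration)
mean(as.character(yhat_df$predict) ==
     as.character(as.data.frame(dat_h2o)$Class))
```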


The h2o.predict(…) function will return the predicted label with the probabilities of all possible outcomes (or numeric outputs for regression problems) – very useful if you want to train more models and build an ensemble.

Out-of-Bag Performance (Breast Cancer Dataset)

No surprise here. As I expected, the non-regularized model overfitted the training set and performed poorly on the test set. Also as expected, the regularized models gave consistent out-of-bag performance. Of course, more tests on different datasets are needed, but this is definitely a good start for using deep learning techniques in R!

Memory Usage (MNIST Dataset)

This is awesome and really encouraging! In near-idle mode, my laptop uses about 1GB of memory (Ubuntu 14.04). During the MNIST model training, H2O successfully kept the memory usage below the capped 2GB allowance over time, with all 8 threads working like a steam train! OK, this is based on just one simple test, but I already feel comfortable and confident enough to move on and use H2O for much bigger datasets.

Conclusions

OK, let’s start with the only negative point. The machine learning algorithms are limited to the ones that come with H2O, so I cannot leverage the power of other algorithms available in R yet (correct me if I am wrong. I will be very happy to be proven wrong this time. Please leave a comment on this blog so everyone can see it). Therefore, in terms of model choice, it is not as handy as caret and subsemble.

Having said that, the included algorithms (Deep Neural Networks, Random Forest, GBM, K-Means, PCA etc.) are solid for most common data mining tasks. Discovering and experimenting with the deep learning functions in H2O really made me happy. With the superb memory management and the full integration with multi-node big data platforms, I am sure this H2O engine will become more and more popular among data scientists. I am already thinking about the Parallella project, but I will leave it until I finish my thesis.

I can now understand why John Chambers recommended H2O. It has already become one of my essential R tools for data mining. The deep learning algorithm in H2O is very interesting, and I will continue to explore and experiment with the rest of the parameters, such as ‘L1’ and ‘L2’ regularization and the ‘Maxout’ activation.

Code

As usual, the code is available at my GitHub repo for this blog.

Personal Highlight of useR! 2014

Just a bit more on useR! During the conference week, I met so many cool R people for the very first time. You can see some of the photos by searching #user2014 and my twitter handle together. Other blog posts about the conference can be found here, here, here, here, here and here. For me, the highlight has to be this text analysis by Ajay:

… which means I successfully made Matlab trending with R!!!

During the conference banquet, Jeremy Achin (from DataRobot) suggested that I might as well change my profile photo to a Python logo just to make it even more confusing! It was also very nice to speak to Matt Dowle in person and to learn about his amazing data.table journey from S to R. I have started updating some of my old code to use data.table for the heavy data wrangling tasks.

By the way, Jeremy and the DataRobot team (a dream team of top Kaggle data scientists, including Xavier who gave a talk about “10 packages to Win Kaggle Competitions”) showed me an amazing demo of their product. Do ask them for a beta account and see for yourself!!!

There are more cool things that I am trying at the moment, and I will try to blog about them in the near future. If I have to name a few right now … they would be:

(Pheeew! So here is my first blog post related to machine learning – the very purpose of starting this blog. Not bad that it finally happened after a whole year!)
