Real Plug-and-Play Supervised Learning AutoML using R and lares

The lares package offers multiple families of functions that help analysts and data scientists achieve robust, quality analyses without much coding. One of the most complex yet valuable functions is h2o_automl, which semi-automatically runs the whole pipeline of a machine learning model given a dataset and a few customizable parameters. AutoML enables you to train high-quality models tailored to your needs and accelerates the research and development process.

HELP: Before getting to the code, I recommend checking h2o_automl's full documentation here or within your R session by running ?lares::h2o_automl. There you'll find a brief description of every parameter you can pass to the function to get exactly what you need and to control how it behaves.

Pipeline

In short, these are some of the things that happen on the backend:

[Figure: Mapping `h2o_automl`'s pipeline]

1. Input a dataframe df and choose the independent variable (y) you'd like to predict. You may set/change the seed argument to guarantee reproducibility of your results.

2. The function decides whether it's a classification (categorical) or regression (continuous) model by looking at the independent variable's (y) class and number of unique values, which can be controlled with the thresh parameter.

3. The dataframe will be split in two: train and test datasets. The proportion of this split can be controlled with the split argument. This step can be replicated with the msplit() function.

4. You could also center and scale your numerical values before you continue, use the no_outliers argument to exclude outliers, and/or impute missing values with MICE. If it's a classification model, the function can balance (under-sample) your training data; you can control this behavior with the balance argument. Up to this point, the whole process can be replicated with the model_preprocess() function.

5. Runs h2o::h2o.automl(...) to train multiple models and generate a leaderboard with the top (max_models or max_time) models trained, sorted by performance. You can also customize additional arguments such as nfolds for k-fold cross-validation, exclude_algos and include_algos to exclude or include specific algorithms, and any other argument you wish to pass to the underlying function.

6. The best model given the default performance metric (which can be changed with the stopping_metric parameter), evaluated with cross-validation (customize it with nfolds), will be selected to continue. You can also use the function h2o_selectmodel() to select another model and recalculate/plot everything again using this alternate model.

7. Performance metrics and plots will be calculated and rendered given the test predictions and test actual values (which were NOT passed to the models as inputs to be trained with). That way, your model's performance metrics shouldn't be biased. You can replicate these calculations with the model_metrics() function.

8. Returns a list with all the inputs, leaderboard results, the best selected model, performance metrics, and plots. You can either review the results in the console or export them using the export_results() function. Several of these steps can also be run on their own, as shown in the sketch below.
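As a reference, here's a minimal sketch of how the standalone helpers mentioned above replicate steps 3 and 4 on their own (argument names follow the package documentation; adjust them to your data):

library(lares)
data(dft) # Titanic dataset shipped with lares

# Step 3 on its own: 70/30 train-test split with a fixed seed
splits <- msplit(dft, size = 0.7, seed = 0)
train <- splits$train
test <- splits$test

# Steps 1-4 in one call: detect model type, split, and pre-process
prep <- model_preprocess(dft, y = "Survived", split = 0.7, seed = 0)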

Load the library

Now, let's (install and) load the library, the data, and dig in:

# install.packages("lares")
library(lares)

# The data we'll use is the Titanic dataset
data(dft)
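# Drop identifier-like columns that add noise rather than predictive signal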
df <- subset(dft, select = -c(Ticket, PassengerId, Cabin))

NOTE: I'll set different parameters in each example to showcase some of the arguments you can pass to your models. Be sure to also check all the prints, warnings, and messages shown throughout the process, as they may contain relevant information regarding your inputs and the backend operations.

Modeling examples

Let's have a look at three specific examples: classification models (binary and multi-categorical) and a regression model. We'll also see how to export our models and put them to work in any environment.

Classification: Binary

Let's begin with a binary (TRUE/FALSE) model to predict if each passenger Survived:

r <- h2o_automl(df, y = Survived, max_models = 1, impute = FALSE, target = "TRUE")
#> 2021-06-25 09:49:03 | Started process...
#> - INDEPENDENT VARIABLE: Survived
#> - MODEL TYPE: Classification
#> # A tibble: 2 x 5
#>   tag       n     p order  pcum
#>   <lgl> <int> <dbl> <int> <dbl>
#> 1 FALSE   549  61.6     1  61.6
#> 2 TRUE    342  38.4     2 100
#> - MISSINGS: The following variables contain missing observations: Age (19.87%). Consider using the impute parameter.
#> - CATEGORICALS: There are 3 non-numerical features. Consider using ohse() or equivalent prior to encode categorical variables.
#> >>> Splitting data: train = 0.7 & test = 0.3
#> train_size  test_size 
#>        623        268
#> - REPEATED: There were 65 repeated rows which are being suppressed from the train dataset
#> - ALGORITHMS: excluded 'StackedEnsemble', 'DeepLearning'
#> - CACHE: Previous models are not being erased. You may use 'start_clean' [clear] or 'project_name' [join]
#> - UI: You may check results using H2O Flow's interactive platform: http://localhost:54321/flow/index.html
#> >>> Iterating until 1 models or 600 seconds...
#> 
  |===========================================================================================| 100%
#> - EUREKA: Succesfully generated 1 models
#>                           model_id       auc   logloss     aucpr mean_per_class_error      rmse
#> 1 XGBoost_1_AutoML_20210625_094904 0.8567069 0.4392284 0.8310891            0.2060487 0.3718377
#>         mse
#> 1 0.1382633
#> SELECTED MODEL: XGBoost_1_AutoML_20210625_094904
#> - NOTE: The following variables were the least important: Embarked.S, Pclass.2, Embarked.C
#> >>> Running predictions for Survived...
#> Target value: TRUE
#> >>> Generating plots...
#> Model (1/1): XGBoost_1_AutoML_20210625_094904
#> Independent Variable: Survived
#> Type: Classification (2 classes)
#> Algorithm: XGBOOST
#> Split: 70% training data (of 891 observations)
#> Seed: 0
#> 
#> Test metrics: 
#>    AUC = 0.87654
#>    ACC = 0.17164
#>    PRC = 0.18421
#>    TPR = 0.34314
#>    TNR = 0.066265
#> 
#> Most important variables:
#>    Sex.female (29.2%)
#>    Fare (26.0%)
#>    Age (20.5%)
#>    Pclass.3 (8.3%)
#>    Sex.male (4.1%)
#> Process duration: 7.86s

Let's take a look at the plots, all gathered into a single dashboard:

plot(r)

[Plot: binary classification model results dashboard]

We also get several calculations of our model's performance that may come in handy, such as a confusion matrix, gain and lift by percentile, area under the curve (AUC), accuracy (ACC), recall or true positive rate (TPR), cross-validation metrics, the exact thresholds that maximize each metric, and others:

r$metrics
#> $dictionary
#> [1] "AUC: Area Under the Curve"                                                             
#> [2] "ACC: Accuracy"                                                                         
#> [3] "PRC: Precision = Positive Predictive Value"                                            
#> [4] "TPR: Sensitivity = Recall = Hit rate = True Positive Rate"                             
#> [5] "TNR: Specificity = Selectivity = True Negative Rate"                                   
#> [6] "Logloss (Error): Logarithmic loss [Neutral classification: 0.69315]"                   
#> [7] "Gain: When best n deciles selected, what % of the real target observations are picked?"
#> [8] "Lift: When best n deciles selected, how much better than random is?"                   
#> 
#> $confusion_matrix
#>        Pred
#> Real    FALSE TRUE
#>   FALSE    11  155
#>   TRUE     67   35
#> 
#> $gain_lift
#> # A tibble: 10 x 10
#>    percentile value random target total  gain optimal   lift response score
#>    <fct>      <chr>  <dbl>  <int> <int> <dbl>   <dbl>  <dbl>    <dbl> <dbl>
#>  1 1          TRUE    10.1     25    27  24.5    26.5 143.     24.5   95.6 
#>  2 2          TRUE    20.5     25    28  49.0    53.9 139.     24.5   84.6 
#>  3 3          TRUE    30.2     19    26  67.6    79.4 124.     18.6   47.8 
#>  4 4          TRUE    40.3     12    27  79.4   100    97.1    11.8   29.5 
#>  5 5          TRUE    50        7    26  86.3   100    72.5     6.86  20.7 
#>  6 6          TRUE    60.1      4    27  90.2   100    50.1     3.92  14.3 
#>  7 7          TRUE    70.1      4    27  94.1   100    34.2     3.92   9.59
#>  8 8          TRUE    79.9      2    26  96.1   100    20.3     1.96   7.58
#>  9 9          TRUE    89.9      1    27  97.1   100     7.93    0.980  5.89
#> 10 10         TRUE   100        3    27 100     100     0       2.94   3.20
#> 
#> $metrics
#>       AUC     ACC     PRC     TPR      TNR
#> 1 0.87654 0.17164 0.18421 0.34314 0.066265
#> 
#> $cv_metrics
#> # A tibble: 20 x 8
#>    metric                    mean     sd cv_1_valid cv_2_valid cv_3_valid cv_4_valid cv_5_valid
#>    <chr>                    <dbl>  <dbl>      <dbl>      <dbl>      <dbl>      <dbl>      <dbl>
#>  1 accuracy                 0.831 0.0539      0.84       0.816      0.856      0.895      0.75 
#>  2 auc                      0.856 0.0561      0.906      0.787      0.894      0.889      0.805
#>  3 err                      0.169 0.0539      0.16       0.184      0.144      0.105      0.25 
#>  4 err_count               21     6.67       20         23         18         13         31    
#>  5 f0point5                 0.788 0.0958      0.788      0.745      0.846      0.905      0.654
#>  6 f1                       0.777 0.0764      0.821      0.676      0.827      0.847      0.716
#>  7 f2                       0.774 0.0911      0.858      0.619      0.808      0.796      0.789
#>  8 lift_top_group           2.62  0.287       2.40       3.05       2.31       2.64       2.70 
#>  9 logloss                  0.439 0.0670      0.376      0.491      0.406      0.395      0.529
#> 10 max_per_class_error      0.270 0.0924      0.192      0.415      0.204      0.234      0.308
#> 11 mcc                      0.651 0.105       0.684      0.565      0.705      0.779      0.522
#> 12 mean_per_class_accuracy  0.818 0.0512      0.846      0.757      0.849      0.870      0.770
#> 13 mean_per_class_error     0.182 0.0512      0.154      0.243      0.151      0.130      0.230
#> 14 mse                      0.138 0.0264      0.114      0.156      0.126      0.120      0.176
#> 15 pr_auc                   0.827 0.0837      0.895      0.744      0.886      0.884      0.727
#> 16 precision                0.799 0.122       0.767      0.8        0.86       0.947      0.619
#> 17 r2                       0.410 0.130       0.531      0.293      0.486      0.491      0.247
#> 18 recall                   0.776 0.116       0.885      0.585      0.796      0.766      0.848
#> 19 rmse                     0.371 0.0349      0.338      0.395      0.355      0.346      0.419
#> 20 specificity              0.861 0.112       0.808      0.929      0.901      0.974      0.692
#> 
#> $max_metrics
#>                         metric  threshold       value idx
#> 1                       max f1 0.28890845   0.7490637 224
#> 2                       max f2 0.21783681   0.8062016 252
#> 3                 max f0point5 0.64448303   0.8105023 111
#> 4                 max accuracy 0.61486661   0.8170144 117
#> 5                max precision 0.99179381   1.0000000   0
#> 6                   max recall 0.02130460   1.0000000 399
#> 7              max specificity 0.99179381   1.0000000   0
#> 8             max absolute_mcc 0.61486661   0.6115356 117
#> 9   max min_per_class_accuracy 0.33269805   0.7859008 207
#> 10 max mean_per_class_accuracy 0.31330019   0.7939785 214
#> 11                     max tns 0.99179381 383.0000000   0
#> 12                     max fns 0.99179381 239.0000000   0
#> 13                     max fps 0.03076078 383.0000000 398
#> 14                     max tps 0.02130460 240.0000000 399
#> 15                     max tnr 0.99179381   1.0000000   0
#> 16                     max fnr 0.99179381   0.9958333   0
#> 17                     max fpr 0.03076078   1.0000000 398
#> 18                     max tpr 0.02130460   1.0000000 399

The same goes for the plots generated for these metrics: the gains and response plots on the test dataset, the confusion matrix, and the ROC curve.

r$plots$metrics
#> $gains
#> Warning: Removed 1 rows containing missing values (geom_label).

[Plot: cumulative gains]

#> $response

[Plot: response by percentile]

#> $conf_matrix

[Plot: confusion matrix]

#> $ROC

[Plot: ROC curve]

For all models, regardless of their type (classification or regression), you can check the importance of each variable as well:

head(r$importance)
#>     variable relative_importance scaled_importance importance
#> 1 Sex.female           205.62099         1.0000000 0.29225814
#> 2       Fare           182.91312         0.8895644 0.25998245
#> 3        Age           144.42017         0.7023610 0.20527073
#> 4   Pclass.3            58.04853         0.2823084 0.08250692
#> 5   Sex.male            29.17109         0.1418683 0.04146216
#> 6      Parch            28.74764         0.1398089 0.04086028

r$plots$importance

[Plot: variable importance]

Classification: Multi-Categorical

Now, let's run a multi-categorical (more than two labels) model to predict the Pclass of each passenger:

r <- h2o_automl(df, Pclass, ignore = c("Fare", "Cabin"), max_time = 30, plots = FALSE)
#> 2021-06-25 09:49:36 | Started process...
#> - INDEPENDENT VARIABLE: Pclass
#> - MODEL TYPE: Classification
#> # A tibble: 3 x 5
#>   tag       n     p order  pcum
#>   <fct> <int> <dbl> <int> <dbl>
#> 1 n_3     491  55.1     1  55.1
#> 2 n_1     216  24.2     2  79.4
#> 3 n_2     184  20.6     3 100
#> - MISSINGS: The following variables contain missing observations: Age (19.87%). Consider using the impute parameter.
#> - CATEGORICALS: There are 3 non-numerical features. Consider using ohse() or equivalent prior to encode categorical variables.
#> >>> Splitting data: train = 0.7 & test = 0.3
#> train_size  test_size 
#>        623        268
#> - REPEATED: There were 65 repeated rows which are being suppressed from the train dataset
#> - ALGORITHMS: excluded 'StackedEnsemble', 'DeepLearning'
#> - CACHE: Previous models are not being erased. You may use 'start_clean' [clear] or 'project_name' [join]
#> - UI: You may check results using H2O Flow's interactive platform: http://localhost:54321/flow/index.html
#> >>> Iterating until 3 models or 30 seconds...
#> 
  |===========================================================================================| 100%
#> - EUREKA: Succesfully generated 3 models
#>                           model_id mean_per_class_error   logloss      rmse       mse auc aucpr
#> 1 XGBoost_2_AutoML_20210625_094936            0.4764622 0.8245831 0.5384408 0.2899185 NaN   NaN
#> 2 XGBoost_1_AutoML_20210625_094936            0.4861030 0.8451478 0.5422224 0.2940051 NaN   NaN
#> 3 XGBoost_3_AutoML_20210625_094936            0.4904451 0.8522329 0.5440560 0.2959969 NaN   NaN
#> SELECTED MODEL: XGBoost_2_AutoML_20210625_094936
#> - NOTE: The following variables were the least important: Sex.male, Embarked.Q
#> >>> Running predictions for Pclass...
#> Model (1/3): XGBoost_2_AutoML_20210625_094936
#> Independent Variable: Pclass
#> Type: Classification (3 classes)
#> Algorithm: XGBOOST
#> Split: 70% training data (of 891 observations)
#> Seed: 0
#> 
#> Test metrics: 
#>    AUC = 0.78078
#>    ACC = 0.68284
#> 
#> Most important variables:
#>    Age (52.0%)
#>    Survived.FALSE (11.9%)
#>    Embarked.C (7.2%)
#>    Survived.TRUE (5.4%)
#>    Sex.female (5.4%)
#> Process duration: 9.79s

Again, let's take a look at the plots, all gathered into a single dashboard:

plot(r)

[Plot: multi-categorical classification model results dashboard]
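Since this run trained three models, you don't have to settle for the leaderboard's top pick. As mentioned in step 6 of the pipeline, h2o_selectmodel() lets you switch; a quick sketch, assuming the leaderboard shown above:

# Select the second-ranked model from the leaderboard and recalculate
# all performance metrics and plots for that alternate model
r2 <- h2o_selectmodel(r, which_model = 2)
plot(r2)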

Regression

Finally, let's run a regression model with continuous values to predict the Fare paid by each passenger:

r <- h2o_automl(df, y = "Fare", ignore = "Pclass", exclude_algos = NULL, quiet = TRUE)
print(r)
#> Model (1/4): StackedEnsemble_AllModels_AutoML_20210625_094950
#> Independent Variable: Fare
#> Type: Regression
#> Algorithm: STACKEDENSEMBLE
#> Split: 70% training data (of 871 observations)
#> Seed: 0
#> 
#> Test metrics: 
#>    rmse = 20.309
#>    mae = 14.244
#>    mape = 0.07304
#>    mse = 412.45
#>    rsq = 0.3169
#>    rsqa = 0.3143

Once more, let's take a look at the plots, all gathered into a single dashboard:

plot(r)

[Plot: regression model results dashboard]

Export models and results

Once you have your model trained and picked, you can export the model and its results so you can put it to work in a production environment (which doesn't have to be R). There is a function that does all of that for you: export_results(). Simply pass your h2o_automl list object into this function and that's it! You can select which formats will be exported using the which argument. Currently we support: txt, csv, rds, binary, mojo [best format for production], and plots. There are also two quick options (dev and production) to export some or all of the files. Lastly, you can set a custom subdir to gather everything into a new sub-directory; I'd recommend using the model's name or any other convention that helps you know which one's which.
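For instance, a minimal sketch, assuming the result list keeps the selected model's ID in its model_name element:

# Export everything needed for production (including the MOJO files)
# into a sub-directory named after the selected model
# (assumes the result list stores the model's ID as r$model_name)
export_results(r, which = "production", subdir = r$model_name)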

Import and use your models

If you'd like to re-use your exported models to predict new datasets, you have several options:

  • h2o_predict_MOJO() [recommended]: This function lets you predict using H2O's .zip file containing the MOJO files. These are also the files used when putting the model into production in any other environment. MOJO also lets you change H2O versions without issues (see the sketch after this list).
  • h2o_predict_binary(): This function lets you predict using the H2O binary file. The H2O version/build must match between training and prediction for it to work.
  • h2o_predict_model(): This function lets you run predictions from an H2O model object, just as you'd use the base predict function. It will probably only work within your current session, since you must have the actual trained object at hand.
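As referenced above, here's a hedged sketch of scoring new data with an exported MOJO (the path below is hypothetical; point it to the directory created by export_results()):

library(lares)

# Hypothetical path: the sub-directory created by export_results() above
path <- "XGBoost_1_AutoML_20210625_094904"

# Score new rows containing the same feature columns used in training
# (argument names per the package documentation: df and model_path)
preds <- h2o_predict_MOJO(df = head(df, 10), model_path = path)
head(preds)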

