Hyperparameter tuning with mlr is rich in options, as there are multiple tuning methods:
Tuning can be done in one line relying on the defaults.
The default will automatically minimize the misclassification rate.
We can find out what hyperopt did by inspecting the res object.
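A minimal sketch of such a one-line call, assuming mlrHyperopt is installed and using mlr's built-in `iris.task` (the choice of SVM learner here is purely illustrative):

```r
library(mlr)
library(mlrHyperopt)

# one-line tuning: mlrHyperopt picks a suitable search space and tuning method
res = hyperopt(iris.task, learner = "classif.svm")

# res is an mlr TuneResult object
res$x  # the best hyperparameter settings found
res$y  # the corresponding misclassification rate (mmce)
```

Printing `res` also gives a compact summary of the optimal parameters and performance.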
Depending on the parameter space mlrHyperopt will automatically decide for a suitable tuning method:
As the search space defined in the ParamSet is only numeric, sequential Bayesian optimization was chosen.
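If you want to control the search space yourself, you can define a purely numeric ParamSet and hand it to `hyperopt()` via a ParConfig. The following is a hedged sketch; the SVM parameter names and bounds are illustrative choices, not package defaults:

```r
library(mlr)
library(mlrHyperopt)

# a purely numeric search space for an SVM: cost and gamma on a log2 scale
ps = makeParamSet(
  makeNumericParam("cost",  lower = -10, upper = 10, trafo = function(x) 2^x),
  makeNumericParam("gamma", lower = -10, upper = 10, trafo = function(x) 2^x)
)

# bundle the search space with the learner and tune
pc = makeParConfig(par.set = ps, learner = "classif.svm")
res = hyperopt(iris.task, par.config = pc)
```

Because every parameter in this ParamSet is numeric, mlrHyperopt should again fall back to sequential Bayesian optimization.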
We can look into the evaluated parameter configurations and we can visualize the optimization run.
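A sketch of how to do both, assuming `res` is the tuning result from above (the optimization path is stored on the result, and `plotOptPath()` comes from ParamHelpers, which mlr loads):

```r
# all evaluated parameter configurations live in the optimization path
op = as.data.frame(res$opt.path)
head(op)

# visualize the optimization run
library(ParamHelpers)
plotOptPath(res$opt.path)
```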
The upper-left plot shows the distribution of the tried settings in the search space; contour lines indicate where regions of good configurations are located.
The lower-right plot shows the value of the objective (the misclassification rate) and how it decreases over time.
It also nicely shows that poor settings can lead to bad results.
Using the mlrHyperopt API with mlr
Often you don't want to rely on the default procedures of mlrHyperopt, but only use it to access the default parameter search spaces and incorporate them into your usual mlr workflow.
Here is one example of how you can use the default search spaces for an easy benchmark:
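A sketch of such a benchmark on mlr's built-in `pid.task` (Pima Indians Diabetes), comparing tuned and untuned learners; the random-search budget of 20 iterations is an arbitrary choice for illustration:

```r
library(mlr)
library(mlrHyperopt)

task = pid.task
lrns = makeLearners(c("classif.xgboost", "classif.nnet"))

# wrap each learner in a tuner that uses mlrHyperopt's default search space
lrns.tuned = lapply(lrns, function(lrn) {
  pc = getDefaultParConfig(learner = lrn)
  ps = getParConfigParSet(pc)
  # some parameter bounds are expressions that depend on the task
  # (e.g. the number of features), so evaluate them first
  ps = evaluateParamExpressions(ps, dict = getTaskDictionary(task = task))
  lrn = setHyperPars(lrn, par.vals = getParConfigParVals(pc))
  ctrl = makeTuneControlRandom(maxit = 20)
  makeTuneWrapper(learner = lrn, resampling = cv3, par.set = ps, control = ctrl)
})

# benchmark tuned vs. untuned learners and compare the results visually
bmr = benchmark(learners = c(lrns, lrns.tuned), tasks = task, resamplings = cv10)
plotBMRBoxplots(bmr)
```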
As we can see, we were able to improve the performance of xgboost and nnet without any additional knowledge of which parameters we should tune.
The improvement is especially noticeable for nnet.
Some recommended additional reads
The vignette on getting started, which also explains how to contribute by uploading alternative or additional ParConfigs.