# Visualization of predictions


In this post I want to briefly introduce you to the great visualization possibilities of `mlr`. Within the last months a lot of work has been put into this area. This post is not a tutorial but rather a demonstration of how little code you have to write with `mlr` to get some nice plots showing the prediction behavior of different learners.

First we define a list containing all the learners we want to visualize. Notice that most `mlr` methods can work with just a string (e.g. `"classif.svm"`) to identify the learner you mean. Nevertheless, you can define a learner more precisely with `makeLearner()` and set parameters such as the `kernel` in this example.
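A sketch of such a list could look like this (the exact learner set is an assumption based on the sections below, and `iris.task` will serve as the example task throughout):

```r
library(mlr)

# List of learners to visualize. Most can be given as plain strings;
# for the SVM we use makeLearner() to set the kernel explicitly.
learners = list(
  makeLearner("classif.svm", kernel = "linear"),
  makeLearner("classif.svm", kernel = "polynomial"),
  makeLearner("classif.svm", kernel = "radial"),
  "classif.qda",
  "classif.randomForest",
  "classif.knn"
)
```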

## Support Vector Machines

Now let's have a look at the different results, starting with the SVM with a *linear kernel*. We can see that the decision boundary is indeed linear. Furthermore, the misclassified items are highlighted, and a 10-fold cross-validation is run to obtain the mean misclassification error.
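A call along these lines produces such a plot; `plotLearnerPrediction()` is `mlr`'s helper for this kind of figure, and the task is assumed here for illustration:

```r
# Plot the decision regions of the linear-kernel SVM on two features,
# mark misclassified training points, and report the mean
# misclassification error from 10-fold CV (cv = 10 is the default).
plotLearnerPrediction(makeLearner("classif.svm", kernel = "linear"),
                      task = iris.task)
```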

For the *polynomial* and the *radial kernel* the decision boundaries already look a bit more sophisticated:
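For example (again assuming `iris.task`):

```r
# The same visualization for the polynomial and radial kernels.
plotLearnerPrediction(makeLearner("classif.svm", kernel = "polynomial"),
                      task = iris.task)
plotLearnerPrediction(makeLearner("classif.svm", kernel = "radial"),
                      task = iris.task)
```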

Note that the intensity of the colors also indicates the certainty of the prediction, and that this example is probably a rare case where the linear kernel performs best, although this is likely only because we didn't optimize the parameters for the radial kernel.

## Quadratic Discriminant Analysis

This well-known classifier from introductory statistics courses delivers a performance similar to that of the SVMs.
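Here the learner string alone suffices, since we don't set any parameters (task assumed as before):

```r
# QDA needs no extra hyperparameters, so the string is enough.
plotLearnerPrediction("classif.qda", task = iris.task)
```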

## Random Forest

A completely different picture is generated by the random forest.

Here the whole data set is used to generate the model, so the plot looks like a perfect fit, but obviously you wouldn't use the training data to evaluate your model. The results of the 10-fold cross-validation indicate that the random forest is actually not better than the others.
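A sketch of the corresponding call (task assumed as before):

```r
# The plot shows the fit on the full training data, while the title
# still reports the 10-fold CV error.
plotLearnerPrediction("classif.randomForest", task = iris.task)
```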

## Nearest Neighbour

In the default setting, knn looks only at the `k = 1` nearest neighbor, and as a result the classifier does not return probabilities but only the class labels.
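A sketch (task assumed as before):

```r
# With the default k = 1, classif.knn returns only class labels, so the
# plot shows hard decision regions without probability shading.
plotLearnerPrediction("classif.knn", task = iris.task)
```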
