smooth v2.0.0. What’s new

[This article was first published on R – Modern Forecasting, and kindly contributed to R-bloggers.]

Good news, everyone!

The smooth package has recently received a major update: the version on CRAN is now v2.0.0. I think this is a big deal, so I decided to pause for a moment and explain what has happened and why the new version is interesting.

First of all, there is a new function, ves(), that implements the vector exponential smoothing model. This model allows estimating several series together and capturing possible interactions between them. It can be especially useful if you need to forecast several similar products and can assume that the smoothing parameters or initial seasonal indices are similar across all the series. Let’s say you want to produce forecasts for several SKUs of cofvefe. You may unite their sales data in a vector and use one and the same smoothing parameter across the series by setting persistence="group". However, if you think that sales of one type of cofvefe may influence sales of the other, you may take this into account and set persistence="dependent". You can also switch between grouped and individual values of phi (the damping parameter). Just keep in mind that vector models can be greedy in the number of parameters, so in order to use them efficiently you may need large samples.
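The grouped versus dependent setups above can be sketched as follows (a minimal illustration with simulated data; the exact argument values for persistence are my reading of the ves() interface, so do check the documentation):

```r
# Sketch: fitting a vector ETS model to several related series with ves()
# from the 'smooth' package. The sales data below is simulated purely
# for illustration.
library(smooth)

set.seed(42)
# Three related monthly demand series, e.g. three SKUs of the same product
sales <- ts(matrix(rnorm(300, mean = 100, sd = 10), ncol = 3),
            frequency = 12)

# One and the same smoothing parameter shared across all three series
fitGroup <- ves(sales, model = "ANN", persistence = "group")

# Allow the series to influence each other's level updates
fitDependent <- ves(sales, model = "ANN", persistence = "dependent")
```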

The ves() function currently allows constructing either purely additive or purely multiplicative models of any kind, but I don’t intend to create mixed models. First of all, they are cumbersome; secondly, they are hard to implement; thirdly, they contradict common sense; and, finally, I think that they are evil. I also decided not to kill myself over rewriting the conventional multiplicative model, and simplified things a lot by just working in logarithms. So, for example, VES(M,N,N) is in fact VES(A,N,N) applied to the data in logarithms. This simplification should not change things substantially, because we already assume that the errors are distributed log-normally in the multiplicative models implemented in es(). The results of multiplicative es() and ves() applied to the same time series should therefore be comparable.
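The logarithm trick can be seen directly: fitting the multiplicative model to positive data should give results comparable to fitting the additive model to the logged data (a rough sketch, not a formal equivalence test):

```r
# Sketch: VES(M,N,N) on y is handled internally as VES(A,N,N) on log(y).
# Simulated positive data for illustration only.
library(smooth)

set.seed(1)
y <- ts(matrix(exp(rnorm(200, mean = 4, sd = 0.1)), ncol = 2),
        frequency = 12)

fitMult <- ves(y, model = "MNN")       # multiplicative level model
fitAdd  <- ves(log(y), model = "ANN")  # additive model in logarithms

# The fitted values of fitAdd, once exponentiated, should be close to
# the fitted values of fitMult.
```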

The function currently lacks several important elements (e.g. prediction intervals and exogenous variables), but it will be improved over time, and closer to version 2.5.0 it will be a kick-ass function. You can have a look at several examples of the usage of ves() in the vignettes. I will also write a separate post about this function at some point, so stay tuned!

Second, I have slightly optimised the C++ code, which seems to have increased the speed of the functions. I observed a speed-up of around 25% on average, but this may depend on the PC, the data and the complexity of the applied model.

Third, there is a new parameter, imodel, in the forecasting functions of smooth. It determines the type of model used for the occurrence part of intermittent models. Currently it works only with Croston’s model, because TSB needs additional code modifications and is not as easy. This allows using, for example, a multiplicative trend model for the probability update instead of just a level model: imodel="MMN" would construct a model with a time-varying probability with trend.
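A minimal sketch of this (the imodel argument and its "MMN" value are taken from the description above; the intermittent series here is simulated):

```r
# Sketch: Croston-type intermittent model whose occurrence probability
# follows an ETS(M,M,N), i.e. a multiplicative-trend model for the
# probability update rather than a plain level model.
library(smooth)

set.seed(3)
# Intermittent demand: many zeroes with occasional positive sales
y <- ts(rpois(120, lambda = 0.3))

fit <- es(y, model = "MNN", intermittent = "croston", imodel = "MMN")
```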

In addition, a model estimated using the iss() function (which has class “iss”) can be provided via the imodel parameter and reused by es() and the other smooth functions. At the same time, es() and the others now also return the estimated occurrence part of the model as imodel, which can then be used for different purposes. All of that gives more flexibility in model construction and should be useful for research purposes (at minimum, for my research).
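The round trip described above might look roughly like this (a sketch under my assumptions about the iss() argument names; consult the package documentation for the exact interface):

```r
# Sketch: estimate the occurrence part separately with iss(), then
# reuse the resulting "iss" object inside es().
library(smooth)

set.seed(4)
y <- ts(rpois(120, lambda = 0.3))

# Estimate the occurrence (non-zero vs zero demand) part on its own
occurrencePart <- iss(y, model = "MNN", intermittent = "croston")
class(occurrencePart)   # should be "iss"

# Reuse the estimated occurrence model when fitting the full model
fit <- es(y, model = "MNN", intermittent = "croston",
          imodel = occurrencePart)
# fit, in turn, carries its own estimated occurrence part, which can
# be extracted and passed on to other smooth functions.
```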

There are other cool updates (have a look), new features and a bunch of bug fixes in version 2.0.0. From now on I intend to continue improving VES (because I need it for one of my research projects), work on the TSB part of the intermittent model and create a simulation function for the occurrence part of iETS models.

Finally, while you are still here, can you please take part in a very simple survey? It will take 5 seconds of your time, but hopefully will help me make up my mind. I just want to know if the current default value of the initial parameter in smooth functions is good for you. One of the ideas (proposed by Nikos Kourentzes) is to make initial="backcasting" the default, but I’m hesitant. So, please, help me decide. You can vote here:

Thanks and see you next time!

