Down and Dirty Forecasting: Part 1

[This article was first published on OutLie..R, and kindly contributed to R-bloggers.]

I wanted to see what I could do in a hurry using the commands found in Forecasting: Principles and Practice. I chose a simple enough data set: Wisconsin unemployment from 1976 to the present (April 2013). I held out the last 12 months of data to test the accuracy of the models. The next blog post will include a multiple regression analysis. The analysis skips many important steps, particularly for the ARIMA model, but this is a down and dirty exercise.
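The series came from Quandl. As a rough sketch of how such a pull might look with the Quandl R package (the dataset code "FRED/WIUR" below is illustrative, not necessarily the exact series used in this post):

```r
# Sketch only: fetching a monthly state unemployment series from Quandl.
# "FRED/WIUR" is an assumed, illustrative dataset code.
library(Quandl)
wi <- Quandl("FRED/WIUR")        # returns a data frame of dates and values
colnames(wi) <- c('date', 'rate')
```

In the post itself the data were saved to a CSV first and read from disk, as shown below.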

library(forecast)
library(lmtest)
#State unemployment, seasonally adjusted
#Using Quandl data, great little site
wi<-read.csv('', colClasses=c('Date'='Date'))
#some minor clean up
colnames(wi)<-c('date', 'rate')
#base data rows 1-436, test data rows 437-448 (the last 12 months)
wi.b<-wi[1:436, ]
wi.p<-wi[437:448, ]
wi.ts<-ts(wi.b$rate, start=c(1976, 1), frequency=12)
wi.p.ts<-ts(wi.p$rate, start=c(2012, 5), frequency=12)
#Let's test some benchmark models
mean<-meanf(wi.ts, 12)
naive<-rwf(wi.ts, 12)
s.naive<-snaive(wi.ts, 12)
drift<-rwf(wi.ts, 12, drift=T)
#linear fits: trend only, and trend plus season
m1<-tslm(wi.ts~trend)
m2<-tslm(wi.ts~trend+season)
#checking for autocorrelation in the residuals
res1 <- residuals(m1)
plot(res1, ylab="Residuals",xlab="Year")
Acf(res1, main="ACF of residuals")
res2 <- residuals(m2)
plot(res2, ylab="Residuals",xlab="Year")
Acf(res2, main="ACF of residuals")
#Durbin-Watson Test
dwtest(m1, alt="two.sided")
dwtest(m2, alt="two.sided")
#yep, autocorrelation city! No surprise here, given the persistent nature of unemployment
#STL and ETS decomposition
m3<-stl(wi.ts, s.window='periodic')
m4<-ets(wi.ts, model='ZZZ')
#ARIMA with automatic order selection
m5<-auto.arima(wi.ts)
plot(forecast(m5, h=12))
#neural networks
m6<-nnetar(wi.ts)
plot(forecast(m6, h=12))
#Testing for accuracy the first 4 models
a1<-accuracy(mean, wi.p.ts)
a2<-accuracy(naive, wi.p.ts)
a3<-accuracy(s.naive, wi.p.ts)
a4<-accuracy(drift, wi.p.ts)
a.table<-rbind(a1, a2, a3, a4)
#Creating the forecast and accuracy for the next 6 models
f1<-forecast(m1, h=12)
f2<-forecast(m2, h=12)
f3<-forecast(m3, h=12)
f4<-forecast(m4, h=12)
f5<-forecast(m5, h=12)
f6<-forecast(m6, h=12)
a5<-accuracy(f1, wi.p.ts)
a6<-accuracy(f2, wi.p.ts)
a7<-accuracy(f3, wi.p.ts)
a8<-accuracy(f4, wi.p.ts)
a9<-accuracy(f5, wi.p.ts)
a10<-accuracy(f6, wi.p.ts)
#Combining into a table with row names
a.table<-rbind(a.table, a5, a6, a7, a8, a9, a10)
row.names(a.table)<-c('Mean', 'Naive', 'S. Naive', 'Drift', 'Lm~Trend', 
'Lm~Trend+Sea', 'STL', 'ETS', 'ARIMA', 'Neuro')
#make into a data frame so the best model is first, according to MAPE
a.table<-as.data.frame(a.table)
a.table[order(a.table$MAPE), ]

Results so far: the mean-like forecasts (mean, naive, drift) are doing the best, while the fancier models are not performing very well.
