What happens if we forget a trivial assumption?

[This article was first published on Freakonometrics » R-english, and kindly contributed to R-bloggers]

Last week, @dmonniaux published an interesting post entitled l'erreur n'a rien d'original ("the error is nothing original") on his blog. He asked the following question: let $a$, $b$ and $c$ denote three real-valued coefficients; under which assumption on those three coefficients does $ax^2+bx+c$ have a real-valued root?

Everyone answered $b^2-4ac\geq 0$, but no one mentioned that we need a proper quadratic equation first. For instance, if both $a$ and $b$ are null (and $c$ is not), the discriminant condition holds trivially, and yet there are no roots.
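As a quick sanity check, here is a small helper (an illustrative sketch, not from the original post) that solves for real roots while handling the degenerate cases explicitly:

```r
# Real roots of a*x^2 + b*x + c = 0, handling the degenerate cases
real_roots <- function(a, b, c) {
  if (a == 0) {
    if (b == 0) {
      # constant equation: either every x is a root (c == 0) or none
      if (c == 0) return(NA)   # infinitely many solutions
      return(numeric(0))       # no solution
    }
    return(-c / b)             # linear equation: a single root
  }
  delta <- b^2 - 4 * a * c
  if (delta < 0) return(numeric(0))
  (-b + c(-1, 1) * sqrt(delta)) / (2 * a)
}

real_roots(1, -3, 2)   # x^2 - 3x + 2 = 0: roots 1 and 2
real_roots(0, 0, 1)    # 0 = 1: no root, even though b^2 - 4ac = 0 >= 0
```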

It reminds me of all my time series courses, when I define $ARMA(p,q)$ processes, i.e.

$$\Phi(L)\,X_t=\Theta(L)\,\varepsilon_t$$

where $L$ denotes the lag operator, $\Phi(z)=1-\phi_1 z-\cdots-\phi_p z^p$ and $\Theta(z)=1+\theta_1 z+\cdots+\theta_q z^q$.

To have a proper $ARMA(p,q)$ process, $\Phi(\cdot)$ has to be a polynomial of order $p$, and $\Theta(\cdot)$ has to be a polynomial of order $q$. But that is not enough! The roots of $\Phi(\cdot)$ and $\Theta(\cdot)$ have to be distinct! If they have a root in common, then we are not dealing with an $ARMA(p,q)$ process.

It sounds like something trivial, but most of the time, everyone forgets about it. Just like the assumption that $a$ and $b$ should not be null in @dmonniaux's problem.

And most of the time, those theoretical problems are extremely important in practice! I mean, assume that you have an $AR(1)$ time series,

$$X_t=\phi\, X_{t-1}+\varepsilon_t$$
but you don't know it is an $AR(1)$, and you fit an $ARMA(2,1)$,

$$X_t=\phi_1 X_{t-1}+\phi_2 X_{t-2}+\varepsilon_t+\theta\,\varepsilon_{t-1}$$
Most of the time, we do not look at the roots of the polynomials, we just look at their coefficients, i.e. we write the model as

$$(1-\phi_1 L-\phi_2 L^2)\,X_t=(1+\theta L)\,\varepsilon_t$$
The statistical interpretation is that the model is misspecified, and we have a non-identifiable parameter here. Is our inference procedure clever enough to understand that $\theta$ should be null? What kind of coefficients $\phi_1$ and $\phi_2$ do we get? Is the first one close to $\phi$ and the second one close to $0$? Because that is, somehow, the true model…
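The non-identifiability can be checked directly: multiplying the $AR(1)$ equation by any factor $(1+\theta L)$ yields an $ARMA(2,1)$ with $\phi_1=\phi-\theta$ and $\phi_2=\phi\theta$, and the theoretical autocorrelations are identical. A sketch using `ARMAacf` (the value $\theta=0.5$ is an arbitrary choice):

```r
phi   <- 0.7
theta <- 0.5  # arbitrary: any value yields the same process
# (1 + theta*L)(1 - phi*L) = 1 - (phi - theta)*L - phi*theta*L^2
acf_ar1  <- ARMAacf(ar = phi, lag.max = 5)
acf_arma <- ARMAacf(ar = c(phi - theta, phi * theta), ma = theta, lag.max = 5)
all.equal(acf_ar1, acf_arma, check.attributes = FALSE)
# TRUE: the two parameterisations are indistinguishable from the data
```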

Let us run some Monte Carlo simulations to get some hints.

> ns=1000
> fit2=matrix(NA,ns,3)
> for(s in 1:ns){
+ X=arima.sim(n = 240, list(ar = 0.7), sd = 1)
+ fit=try( arima(X,order=c(2,0,1))$coef[1:3] )
+ if(!inherits(fit, "try-error")) fit2[s,]=fit
+ }

If we just focus on the estimations that ran well, we get

> library(ks)
> H=diag(c(.01,.01))
> U=as.data.frame(fit2[,1:2])
> U=U[!is.na(U[,1]),]
> fat=kde(U,H,xmin=c(-2.05,-1.05),xmax=c(2.05,1.05))
> z=fat$estimate
> library(RColorBrewer)
> reds=colorRampPalette(brewer.pal(9,"Reds"))(100)
> image(seq(-2.05,2.05,length=151),
+ seq(-1.05,1.05,length=151),
+ z,col=reds)

The black dot is where we expected to be: $\phi_1$ close to $\phi=0.7$ and $\phi_2$ close to $0$ (the stationarity triangle for the autoregressive part was added to the graph). But the numerical output is far away from what we were expecting.
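For reference, the stationarity triangle is the region where $\phi_1+\phi_2<1$, $\phi_2-\phi_1<1$ and $|\phi_2|<1$, equivalently where all roots of $1-\phi_1 z-\phi_2 z^2$ lie outside the unit circle. A small checker (my own illustrative sketch) via `polyroot`:

```r
# Causal stationarity of the AR(2) part: all roots of
# 1 - phi1*z - phi2*z^2 must lie strictly outside the unit circle
is_stationary_ar2 <- function(phi1, phi2) {
  all(Mod(polyroot(c(1, -phi1, -phi2))) > 1)
}
is_stationary_ar2(0.7, 0)    # TRUE : inside the triangle
is_stationary_ar2(0.6, 0.5)  # FALSE: phi1 + phi2 >= 1
```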

So yes, the theoretical assumption that the roots must be distinct is very important, even if everyone forgets about it! From a numerical point of view, we can get almost anything if we forget that trivial assumption! Actually, I still wonder which kind of "anything" we get… When we look at the distribution of $\theta$, it is clearly not uniform,

> hist(fit2[,3],col="light blue",probability=TRUE)

And actually, a priori, there is no reason to have $\theta\in[-1,1]$. But that is what we observe here,

> range(fit2[!is.na(fit2[,3]),3])
[1] -1  1
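That range suggests an invertibility constraint at work in the estimation: for an $MA(1)$ polynomial $1+\theta z$, the root is $z=-1/\theta$, which lies outside the unit circle exactly when $|\theta|<1$. A sketch of the check (my own illustration, not from the original post):

```r
# The MA(1) polynomial 1 + theta*z has root z = -1/theta;
# the MA part is invertible iff |z| > 1, i.e. |theta| < 1
ma_invertible <- function(theta) {
  theta == 0 || Mod(polyroot(c(1, theta))) > 1
}
ma_invertible(0.5)  # TRUE
ma_invertible(1.2)  # FALSE
```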
