IS vs. self-normalised IS


I was grading my Master projects this morning and came upon this graph:

which compares the variability of an importance-sampling estimator with that of its self-normalised alternative… This is an interesting case in that self-normalisation considerably degrades the quality of the approximation in this setting, while in other cases it may bring a clear improvement. (This reminded me of a recent email from David Einstein complaining about imprecisions in the importance sampling section of Monte Carlo Statistical Methods, incl. the fact that self-normalisation was not truly addressing the infinite variance issue. His criticism is appropriate and we should rewrite this section towards more precise statements…)
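To fix notation, here is a minimal sketch (mine, not from the original post) of the two estimators being compared, using the same standard Cauchy target, Student t proposal and test function h(x)=sqrt(|x|) as in the experiment below; the sample size 10^4 is an arbitrary choice:

# plain IS versus self-normalised IS for E_f[h(X)], with f the standard Cauchy
# density and g the Student t density with df degrees of freedom (sketch only)
df=.5
x=rt(10^4,df=df)            # draws from the proposal g
w=dcauchy(x)/dt(x,df=df)    # importance weights f(x)/g(x)
h=sqrt(abs(x))
mean(h*w)                   # plain IS estimate (divide by the sample size)
sum(h*w)/sum(w)             # self-normalised IS estimate (divide by the weight sum)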

Maybe this is to be expected. Here is a similar comparison for finite and infinite variance cases:

compar=function(df,N){
  # 100 replications of N draws from the Student t proposal with df degrees of freedom
  y=matrix(rt(df=df,n=N*100),nrow=100)
  # h(x)=sqrt(|x|) times the importance weight, and the weight alone (Cauchy target)
  t=sqrt(abs(y))*dcauchy(y)/dt(y,df=df)
  w=dcauchy(y)/dt(y,df=df)
  # running plain IS averages (cumulated sums divided by the sample size) versus
  # running self-normalised averages (cumulated sums divided by cumulated weights)
  tone=t(apply(t,1,cumsum)/(1:N))
  wone=t(apply(t,1,cumsum)/apply(w,1,cumsum))
  # envelopes (max and min over the 100 replications) of both sequences
  ttwo=apply(tone,2,max)
  wtwo=apply(wone,2,max)
  tthree=apply(tone,2,min)
  wthree=apply(wone,2,min)
  # empty frame, with the vertical scale set by the plain IS envelope
  plot(apply(tone,2,mean),col="white",ylim=c(min(tthree),max(ttwo)))
  # when the plain IS range at iteration 100 is the narrower one, draw the
  # self-normalised envelope (chocolate) first and the IS envelope (wheat) on top
  if (diff(range(tone[,100]))<diff(range(wone[,100]))){
    polygon(c(1:N,N:1),c(wthree,rev(wtwo)),col="chocolate")
    polygon(c(1:N,N:1),c(tthree,rev(ttwo)),col="wheat")
  } else {
    polygon(c(1:N,N:1),c(tthree,rev(ttwo)),col="chocolate")
    polygon(c(1:N,N:1),c(wthree,rev(wtwo)),col="wheat")
  }
}
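A pair of calls along the following lines produces the two panels discussed next; the post only specifies the degrees of freedom, so N=10^3 is my own arbitrary pick:

par(mfrow=c(1,2))
compar(df=.5,N=10^3)    # heavier-tailed t proposal: finite variance weights
compar(df=2.5,N=10^3)   # lighter-tailed t proposal: infinite variance weights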

The outcome is shown above, with an increased variability for the self-normalised version in the finite variance case (df=.5, left) and a (meaningful?) decrease in the infinite variance case (df=2.5, right).


