10w2170, Banff [2]

[This article was first published on Xi'an's Og » R, and kindly contributed to R-bloggers.]

Over the two days of the Hierarchical Bayesian Methods in Ecology workshop, we managed to cover normal models, testing, regression, Gibbs sampling, generalised linear models, Metropolis-Hastings algorithms and of course a fair dose of hierarchical modelling. At the end of the Saturday marathon session, we spent one and a half hours discussing some models studied by the participants, which were obviously too complex to be solved on the spot but well-defined enough that we could work on their MCMC implementation and analysis. And on Sunday morning, a good example of Poisson regression proposed by Devin Goodman led to an exciting on-line programming of a random-effects generalised linear model, with the lucky occurrence of detectable identifiability issues that we could play with… I am impressed by the resilience of the audience, given the gruesome pace I pursued over those two days, covering the first five chapters of Bayesian Core, all the way to the mixtures! In retrospect, I think I need to improve my coverage of testing, as the noninformative case presumably sounded messy. And unconvincing. I also fear the material on hierarchical models was not sufficiently developed. But, overall, the workshop provided a wonderful opportunity to exchange with bright PhD students from Ecology and Forestry about their models and (hierarchical) Bayesian modelling.
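For readers curious about what such an identifiability issue looks like, here is a minimal sketch in R. The model below is hypothetical (the participants' actual model is not reproduced here): counts y_ij are Poisson with log-rate mu + a_i, and the random effects a_i are N(delta, s²). The fixed intercept mu and the random-effect mean delta then enter the marginal likelihood only through their sum mu + delta, so the data cannot separate them:

```r
## Hypothetical Poisson random-effects model (illustration only):
##   y_ij ~ Poisson(exp(mu + a_i)),  a_i ~ N(delta, s^2)
## Only mu + delta is identifiable: shifting mass between mu and
## delta leaves the marginal likelihood unchanged.
set.seed(1)
y <- matrix(rpois(20, lambda = 3), nrow = 4)   # 4 groups, 5 obs each

marg.loglik <- function(mu, delta, s) {
  ## marginal log-likelihood, integrating each a_i out numerically
  sum(apply(y, 1, function(yi) {
    log(integrate(function(a)
      sapply(a, function(ai)
        prod(dpois(yi, exp(mu + ai))) * dnorm(ai, delta, s)),
      lower = delta - 10 * s, upper = delta + 10 * s)$value)
  }))
}

l1 <- marg.loglik(mu = 1.0, delta = 0.2, s = 0.5)
l2 <- marg.loglik(mu = 0.5, delta = 0.7, s = 0.5)  # same mu + delta
c(l1 = l1, l2 = l2)   # identical up to quadrature error
```

In an MCMC run, this non-identifiability typically shows up as a strong negative correlation between the draws of mu and delta, with their sum mixing well while each component drifts.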

As an aside, I also noticed that several of the participants from the University of Alberta were using (or considering using) data cloning, an MCMC method for computing maximum likelihood estimates that was locally developed by Subhash Lele (University of Alberta). The method uses MCMC data augmentation to optimise likelihood functions by considering a larger and larger number of replicates of the missing data. Data cloning is thus essentially a user-friendly simulated annealing technique for missing data models, and one of the multiple replicas of what I called the prior feedback technique in 1991. The idea of prior feedback (also exposed in our 1996 JASA paper with Gene Hwang) is that, when the likelihood is raised to a high power, any prior distribution with an unrestricted support leads to a “posterior” distribution concentrated around the MLE. (I actually got this idea from reading the 1991 Read Paper by Murray Aitkin on posterior Bayes factors. The following discussion was quite critical, including Dennis Lindley’s now famous “One hardly advances the respect with which statisticians are held in society by making such declarations”!, but instead of solving the problem with improper priors in Bayesian testing it suggested to me a computational technique to compute MLEs…) With Arnaud Doucet and Simon Godsill, we developed in 2002 an algorithm for missing data and state-space models called SAME (for state augmentation for marginal estimation) that got published in Statistics and Computing but resurfaces periodically in the literature, the latest avatar being data cloning… Judging from a web search, the data cloning approach is getting more popular in Ecology, including through an R package named dcr developed by Peter Solymos.
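The prior feedback / data cloning idea can be checked on a toy conjugate case (my own made-up Poisson-Gamma example, not taken from Lele's implementation): cloning the data K times is the same as raising the likelihood to the power K, so a Gamma(a, b) prior yields the posterior Gamma(a + K·Σy, b + K·n), whose mean converges to the MLE, the sample mean, and whose standard deviation shrinks like 1/√K:

```r
## Toy illustration of prior feedback / data cloning:
## Poisson sample with a Gamma(a, b) prior on the rate. With K clones
## of the data, the posterior is Gamma(a + K * sum(y), b + K * n),
## which concentrates at the MLE mean(y) as K grows.
set.seed(42)
y <- rpois(30, lambda = 4.2)
a <- 2; b <- 1                       # an arbitrary proper prior
mle <- mean(y)

post.mean <- function(K) (a + K * sum(y)) / (b + K * length(y))
post.sd   <- function(K) sqrt(a + K * sum(y)) / (b + K * length(y))

sapply(c(1, 10, 100, 1000),
       function(K) c(K = K, mean = post.mean(K), sd = post.sd(K)))
## the posterior mean drifts from a compromise with the prior towards
## mean(y), while the posterior sd shrinks like 1/sqrt(K)
```

In non-conjugate missing-data models the same limit is obtained by running MCMC on the K-fold cloned data and increasing K, which is where the simulated-annealing flavour of the method comes from.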


Filed under: R, Statistics, University life Tagged: Banff, BIRS, data cloning, dcr, Dennis Lindley, hierarchical Bayesian modelling, prior feedback, SAME algorithm
