JSM 2011 [3]


Monday, August 1, was the first full day of JSM 2011, and full is the appropriate word to describe the day! It started for me at 7am with a round table run by Marc Suchard on parallel computing (or at 3am, if I count from the time I woke up!). I was rather out of my depth there, given that my link with parallel computing is rather formal, having worked with Pierre Jacob and Murray Smith on the valid parallelisation of Metropolis-Hastings algorithms, but it was interesting to hear about the multiplicity of available solutions and the mainstream-isation of CUDA, which now includes generators for standard distributions, thanks to Marc.
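Since this is (also) an R blog, here is a minimal sketch of the crudest form of parallel MCMC, namely independent random-walk Metropolis-Hastings chains run on separate cores with base R's parallel package. This is only a generic illustration on a toy standard-normal target, not the validated parallelisation scheme of the paper with Pierre Jacob and Murray Smith:

library(parallel)

## random-walk Metropolis-Hastings chain targeting a standard normal
mh_chain <- function(n_iter, sd_prop = 1) {
  x <- numeric(n_iter)
  x[1] <- rnorm(1)
  for (t in 2:n_iter) {
    prop <- x[t - 1] + rnorm(1, sd = sd_prop)
    ## accept with probability min(1, pi(prop) / pi(current))
    if (log(runif(1)) < dnorm(prop, log = TRUE) - dnorm(x[t - 1], log = TRUE))
      x[t] <- prop
    else
      x[t] <- x[t - 1]
  }
  x
}

## four independent chains on four cores (mc.cores is ignored on Windows)
chains <- mclapply(1:4, function(i) mh_chain(1e4), mc.cores = 4)
draws <- unlist(chains)
mean(draws); sd(draws)  # should be close to 0 and 1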

My second session was Michael Jordan’s Neyman lecture, which was well-attended despite the early hour (8:30am). As usual, Michael gave a very well-articulated and broad talk. While the topic was rather close to the talks he gave in Edinburgh last year, I still came away with a new understanding of Bayesian non-parametrics, maybe because his Neyman talk was even more encompassing than the earlier ones. (It also made me wonder whether we should incorporate some of this approach in Bayesian Core, sorry, Bayesian Essentials with R; presumably not, because we are aiming at a lower complexity…) A provocative introductory sentence by Michael: “I do not like priors”, maybe a tribute to Neyman?!

My third session was the one I organised (with the blessing of ISBA) on Bayesian model assessment. While Feng Liang unfortunately could not make it to JSM, Andrew Gelman and Jean-Michel Marin shared the extra time, and Merlise Clyde gave a concluding talk that also ran longer than scheduled. It was a fantastic session, with a whole range of thoughtful and provocative proposals. (I bear absolutely no responsibility for the above, besides having invited those speakers!) Andrew drafted a very novel picture of how Bayesian model comparison could (should?) be run, getting away from the standard paraphernalia of Bayes factors, Occam’s razor, and the like. I did not agree with the whole of his proposal, especially when he considered handling several models together with “common” parameters, but it was exciting nonetheless! Jean-Michel presented a spatial mixture model where the component indicators are distributed according to a Potts model and the number of components is unknown; the posterior distribution of the number of components is approximated via Chib’s method. This is a complex model with an interesting solution, even though I am now waiting for the ABC comparison. Merlise concluded the session with a great summary of Bayesian model assessment, differentiating M-closed from M-open cases. This was very close to my own perspective on the topic; however, Merlise brought in the interesting new (for me!) idea that many decision-theoretic evaluations of models would favour model averaging. One additional thread linking the three talks was that they all involved simulated pseudo-data one way or another, from posterior predictive checks to ABC. The session was well-attended, to the point of running out of seats, especially considering that it competed with many other Bayesian sessions, like the Savage Award session.
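As an aside on that pseudo-data theme, here is a toy ABC rejection sampler in R, purely illustrative and unrelated to any of the speakers’ actual implementations: the unknown is a normal mean with a diffuse normal prior, and prior draws are accepted when the mean of the simulated pseudo-data falls close to the observed mean:

set.seed(1)
obs <- rnorm(50, mean = 2)      # made-up "observed" data
n_sim <- 1e5
theta <- rnorm(n_sim, 0, 5)     # simulate parameters from the prior
## simulate pseudo-data and keep the summary statistic (the mean)
pseudo <- sapply(theta, function(th) mean(rnorm(50, mean = th)))
eps <- 0.05                     # tolerance on the summary statistic
keep <- abs(pseudo - mean(obs)) < eps
mean(theta[keep])               # ABC posterior mean, close to mean(obs)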

Then, over lunch, I had my first meeting of the CHANCE editors, which was very nice and exciting, as it seems CHANCE is heading towards a new era with a broader scope and a larger range of columns. (The distinction with Significance is becoming clearer as well.) On a personal basis, I am starting my book editing right now, which means I have to produce a review by September 1. And I will certainly call on others to increase the number of book reviews and to broaden their perspectives. Offers of service are welcome!

After lunch, it was back to parallel computing, with the JCGS papers session. Radu Craiu gave a talk on his RAPTOR algorithm, somehow connected to his talk in Utah last winter. This was an interesting example of adaptive MCMC, maybe the only one I will attend at JSM. In a connected way, Timothy Hanson used a Pólya tree construction to build a better-fitted proposal for an independence Metropolis-Hastings algorithm. The examples were quite convincing, with nice movies of the sampler recovering the true target, my worry being the limitation of the method when the dimension of the parameter increases (as usual with independent proposals). The final talk of the session was about the link between GPUs and population-based MCMC, again connected to a talk I heard from Chris Holmes at Valencia 9 last year. The gains brought by using GPUs are once again staggering!
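To make the notion concrete, here is a bare-bones independence Metropolis-Hastings sampler in R, with a heavy-tailed Student t proposal standing in for the fitted Pólya tree proposal of the talk (a deliberate simplification, not Hanson's construction):

indep_mh <- function(n_iter, ltarget, rprop, ldprop) {
  x <- numeric(n_iter)
  x[1] <- rprop(1)
  for (t in 2:n_iter) {
    y <- rprop(1)
    ## the acceptance ratio involves the proposal density because the
    ## proposal does not depend on the current state
    logr <- ltarget(y) - ltarget(x[t - 1]) + ldprop(x[t - 1]) - ldprop(y)
    x[t] <- if (log(runif(1)) < logr) y else x[t - 1]
  }
  x
}

## toy run: standard normal target, t_3 proposal (heavier tails help)
out <- indep_mh(1e4,
                ltarget = function(x) dnorm(x, log = TRUE),
                rprop   = function(n) rt(n, df = 3),
                ldprop  = function(x) dt(x, df = 3, log = TRUE))
mean(out); sd(out)  # should be close to 0 and 1

This is also where the dimensionality worry shows: as the dimension grows, the proposal overlaps less and less with the target, and the chain gets stuck for long stretches.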

And then the day at JSM ended with the IMS presidential address, delivered by David Cox, about his views on statistical analysis. It was a brilliant, deep, foundational, and terribly impressive talk. The huge room was packed and I ended up standing at the back, which in a sense was more appropriate for the occasion. In the talk, David Cox mentioned seven kinds of Bayesians, from subjectivists to quasi-frequentists, while he saw only two kinds of frequentists, long-term validation versus calibration… Again, a very impressive talk!


