A Work of Art: Efron on Bayesian Inference

[This article was first published on Revolutions, and kindly contributed to R-bloggers].

(Contributing blogger Joseph Rickert reports from the Stanford University Statistics Seminar series – ed.)

Stanford University is very gracious about letting the general public attend many university events. Yesterday it caught my eye that Bradley Efron was going to speak on Bayesian inference and the parametric bootstrap at the weekly Statistics seminar. Since the free shuttle to the Stanford quad practically stops at Revolution's front door, I got myself down there, where I found a standing-room-only crowd of Stanford faculty and students. Rob Tibshirani, a former student of Efron's, did his best to give Efron a hard time in a humorous introduction, but he didn't stand a chance against Efron's quick, dry wit.

Exploring the relationship between frequentist and Bayesian thinking has been one of Efron's lifelong grand themes. In this talk, he used an early paper of Fisher's and an underappreciated paper by Newton and Raftery to show how importance sampling is a computationally efficient alternative to MCMC for certain classes of problems, and to explore the link between Jeffreys priors and frequentist estimates. Efron's presentation was a masterpiece. His talk was tight, meticulously prepared, and delivered with an effortless grace that sustained the illusion that even the densest among us could follow the details. It was like having the company of a master painter on a leisurely Sunday visit to the museum: here expounding theory, there telling an anecdote about the painter or discussing some fine point of technique.
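Efron's slides aren't reproduced here, but the importance-sampling idea he highlighted is easy to sketch in R. The following is a minimal, purely illustrative example, not taken from the talk: the model (normal data with a flat prior on the mean), the simulated data, and the proposal distribution are all my own choices. The posterior mean is estimated by drawing from a cheap proposal and reweighting, with no Markov chain required.

```r
# Toy model: x_i ~ Normal(theta, 1) with a flat prior on theta,
# so the posterior mean should be close to mean(x).
set.seed(1)
x <- rnorm(20, mean = 2)                 # simulated data

# Log-likelihood of theta given the data (vectorized over theta)
loglik <- function(theta) {
  sapply(theta, function(t) sum(dnorm(x, mean = t, sd = 1, log = TRUE)))
}

# Proposal: a normal centered at the sample mean with inflated spread,
# so it safely covers the posterior
prop_mean <- mean(x)
prop_sd   <- 2 / sqrt(length(x))
theta     <- rnorm(1e5, prop_mean, prop_sd)

# Self-normalized importance weights (flat prior contributes a constant);
# subtracting the max log-weight avoids numerical underflow
logw <- loglik(theta) - dnorm(theta, prop_mean, prop_sd, log = TRUE)
w    <- exp(logw - max(logw))
post_mean <- sum(w * theta) / sum(w)     # importance-sampling posterior mean
```

For this conjugate toy problem `post_mean` lands essentially on `mean(x)`, which is the point: when independent draws from a decent proposal are available, reweighting them replaces an entire MCMC run.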

One goal of the talk was to demonstrate how one might estimate the frequentist properties of Bayesian estimates. Toward the conclusion, Efron remarked that if you have a real prior, even if it's only in your head, then your analysis stands on its own; but if you are going to use an uninformative prior, then you ought to check your results with frequentist methods.

For the R enthusiasts in the crowd, a small surprise came on slide 22. When Efron reached the first line of the slide, he paused to remark on its mixed notation and pointed out that two of the inventors of the new notation were in attendance (Chambers and Hastie). I have been saying for some time now that the R language facilitates statistical thought. Now I have some evidence.

 
