
“We don’t need likelihood functions; we just need to know how to simulate from [them] (…) We don’t need models with sufficient statistics; we just need summary statistics (…) We don’t need to be Bayesian; we just need to be approximately so. We don’t need theory to tell us our method works; we just need to simulate and see.”

In the April/May issue of the IMS Bulletin (which I only received yesterday!), Terry Speed’s column is about simulation. He bemoans having come late to the game and missing the early days of the simulation revolution, missing MCMC, sequential Monte Carlo, and particle filters (if not ABC), when he (or his students) would have benefited from those tools. This is a very nice column altogether, concluding however with a somewhat over-optimistic view of simulation as the new deus ex machina. I think theory remains a necessary item in the picture: to derive methods and algorithms, to validate or invalidate models and convergence, and to make sense of the flow of information provided by simulation itself. (To pick on the above quote, our recent work on ABC model choice showed that summary statistics may critically fail to “provide information about the parameters in our model”.)
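To make the quote’s point concrete, here is a minimal sketch of rejection ABC, the “simulate instead of evaluating the likelihood” idea: draw a parameter from the prior, simulate data, and keep the draw only when a summary statistic of the simulated data falls close to the observed one. Everything here (a normal mean model, a vague normal prior, the sample mean as summary, the tolerance) is an illustrative assumption, not anything from Speed’s column.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: observed data assumed drawn from N(theta, 1),
# theta unknown; the sample mean serves as the summary statistic.
observed = rng.normal(2.0, 1.0, size=100)
s_obs = observed.mean()

def abc_rejection(s_obs, n_accept=200, tolerance=0.05, n_data=100):
    """Rejection ABC: accept prior draws whose simulated summary is near s_obs.

    Note that the likelihood is never evaluated, only simulated from.
    """
    accepted = []
    while len(accepted) < n_accept:
        theta = rng.normal(0.0, 10.0)              # draw from a vague prior
        sim = rng.normal(theta, 1.0, size=n_data)  # simulate a data set
        if abs(sim.mean() - s_obs) < tolerance:    # compare summaries, not data
            accepted.append(theta)
    return np.array(accepted)

posterior_sample = abc_rejection(s_obs)
```

The accepted draws approximate the posterior of theta given the summary, not given the full data, which is exactly where the closing caveat bites: if the chosen summary statistic is uninformative about the parameter (or, in model choice, about the model index), the ABC output can be arbitrarily misleading, however many simulations one runs.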