
As I am finishing up my thesis I have recently been plotting effects from many models. An important aspect of this is to show the uncertainty surrounding different estimates and effects. Following a paper by Gary King, Michael Tomz and Jason Wittenberg, it has become very popular among quants in political science to simulate the uncertainty rather than rely on the delta method. They take advantage of the fact that the central limit theorem shows that, with a large enough sample and bounded variance, it is possible to simulate parameter values by drawing from a multivariate normal distribution, with the means set equal to the estimated parameters and the variance equal to the variance-covariance matrix from the model. Stata users have access to the excellent Clarify module from Gary King and colleagues, which implements their approach; however, since I am doing most of my work in R, this is not an option. King and friends have implemented much of the same functionality in the Zelig package for R, but I find extracting exactly what I want from objects created by Zelig to be a chore. Luckily, it is very easy to implement this on your own in R. The approach advocated by King and colleagues follows a five-step process:
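In R, these draws can be obtained with `mvrnorm()` from the MASS package (a minimal sketch; here `model` stands in for any fitted model object, such as one returned by `glm()`):

```r
library(MASS)  # for mvrnorm()

# Draw 1,000 simulated parameter vectors from a multivariate normal
# centred on the point estimates, with the model's variance-covariance matrix
sim_betas <- mvrnorm(n = 1000, mu = coef(model), Sigma = vcov(model))
```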
1. Simulate the parameters
2. Define the values at which the variables are held constant
3. Calculate the systematic component of the model for each set of simulated parameters
4. Use the systematic component to calculate your quantity of interest
5. Repeat steps 1-4 1,000 times, or until you have the desired degree of accuracy
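The steps above can be sketched in a few lines of R. This is a minimal illustration under invented assumptions: the data-generating process, coefficient values, and the variable names (V1, V2, V3, following the post) are all made up for the example, and the quantity of interest is taken to be the predicted probability:

```r
library(MASS)     # mvrnorm()
library(ggplot2)

# Fake data: V3 is the variable of interest (assumed names and coefficients)
set.seed(1234)
n  <- 1000
V1 <- rnorm(n)
V2 <- rnorm(n)
V3 <- rnorm(n)
y  <- rbinom(n, 1, plogis(0.5 + 0.3 * V1 - 0.4 * V2 + 0.8 * V3))
m  <- glm(y ~ V1 + V2 + V3, family = binomial(link = "logit"))

# Step 1 (and step 5 folded in): simulate 1,000 parameter vectors
sims <- mvrnorm(1000, mu = coef(m), Sigma = vcov(m))

# Step 2: hold V1 and V2 at their means, vary V3 across its range
v3_range <- seq(min(V3), max(V3), length.out = 100)
X <- cbind(1, mean(V1), mean(V2), v3_range)

# Steps 3-4: systematic component X %*% beta, then the quantity of
# interest: predicted probabilities via the inverse logit
p <- plogis(X %*% t(sims))   # 100 x 1000 matrix of probabilities

# Summarise across simulations: mean and 95% interval at each V3 value
plot_dat <- data.frame(
  V3    = v3_range,
  mean  = apply(p, 1, mean),
  lower = apply(p, 1, quantile, probs = 0.025),
  upper = apply(p, 1, quantile, probs = 0.975)
)

ggplot(plot_dat, aes(x = V3, y = mean)) +
  geom_ribbon(aes(ymin = lower, ymax = upper), alpha = 0.3) +
  geom_line() +
  labs(y = "Predicted probability")
```

Because all 1,000 draws are generated in one `mvrnorm()` call and the systematic component is computed by matrix multiplication, no explicit loop over simulations is needed.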
Here I will show how to do this for a logit model. I first generate some fake data and then work through the steps above. Finally, I plot the effects with uncertainty using ggplot2. Based on the above R script, we get the following plot of the effect of the variable V3 and the uncertainty surrounding it: