It's a "given" that your empirical results should be replicable by others. That's why more and more journals are encouraging, or even requiring, authors of such papers to "deposit" their data and code with the journal as a condition of acceptance for publication.
That's all well and good. However, replicating someone's results using their data and code may not mean very much!
This point was highlighted in a guest post today on the Political Science Replication blog site. The piece, by Mark Bell and Nicholas Miller, is titled "How to Persuade Journals to Accept Your Replication Paper". You should all read it!
Especially if you favour the Stata package!
Here are a few excerpts, to whet your appetites:
"We were easily able to replicate Rauchhaus’ key findings in Stata, but couldn’t get it to work in R. It took us a long while to work out why, but the reason turned out to be an error in Stata: Stata was finding a solution when it shouldn’t have (because of separation in the data). This solution, as we show in the paper, was wrong – and led Rauchhaus’ paper to overestimate the effect of nuclear weapons on conflict by a factor of several million.
It’s very easy when you’re working through someone else’s code to be satisfied when you’ve successfully got your computer to produce the same numbers you see in the paper. But replicating a published result in one software package does not mean that you necessarily understand what that software package is doing, that the software is doing it correctly, or that doing it at all is the appropriate thing to do – in our case none of those were true and working out why was key to writing our paper."
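For readers unfamiliar with the separation problem Bell and Miller mention, here's a tiny sketch of my own (not their data or code, and in Python rather than Stata or R) showing why it trips up maximum likelihood estimation. When a predictor perfectly separates the 0s from the 1s, the logistic log-likelihood can always be increased by making the coefficient larger, so no finite MLE exists. Software that stops and reports "a solution" in this situation is reporting an artifact of its convergence tolerance:

```python
import numpy as np

# Toy data with complete separation: y = 1 exactly when x > 0,
# so no finite slope coefficient maximizes the logistic likelihood.
x = np.array([-2.0, -1.0, -0.5, 0.5, 1.0, 2.0])
y = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])

def log_likelihood(beta):
    """Logistic log-likelihood for a single slope, no intercept."""
    p = 1.0 / (1.0 + np.exp(-beta * x))
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

# Simple gradient ascent on the slope. Because the data are separated,
# the gradient never reaches zero: beta just keeps drifting upward,
# while the log-likelihood creeps toward its supremum of 0.
beta = 0.0
for _ in range(20000):
    p = 1.0 / (1.0 + np.exp(-beta * x))
    beta += 0.1 * np.sum((y - p) * x)

print(beta)                  # large, and still growing with more iterations
print(log_likelihood(beta))  # just below 0, i.e. a near-"perfect" fit
```

Run the loop twice as long and the "estimate" gets bigger still; the stopping point is determined by the tolerance, not by the data. That is exactly the trap: a package can converge to some huge finite coefficient and report it as if it were a genuine MLE.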
"Learning new methods as you conduct the replication ensures that even if you don’t end up publishing your replication paper, you’ll have learnt new skills that will be valuable down the road." (The emphasis in red is mine.)
By the way, Mark and Nicholas are both grad students. More power to them!
© 2013, David E. Giles