As a statistician, I was trained to think of randomized experimentation as representing the gold standard of knowledge in the social sciences, and, despite having seen occasional arguments to the contrary, I still hold that view, expressed pithily by Box, Hunter, and Hunter (1978): "To find out what happens when you change something, it is necessary to change it."
At the same time, in my capacity as a social scientist, I’ve published many applied research papers, almost none of which have used experimental data.
In the present article, I’ll address the following questions:
1. Why do I agree with the consensus characterization of randomized experimentation as a gold standard?
2. Given point 1 above, why does almost all my research use observational data?
In confronting these issues, we must consider some general issues in the strategy of social science research. We also take from the psychology methods literature a more nuanced perspective that considers several different aspects of research design and goes beyond the simple division into randomized experiments, observational studies, and formal theory.
Here’s the full article, which is appearing in a volume, Field Experiments and Their Critics, edited by Dawn Teele.
It was fun to write a whole article on causal inference in social science without duplicating the article that I'd recently written for the American Journal of Sociology, and I think it came out pretty well. Actually, it contains the material for several blog entries, had I chosen to present it that way. In any case, I think points 1 and 2 are central to any consideration of causal inference in applied statistics.