Don’t stop being a statistician once the analysis is done

June 30, 2011
(This article was first published on Statistical Modeling, Causal Inference, and Social Science, and kindly contributed to R-bloggers)

I received an email from the Royal Statistical Society asking if I wanted to submit a 400-word discussion to the article, Vignettes and health systems responsiveness in cross-country comparative analyses by Nigel Rice, Silvana Robone and Peter C. Smith.

My first thought was No, I can’t do it, I don’t know anything about health systems responsiveness etc. But then I thought, sure, I always have something to say. So I skimmed the article and was indeed motivated to write something. Here’s what I sent in:

As a frequent user of survey data, I am happy to see this work on improving the reliability and validity of subjective responses. My only comment is to recommend that the statistical sophistication that is evident in the design and modeling in this study be applied to the summaries as well. I have three suggestions in particular.

First, I believe Figure 1 could be better presented as a series of line plots. As it is, the heights of the purple and blue bars dominate the picture, and it takes a lot of effort to see much beyond that. More thoughtful graphics could reveal more of the data.

Second, I am unhappy with the model evaluation in Table 3 using significance tests. No model is perfect, and it is hardly surprising that, with enough data, we can reject it. It is practical significance that we should care about, and practical significance is rarely measured by chi-squared tests.
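To illustrate this point with hypothetical numbers (not taken from the paper): suppose a model predicts a 50/50 split while the true split is 50.1/49.9, a discrepancy of no practical importance. A quick Pearson chi-squared calculation shows that a sufficiently large sample rejects even this model:

```python
# Pearson chi-squared statistic for a two-cell table, comparing a
# hypothetical 50.1/49.9 "truth" against a 50/50 model. Proportions are
# invented for illustration; observed counts are set to their expected
# values so that only the effect of sample size is visible.

def chi2_stat(n, p_true=0.501, p_model=0.5):
    """Chi-squared statistic for a two-cell table with n observations."""
    observed = [p_true * n, (1 - p_true) * n]
    expected = [p_model * n, (1 - p_model) * n]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# The 5% critical value for 1 degree of freedom is about 3.84.
print(chi2_stat(10_000))     # 0.04 -> far from rejection
print(chi2_stat(1_000_000))  # 4.0  -> "statistically significant"
```

The model's misfit is identical in both cases; only the sample size has changed, which is exactly why a rejection by itself tells us little of practical interest.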

Finally, I am unhappy with summarizing countries by ranks. Rankings are notoriously noisy, and the problem is exacerbated by presenting numbers to meaningless tenths of a percentage point (as in Table 6).
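The noisiness of rankings is easy to demonstrate by simulation. In this sketch (all parameters invented for illustration), ten "countries" have true scores only 0.01 apart while the measurement noise has standard deviation 0.05; the observed ranking almost never reproduces the true one:

```python
import random

def exact_ranking_rate(n_countries=10, gap=0.01, noise_sd=0.05,
                       n_sims=1000, seed=1):
    """Fraction of simulations in which ranking the noisy scores
    exactly recovers the true ordering of the countries."""
    rng = random.Random(seed)
    true_scores = [i * gap for i in range(n_countries)]  # 0.01 apart
    true_order = list(range(n_countries))
    exact = 0
    for _ in range(n_sims):
        noisy = [s + rng.gauss(0, noise_sd) for s in true_scores]
        order = sorted(range(n_countries), key=noisy.__getitem__)
        exact += (order == true_order)
    return exact / n_sims

print(exact_ranking_rate())  # near 0: the full ranking is almost never recovered
```

When the noise dwarfs the gaps between units, individual ranks carry little information, and reporting the underlying numbers to a tenth of a point conveys a precision the data cannot support.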

Again, I make these comments to encourage these and other researchers to continue to think statistically, even after the model has been fit.

I do think this is a big deal in general. The model is fit and then people turn their brains off. They don’t think about how a table or graph might be read, or what its role would be in future decisions. I think it would be good if researchers could step back a bit and think more objectively about how they are summarizing their results.
