Living it up with computational errors
How to have a better chance of a good outcome.
Making mistakes
There’s been a lot of talk recently about data analysis problems with spreadsheets. If you’ve not stuck your head out of your cave lately, then you can catch some of the discussion by doing an internet search for:
Reinhart Rogoff
There are several points at issue, but one thing that has received a lot of airplay is a mistake in Excel. Now, I’m known in some circles as being not so keen on spreadsheets.
A lot of the criticism implies that if you use a more appropriate tool than a spreadsheet, then there won’t be any problems. Unfortunately, that isn’t the case.
As I’ve said before, the issue is not that mistakes don’t happen outside spreadsheets; it is that it is nearly impossible to eliminate mistakes in spreadsheets.
It is very easy to make mistakes in any computing environment. There are, for example, over a hundred pages of proof that mistakes are possible in R. But functions can be debugged and subjected to testing so that bugs are eliminated.
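As a minimal sketch (not from the original post) of what such testing can look like in base R: stopifnot() throws an error if any check fails, so a file of such checks can be run routinely. The circleArea function here is a made-up toy.

# a toy function and a few base-R checks on it -- circleArea is hypothetical
circleArea <- function(r) pi * r^2

stopifnot(
    isTRUE(all.equal(circleArea(1), pi)),   # known value
    circleArea(2) > circleArea(1),          # area grows with the radius
    circleArea(0) == 0                      # boundary case
)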
QA for data analysis
In a talk at LondonR, Fran Bennett wondered how to put data analyses into a testing framework, the way software is tested. Markus Gesmann gives a partial answer in his “Test Driven Analysis?” post.
But there is a key difference between data analysis and software. When we test software we know what the answer should be. Well, sometimes we don’t really know the answer, but we will almost certainly know important characteristics of the answer.
In contrast, the whole point of data analysis is that we are ignorant of the answer. Some things to do are:
- keep a record of your commands, so they can be reviewed
- check if the results are sensible
Keeping a record and checking it is easy in an environment like R. It is pretty much impossible with a spreadsheet.
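A sketch of what that might look like (the file name and column name here are hypothetical, not from the post): keep the analysis in a script, so the script itself is the record, and put a few sanity checks directly into it.

# analysis.R -- the record of the analysis is this hypothetical script
returns <- read.csv("returns.csv")             # hypothetical data file

stopifnot(
    nrow(returns) > 0,
    !anyNA(returns$daily_return),
    all(abs(returns$daily_return) < 1)         # a daily return over 100% is suspicious
)
summary(returns$daily_return)                  # eyeball the result for sensibleness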
Checking if your results are sensible is actually rather problematic. Often when doing data analysis we want a particular result. Suppose we are studying the niceness of girls, and we’d like girls in coffee shops to be nicer. If our results show that they are nicer, we have no motivation to scrutinize the analysis. But if the results say that we don’t meet nice girls in coffee shops, then we will carefully look through the analysis for mistakes.
This is efficient in terms of mistakes found per unit effort, but it is inefficient in terms of scientific results. An unexpected result is much more likely to be due to an error than an expected one is. But ideally we should be more motivated to disprove our pet theories than to confirm them.
QA for R
In a comment to “Interview with a forced convert from Matlab to R” Louis Scott talks about the lack of testing in R packages on CRAN. I think that is a valid and important concern. Base R is well-tested and well-controlled. But the typical use of R is a mixture of functionality from R Core and functionality from some number of CRAN packages. A user may not even be aware of all the packages on which their analysis depends.
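A quick way to see what the current session actually depends on is built into R:

> sessionInfo()   # attached packages, plus packages loaded via a namespace
> search()        # what is currently on the search path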
One of the best things that could happen for R is for CRAN packages to be better tested. Tao Te Programming was written to be language independent, but contributors to CRAN were most definitely in my target audience. There are a number of suggestions in the book about testing. One is:
- make the testing status of each function apparent
One place to put this information is in a “Testing status” section of the help file.
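A sketch of how that might look in an Rd help file (the wording here is hypothetical; \section is standard Rd markup):

% hypothetical wording -- the point is only that the testing status is stated
\section{Testing status}{
    The default method is covered by unit tests; the matrix
    method has only been tested informally.
}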
Another issue discussed there is that R has an unfortunate confounding of examples and testing. The examples in the help files are evaluated and used as a test of the software. A really good thing about R is that it has a culture of examples in the help files. But tests and examples have very different uses. When you confound them, you are likely to get commands that are not very good for either use.
The confounding has another downside: CRAN limits the time allowed for examples to run. That is a quite reasonable rule for CRAN, which deals with thousands of packages, but it means the examples can only ever be a quick check. Running the examples should be analogous to the pprobe.verify function in Portfolio Probe, which quickly tests that all the functions are present and that basic functionality is intact. That is not a replacement for the test suite, which (depending on some settings) takes hours to days to complete.
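That split might be sketched like this (hypothetical package and function names; this is not Portfolio Probe code): a quick smoke test that runs every time, and a long suite that runs only when asked for.

# tests/smoke.R -- quick check, in the spirit of examples: seconds, not hours
library(somePackage)                        # 'somePackage' is hypothetical
stopifnot(exists("someFunction"))           # the function is present
stopifnot(someFunction(1:3) == 6)           # basic functionality is intact

# the thorough suite is kept separate and run only on demand
if (nzchar(Sys.getenv("RUN_FULL_TESTS"))) {
    source("fullTestSuite.R")               # hypothetical long-running suite
}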
Prevention
Testing isn’t all there is.
Just because a function is bug-free doesn’t mean it is safe. The object is to have the entire process error-free. If I write something that lots of people use wrong, I’m not doing them a favor.
Consider an example from fund management. We want to get the value of a portfolio. The inputs are the number of units the portfolio holds for each asset, and the prices per unit for each asset. Here is an R function to do that:
> value
function (unitsInPortfolio, pricePerUnit)
{
    sum(unitsInPortfolio * pricePerUnit)
}
Simple, easy, no bugs. The arguments are even descriptive of what they should contain. Let’s use it:
> value(c(A=100, B=250), c(A=12.63, B=17.29))
[1] 5585.5
Grand. Let’s use it again:
> value(c(B=250, A=100), c(A=12.63, B=17.29))
[1] 4886.5
> value(c(A=100, B=250), c(A=12.63, B=17.29, C=21.34, D=16.77))
[1] 11912
Not so grand. In the first call the prices are matched to the units by position rather than by name, and in the second the units are silently recycled across four prices. It is exceedingly easy to get the wrong answer without any indication that something is wrong.
This is another theme in Tao Te Programming.
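One possible line of defence (a sketch, not a function from the post) is for value to insist on names and to match prices to units by name:

# a more defensive version -- a sketch, not the original function
value <- function(unitsInPortfolio, pricePerUnit)
{
    if(is.null(names(unitsInPortfolio)) || is.null(names(pricePerUnit)))
        stop("both units and prices need asset names")
    if(!all(names(unitsInPortfolio) %in% names(pricePerUnit)))
        stop("some assets in the portfolio have no price")
    # match by name, so the order and length of the price vector no longer matter
    sum(unitsInPortfolio * pricePerUnit[names(unitsInPortfolio)])
}

With this version both of the troublesome calls above return the intended 5585.5, and a portfolio holding an asset with no price is an error rather than a silently recycled guess.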
Epilogue
They hung a sign up in our town
“if you live it up, you won’t
live it down”