An Ode to Testing, my first review

[This article was first published on rOpenSci - open tools for open science, and kindly contributed to R-bloggers.]

To give you an idea of where I am in my R developer germination, I’d just started reading about testing when I received an email from @rOpenSci inviting me to review the weathercan package. Many of us in the R community feel like imposters when it comes to software development. In fact, as a statistician, it was a surprise to me when I was recently called a developer.

In terms of formal computer science training, I took one subject in first year, with the appropriate initialism OOF. Ostensibly, this was to school me in Object Oriented Fundamentals, but mostly educated me in just how much one person can pontificate about doubles and floats. I am almost always befuddled by regexes on the rare occasions I come across them.

However, through undertaking this review, which began with the revelation that I’m not alone in thinking, “What if I have absolutely nothing to say other than, yes, this is, in fact, a package?!”, I have come to see that all R users are R family (aw). No doubt these are well-worn cobblestones that I judder my bicycle along. Despite this, it felt like a unique journey given my current fascination with testing.

Now I think any R user can be a Reviewer. That is, surely there’s something to be said for having someone who’s relatively uninitiated take your package for a whirl, at the very least. We all want our packages to be usable by all data science folk, not just advanced programmers.

I was delighted that the testthat package was part of the recommended reviewer workflow, alongside devtools functions that were new to me. The rOpenSci package guide was a veritable fount of good tips for someone who has only just begun building their first packages.
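For the curious, this is roughly what that looked like from my end. These are just the commands I leaned on, not the official reviewer checklist, and they assume devtools is installed and that you are working from a local copy of the package source.

```r
# Grab the package under review and run its checks locally
# (assumes devtools is installed; not the official review checklist)
devtools::install_github("ropensci/weathercan")

# From within a local clone of the package source:
devtools::test()   # run the testthat suite
devtools::check()  # full R CMD check: tests, examples, documentation
```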

In this context, testing refers to a reproducible and more systematic approach to the kind of ad hoc console testing we R peeps do when writing functions. Does it output expected results? For a variety of inputs? Does it fail as it should when passed something it shouldn’t be?
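As a rough sketch (the function here is invented purely for illustration, it is not part of weathercan), the testthat idiom looks something like this:

```r
library(testthat)

# A toy function, made up for this example
celsius_to_kelvin <- function(temp_c) {
  stopifnot(is.numeric(temp_c))
  temp_c + 273.15
}

test_that("expected results come back for a variety of inputs", {
  expect_equal(celsius_to_kelvin(0), 273.15)
  expect_equal(celsius_to_kelvin(c(-40, 100)), c(233.15, 373.15))
})

test_that("it fails as it should when passed something it shouldn't be", {
  expect_error(celsius_to_kelvin("balmy"))
})
```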

It all seemed straightforward when I was reading about it, but sitting down to write tests for functions I hadn’t looked at in a while was a daunting proposition. In a review, however, you’re considering other people’s functions and the questions that spring to mind are so much more obvious in an objective setting. For starters, do the tests cover all functions?
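One way to answer that last question, assuming the covr package is installed, is to measure test coverage from the package’s source directory:

```r
library(covr)

# Run from within the package source directory
cov <- package_coverage()  # runs the tests and records which lines they hit
cov                        # per-file coverage summary
zero_coverage(cov)         # lines (and therefore functions) no test touches
```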

I must admit to holding off on actually looking at the test code; I wanted to get a feel for the syntax first and have a try myself. So, when I did pop the hood on the tests directory, I appreciated it all the more. Looking at the tests written for weathercan, it is clear @steffilazerte is a good ways past reading the testing chapter for the first time.

In a previous existence, I worked as a musician for almost two decades. Everything in my life has a soundtrack, and code is no exception. Looking through @steffilazerte’s tests, I heard this. My counterpoint lecturer said this is the single greatest piece of polyphony (more than one melody at the same time).

So, I dragged my feet a bit on the review, largely because the more I read the more I had to revisit testing on my own packages. Thinking about someone else’s tests made me want to explore what it was like to write a function along with its associated tests, at the same time.
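If you want to try the function-plus-test habit yourself, usethis (this functionality used to live in devtools) will scaffold the files in matching pairs. The function name below is just a placeholder of my own invention.

```r
# From within a package project: creates R/clean_station_names.R and a
# matching tests/testthat/test-clean_station_names.R
# ("clean_station_names" is a made-up example name)
usethis::use_r("clean_station_names")
usethis::use_test("clean_station_names")
```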

I’m half-way through my doctorate in statistics, coming through from a maths background. So, up until now, my analyses have been horrible towering pillars of R script files which sourced functions from each other.

A Towering Pillar of Hats. https://wiki.teamfortress.com/w/images/f/f5/Towering_Pillar_of_Hats.png

Sometimes I’d source functions from other files but then worry they were broken. I heard this.

A wise man (@njtierney) recently said to me that statisticians can learn a lot from the development community, and he was not wrong.

In the past, if I came back to code after a couple of months I’d be plagued by anxiety when using a function. Is this the latest iteration? In which script file did I leave the latest iteration lying around? Does every other script file call the latest version?

Writing functions with documentation and tests at the same time feels like this.

Now, not only do I know where everything is, but I can also trust that the functions work the way the documentation says they do, because the tests check exactly that. No longer will Current Charles be cursing Past Charles for her inscrutable code!
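In practice that means a roxygen2 header sitting right on top of the function, with a test file alongside it. Again, the function is my own made-up illustration, not real weathercan code.

```r
# R/clean_station_names.R --------------------------------------------

#' Tidy up weather station names
#'
#' @param x A character vector of raw station names.
#' @return A character vector in lower case with whitespace trimmed.
#' @export
clean_station_names <- function(x) {
  stopifnot(is.character(x))
  trimws(tolower(x))
}

# tests/testthat/test-clean_station_names.R ---------------------------

test_that("clean_station_names does what the docs promise", {
  expect_equal(clean_station_names("  BRISBANE  "), "brisbane")
  expect_error(clean_station_names(42))
})
```

Then devtools::document() turns the header into the help file, and devtools::test() keeps Past Charles honest.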

D.Va portrait. https://i0.wp.com/upload.wikimedia.org/wikipedia/en/5/55/D.Va_Overwatch.png?w=578&ssl=1
