
Work Smarter and Not Harder

[This article was first published on Mango Solutions, and kindly contributed to R-bloggers.]

by Brian Mitchell, Mango Solutions

Here at Mango we take testing very seriously, and as the automated tester in the company, I take it more seriously than most.

Automated testing does exactly what it says on the tin: it allows you to automate a number of test scenarios. We use specialised software to simulate mouse clicks and keyboard entry, and write custom scripts that can be executed against an application many times over.

My job is to try and come up with the most efficient way of testing our products before they go out the door to our customers. Working within the constraints of time and cost, I need to make things as easy as possible; I want to be able to push a button (maybe two if I’m feeling particularly energetic) and sit and watch the results roll in.

If the tests fail I want to know why immediately: is it a test failure or a genuine bug? Understanding quickly why and how it has failed is extremely important. If it's a bug then we can raise an issue and get it into the developers' hands as quickly as possible.

To achieve this I use a number of different technologies, including Test Complete, Jenkins and Python.

When I first started to carry out automated testing, I noticed a number of inefficiencies in the process. These will be described below, along with the steps I took to resolve them.

One of the first things I noticed was that it took a long time to gather result files, because they were being collected manually.

Manual collection of results seemed crazy – why do something by hand when you can easily automate it? So the first thing I did was write a script to collect the result files from the various Jenkins jobs we have. This turned an hour-long job into a few seconds. I then went a step further and used the XML data in Jenkins to sort the result files into three folders: Pass, Fail and Unstable.
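The collect-and-sort step can be sketched roughly as below. This is a minimal illustration, not the actual script: the directory layout, the `result.xml` filename and the `<result>` element are assumptions standing in for whatever the real Jenkins jobs produce.

```python
"""Sketch: copy each job's result file into a folder named after its status.
Paths and XML layout are illustrative assumptions, not the real setup."""
import shutil
import xml.etree.ElementTree as ET
from pathlib import Path

def sort_results(jobs_dir: str, out_dir: str) -> None:
    out = Path(out_dir)
    # Create the three destination folders up front
    for status in ("Pass", "Fail", "Unstable"):
        (out / status).mkdir(parents=True, exist_ok=True)
    # Assume each job directory holds a result.xml with a <result> element
    for xml_file in Path(jobs_dir).glob("*/result.xml"):
        result = ET.parse(xml_file).getroot().findtext("result", "UNSTABLE")
        # Map Jenkins-style SUCCESS/FAILURE/UNSTABLE onto the folder names
        folder = {"SUCCESS": "Pass", "FAILURE": "Fail"}.get(result, "Unstable")
        shutil.copy(xml_file, out / folder / f"{xml_file.parent.name}.xml")
```

The payoff is that "where did job X end up?" becomes a glance at one of three folders rather than a trawl through every job's workspace.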

This made finding the results a lot quicker, but I didn’t stop there.

I decided to start using the Test Complete plugin for Jenkins, which allows you to view the result files within the Jenkins console rather than opening them up separately. This saves time as well as disk space.

 

But I still wanted a single place where I could view all the results, so I decided to start learning Python. Python allowed me to write a script that collects the XML results and pumps them into an HTML table; after a suite is executed, the Python script runs and an overview of the results is generated as a table.

Python is a high-level programming language, designed so that code is very readable and complex tasks can be expressed in a few lines.

The table shows me whether each test passed or failed. If a failure occurs, the script grabs the first error message and places it in the table. This doesn't show me the exact cause of the failure, but it gives me a good idea. Each test has a hyperlink which takes you to the full results, where you can investigate further why the test is failing.

Collecting the results is one issue; another is execution. The previous system in Jenkins was set up using the build trigger system, meaning all the jobs were put in order and locked to a node, which can be slow. This was inefficient, as I was unable to execute a single test easily; to do so I had to go into the config of a test, change all sorts of triggers, kick it off and then remember to change them back when it finished.

I decided to completely redesign how the tests were kicked off. I wanted the ability to kick off all the tests against any browser and any available test server without changing anything. This was done by separating the tests into particular groups, sorted under a root group. I won't go into the details of how this is set up in Jenkins, but with it I can run Test 1 on its own, execute the whole of group 1, or execute from the root, which runs everything – all with a few button clicks.

When executing a test you can decide which server to test against and which browser to use, or you can fall back on the defaults.
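In Jenkins this is done with build parameters, but the idea is the same as an ordinary parameterised entry point: named options with sensible defaults. A hedged sketch, where the option names and default values are my own invented placeholders:

```python
"""Sketch of parameterised execution: choose a browser and test server,
or fall back on defaults. Names and defaults are illustrative assumptions."""
import argparse

def parse_run_options(argv=None):
    p = argparse.ArgumentParser(description="Kick off a test run")
    p.add_argument("--browser", default="chrome",
                   help="browser to run against (default: chrome)")
    p.add_argument("--server", default="test-server-1",
                   help="test server to target (default: test-server-1)")
    return p.parse_args(argv)
```

Running with no arguments picks up both defaults; overriding one option leaves the other untouched.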

Figure 1: Old Jenkins System

Figure 2: New Jenkins System

This makes it a lot easier to verify whether issues have been resolved, as you can run only the tests that originally found the issue.

Another one of my objectives was to reduce the time it took to run the entire suite; historically, it had taken more than 24 hours to run every test. This is too long for fast development.

So with the steps I have taken, as well as increasing the number of test slaves from 3 to 5, the tests now simply wait in a queue; as soon as a slave is free, the next test in line is started.
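The queueing behaviour is the standard worker-pool pattern: one shared queue of tests, and each free slave immediately takes the next in line. A toy model of it (the slave count matches the post; the stand-in `pass` result is a placeholder, not a real test run):

```python
"""Toy model of the queue: tests wait in one shared queue and each free
slave immediately takes the next in line. run logic is a stand-in."""
import queue
import threading

def run_suite(tests, n_slaves=5):
    todo = queue.Queue()
    for t in tests:
        todo.put(t)
    done = []
    lock = threading.Lock()

    def slave():
        while True:
            try:
                test = todo.get_nowait()  # grab the next test in line
            except queue.Empty:
                return  # queue drained: this slave is finished
            result = f"{test}: pass"  # stand-in for actually running the test
            with lock:
                done.append(result)

    workers = [threading.Thread(target=slave) for _ in range(n_slaves)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return done
```

Because no test is locked to a particular slave, adding slaves shortens the wall-clock time roughly in proportion, which is what brought the full run down from over 24 hours.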

The full run is now down to around 12 hours, so we are able to run it overnight, every night. Along with the CI tests that are executed every 2 hours (paused while the full suite is running), we are pretty well covered.

My main aim walking into this job was to improve the process: to be able to run any test that's required of me almost immediately (if the test exists, of course), and to get the results as quickly as possible.

I was always told as a system administrator that you shouldn’t need to work harder; you just need to work smarter. I take this piece of advice into every task I do. How do I automate as much as possible so I can concentrate on the more important tasks?

 
