The usual approach to testing software is to create a specific problem and see if the software gets the correct answer. Although this is very useful, there are problems with it:
- It is labor-intensive
- It almost entirely neglects the error-handling code (the code that throws errors)
- There can be unconscious bias in the test cases created
One alternative is to create problems with random inputs. I gave a talk on this at useR! 2011 called “Random input testing with R”.
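To make this concrete, here is a minimal sketch of random input testing in R. It is my own illustration, not code from the talk: it feeds random vectors to `sort` and checks properties that must hold for every input.

```r
set.seed(42)  # make the random inputs reproducible
for (i in 1:100) {
  x <- rnorm(sample(0:50, 1))  # random length, random values
  y <- sort(x)
  # properties that must hold no matter what the input was:
  stopifnot(
    length(y) == length(x),  # nothing lost or added
    !is.unsorted(y),         # result is ordered
    setequal(x, y)           # same values appear
  )
}
```

The point is that we never compute a specific "correct answer"; we only assert properties, so the test generation is cheap and free of the biases a hand-picked case carries.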
A question at the talk concerned full coverage: a fully random distribution is not going to be efficient at covering the whole input space. I don’t have any particular experience with this, but here is my thought: if the space you are concerned about is one where you can track how many times each point has been hit, then you could dynamically change the input distributions to increase the chance of hitting the less-visited points.
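The idea above might be sketched as follows. This is a hypothetical illustration of mine, not code from the talk: the input space is cut into bins, hits per bin are counted, and new draws favor the colder bins.

```r
# Coverage-guided sampling on [0, 1]: weight each bin by
# 1 / (1 + hits) so that bins hit less often are favored.
adaptive_sampler <- function(n_bins) {
  hits <- rep(0L, n_bins)
  draw <- function() {
    w <- 1 / (1 + hits)                   # cold bins get more weight
    bin <- sample.int(n_bins, 1, prob = w)
    hits[bin] <<- hits[bin] + 1L          # record the hit
    (bin - 1 + runif(1)) / n_bins         # uniform draw within the bin
  }
  list(draw = draw, hits = function() hits)
}

s <- adaptive_sampler(10)
x <- replicate(1000, s$draw())
```

After a thousand draws the hit counts even out far faster than they would under plain uniform sampling, because every draw reweights away from the points already covered.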
Past versions of the Portfolio Probe software have benefited experimentally from this technique. Future versions will be subjected to more thorough tests of this sort.