Thursday, March 15, 2007

Balancing Exploratory Testing With Scripted Testing

This is the best explanation I have ever seen of the balance testers must strike between scripted and exploratory testing:

"To the extent that the next test we do is influenced by the result of the last test we did, we are doing exploratory testing. We become more exploratory when we can't tell what tests should be run, in advance of the test cycle, or when we haven't yet had the opportunity to create those tests. If we are running scripted tests, and new information comes to light that suggests a better test strategy, we may switch to an exploratory mode (as in the case of discovering a new failure that requires investigation). Conversely, we take a more scripted approach when there is little uncertainty about how we want to test, new tests are relatively unimportant, the need for efficiency and reliability in executing those tests is worth the effort of scripting, and when we are prepared to pay the cost of documenting and maintaining tests. The results of exploratory testing aren't necessarily radically different than those of scripted testing, and the two approaches to testing are fully compatible."

This makes sense to me. The messing around I do while figuring out how new things work and how to test them is really a form of exploratory testing, and for very small applications it may be all the testing that is needed. But with a bigger app like eduCommons, it is impossible to keep everything organized without scripted (preferably automated) tests.
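To be concrete about what I mean by a scripted test, here is a minimal sketch using Python's unittest module. The function and names are made up for illustration; the point is that the checks are written down once and can then be re-run mechanically.

```python
import unittest

# Hypothetical function under test -- it stands in for any piece of
# application logic worth covering with a scripted check.
def enroll_user(course, user):
    if user in course["enrolled"]:
        raise ValueError("already enrolled")
    course["enrolled"].append(user)

class EnrollmentTest(unittest.TestCase):
    def setUp(self):
        self.course = {"enrolled": []}

    def test_enroll_adds_user(self):
        enroll_user(self.course, "alice")
        self.assertEqual(self.course["enrolled"], ["alice"])

    def test_double_enroll_is_rejected(self):
        enroll_user(self.course, "alice")
        self.assertRaises(ValueError, enroll_user, self.course, "alice")

if __name__ == "__main__":
    unittest.main()
```

Once a check lives in a file like this, it runs the same way every time, which is exactly the efficiency and reliability the quote above is talking about.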

Testability

Grig Gheorghiu presented at PyCon and then blogged about what makes software more testable. Here it is, shamelessly borrowed from his blog:

I mentioned a list put together by Michael Bolton, and summarized/enhanced by Adam Goucher in this blog post. Recommended reading, both for developers who want to add testing hooks into their software, and for testers who want to know what to ask for from developers so that their life gets easier (and if you're one of the unfortunate souls who have to deal with Java or .NET, this blog post by Roy Osherove talks about testability and pure OOP.)
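To make the idea of a testing hook concrete, here is a small sketch of my own (not one from Michael's or Adam's lists). One of the simplest hooks there is: let the caller inject a dependency, in this case a clock, so tests can control it. All names here are made up.

```python
import time

class SessionManager:
    # Accepting the clock as a parameter is a tiny testing hook:
    # production code uses time.time, tests pass in a fake clock.
    def __init__(self, timeout=300, clock=time.time):
        self.timeout = timeout
        self.clock = clock
        self.started = clock()

    def expired(self):
        return self.clock() - self.started > self.timeout

# In a test, a controllable fake clock replaces the real one:
class FakeClock:
    def __init__(self, now=0.0):
        self.now = now
    def __call__(self):
        return self.now

clock = FakeClock()
session = SessionManager(timeout=10, clock=clock)
clock.now = 11.0
assert session.expired()
```

Production code never notices the hook, but a tester can now exercise timeout behavior in milliseconds instead of waiting out five real minutes.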

Although our tutorial was focused on tools and techniques for implementing test automation, we also mentioned that you will never be able to get rid of manual testing. Even though the Google testing team says that 'Life is too short for manual testing' (and I couldn't agree more with them), they hasten to qualify this slogan by adding that automated testing frees you up to do more meaningful exploratory testing.

My experience as a tester shows that the nastiest bugs are often discovered by manual testing. But when you do discover them manually, the best strategy is to write automated tests for them, so that from then on that particular area of your application is checked by an automated test suite running in your continuous integration system.
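As a made-up illustration of that strategy, a bug found by manual poking can be pinned down with a regression test, so the suite fails loudly if it ever comes back:

```python
import unittest

# Hypothetical scenario: exploratory testing revealed that titles with
# non-ASCII characters crashed a formatting routine. After fixing it,
# pin the behavior with a regression test.
def format_title(title):
    # The buggy version did: return title.encode("ascii").title()
    return title.title()

class TitleRegressionTest(unittest.TestCase):
    def test_non_ascii_title_does_not_crash(self):
        # This exact input used to raise UnicodeEncodeError.
        self.assertEqual(format_title(u"caf\xe9 menu"), u"Caf\xe9 Menu")

if __name__ == "__main__":
    unittest.main()
```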

You do have an automated test suite, right? And it does run periodically (daily or on every check-in) in a continuous integration system, right? And you have everything set up so that you're notified by email or RSS feeds when something fails, right? And you fix failures quickly so that everything turns back to green, because you know that too much red, too often, leads to broken windows and bit rot, right?
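The notification piece need not be fancy. As a sketch (the addresses, paths, and scheduler are all hypothetical), a script like this can be run from cron or a build master and mail the team whenever the suite goes red:

```python
import smtplib
import subprocess
from email.mime.text import MIMEText

def run_suite_and_notify():
    # Run the whole test suite and capture its output.
    result = subprocess.run(
        ["python", "-m", "unittest", "discover", "tests"],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        # Suite is red: mail the failure output to the team.
        msg = MIMEText(result.stdout + result.stderr)
        msg["Subject"] = "Test suite FAILED"
        msg["From"] = "ci@example.com"
        msg["To"] = "team@example.com"
        server = smtplib.SMTP("localhost")
        server.sendmail(msg["From"], [msg["To"]], msg.as_string())
        server.quit()

if __name__ == "__main__":
    run_suite_and_notify()
```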

If you answered No to any of these questions, then you are not testing your application, period. (But you already knew this if you took our tutorial -- it was on the last slide.) :-)