Antoine de Saint-Exupéry once wrote that “Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away.”
Truer words were never spoken. Then again, Saint-Exupéry, who died in 1944, never had to work with software application testing tools in a world of perpetual evolution and changing consumer tastes. I wonder: if he were alive today, would he apply the same quote to software testing management?
I don’t know.
I do know that software testing accounts for the overwhelming majority of development costs (as much as 80%, according to some estimates). So perhaps, in strictly accounting terms, a case can be made that additional testing delivers diminishing returns over time. There comes a point when a product is “perfect enough” not to warrant the expense of further testing: you won’t realize any extra profit from more scripts or debugging.
But we’re not accountants. We face a very different mandate. So from one tester to another – can software testing ever become excessive?
Software Application Testing Tools – When to Call It Quits
We’ve explored variations of this theme over the past couple of months.
- Software testing is never finished if you don’t have the right tools for the job.
- Even if you have the right software application testing tools, are you even asking the right questions to begin with?
- You have the right tools and are asking the right questions, but are you (the tester) expert enough to combine the two for maximum impact?
But let’s assume that you meet all of the above criteria. You have the expertise, the tools, and the questions. Is there ever a time when you can put software testing aside and call a project “complete enough?”
Well, yes and no.
Product release dates essentially force us to put our software application testing tools on a temporary pause as we face the market’s music and hope that people like what they see. These stops don’t come from us – they come from on high (i.e. a boss or the marketing team). But what if we ran the show?
That’s a good question.
According to the Dutch computer science guru E.W. Dijkstra, “[You] can write a million tests, and still miss the one test that would catch that bug that crashed the production server and forced you to work the whole weekend. No amount of testing will ever guarantee your system will work correctly….”
Ouch! It almost makes you wonder, “What’s the point – why even bother testing at all?”
Well, aside from job security or pride in one’s work, continuous software testing makes the product more and more reliable. Cops will never make the streets 100% safe, and multivitamins will never completely remove the need to see doctors, but they’re very good security measures to have in place.
So we should never give up, since perpetual improvements lead to perpetual gains. But not giving up and going overboard (i.e. testing excessively) are two very different things. Can we ever do the latter?
That depends on how you define excessive. If your software testing management is not based on sound reasoning, you could be wasting a lot of valuable time. So yes, excessive testing is definitely possible.
Part of the secret behind intelligent, non-excessive software testing management lies in our ability to segment the problem and make reasonable assumptions.
In other words, we can’t anticipate every need, every input, or every possible result. We can, however, draw on established guidelines, previous experience, and the wisdom of the crowd to make educated guesses and test representative samples.
The universe of potential bugs is infinite. But at any given moment in time, the list of “likely” defects is much more manageable. If you use your software application testing tools to target this smaller list, I don’t think it’s possible to become excessive. In fact, it’s just good business (oh wait – we’re not accountants). I mean, it’s just common sense.
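One classic way to “segment the problem” and test representative samples is equivalence partitioning combined with boundary-value analysis: divide the infinite input space into classes that the software should treat identically, then test one representative per class plus the boundaries between classes, where defects tend to cluster. A minimal sketch in Python (the `ticket_price` function and its price tiers are hypothetical, invented purely to illustrate the technique):

```python
# Hypothetical pricing rule used to demonstrate equivalence partitioning:
# children under 13 pay 5, adults 13-64 pay 10, seniors 65+ pay 7.
def ticket_price(age):
    if age < 0:
        raise ValueError("age cannot be negative")
    if age < 13:
        return 5
    if age < 65:
        return 10
    return 7

# Instead of testing every possible age, we test one representative per
# equivalence class, plus the boundary values between classes.
cases = {
    0: 5,    # boundary: youngest valid age
    7: 5,    # representative child
    12: 5,   # boundary: last child fare
    13: 10,  # boundary: first adult fare
    40: 10,  # representative adult
    64: 10,  # boundary: last adult fare
    65: 7,   # boundary: first senior fare
    80: 7,   # representative senior
}

for age, expected in cases.items():
    assert ticket_price(age) == expected, f"unexpected price for age {age}"

print("all representative cases pass")
```

Eight test cases cover an input space that is, for practical purposes, unbounded. That is the “smaller list” in action: a handful of well-chosen samples stands in for the infinite universe of inputs.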
Excess Has No Place in a Job that Has No End
Software development is a never-ending process, and thus, software testing has no finish line either. According to Dijkstra,
“In the past 10, 15 years, the power of commonly available computers has increased by a factor of a thousand. The ambition of society to apply these wonderful pieces of equipment has grown in proportion and the poor programmer, with his duties in this field of tension between equipment and goals, finds his task exploded in size, scope, and sophistication. And the poor programmer just has not caught up.”
He delivered these words in 1972, in his Turing Award lecture “The Humble Programmer,” and admittedly, the quote focuses more on the developer than on the tester.
But the lesson is still the same.
Given the exponential growth in both computing power and consumer demand over the past 40 years, we’re even further from excess than we were when Dijkstra first spoke the words above. Applied correctly, today’s tools and insights matter more than at any other time in the history of software testing.