There's no doubt that automation testing is sexy and impressive – it seems like a great cure for the tedium of manual testing. Imagine being able to write the crappiest, laziest piece of software and have a magic automation tool painlessly find any and all defects and help you fix them.

Sadly, this is not possible. But you probably already knew that. Nothing compares to a good human tester with extensive experience, advanced skills, and sharp intuition. However, the human approach is not without its drawbacks either. Human testers are more expensive, and that's only if you can actually find and train them. Keeping them at the same company for a long time is harder still.

But this post is not about the tester – it is about automation testing, when to use it, and when to avoid it. After all, automation testing is not always a bad thing. You don’t have to pay for training, and you never run the risk of sick days or the possibility that your automation tool will look for greener pastures. I would just caution anyone who embraces automation as a silver bullet – a panacea to fix all ills and completely replace actual human testers.

However, let's cover when automation is a true benefit. I can think of two distinct scenarios – "Not GUI" and "Boring Parameters."

Not GUI
This is the classic area in which automation thrives. When you're developing server-side components, protocols, and devices, you don't need an actual person sending raw commands to the device to verify functionality. Of course, developers naturally do this during the development stage, but requiring a physical tester on-site whose sole purpose is to type "sudo blah blah blah" is a phenomenal waste of time if you have an automation tool that can perform the same duties. Automated tests never get bored, rarely make mistakes, and report their results instantly.
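As a rough sketch of what this looks like in practice – the host, command, and expected output below are all invented for illustration – a non-GUI check is often just a few lines of Python:

```python
import subprocess

# Hypothetical example: exercise a device's command-line interface with no
# human at the keyboard. The host, command, and "operational" marker are
# placeholders for whatever your device actually speaks.
def test_device_reports_operational():
    result = subprocess.run(
        ["ssh", "admin@device.local", "show status"],  # assumed management command
        capture_output=True,
        text=True,
        timeout=30,
    )
    assert result.returncode == 0
    assert "operational" in result.stdout  # assumed healthy-state marker
```

Drop a file like this into a test runner such as pytest and it runs on every build, around the clock, without anyone typing a thing.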

Boring Parameters
This scenario applies to testing in which algorithms generate different results based on different parameters. A good example is the complex algorithm used to calculate your monthly cellular bill (minus all the sneaky fees that invariably require a "human" element). This type of testing can involve more parameters than NASA uses in spaceship design (270,000 according to Bruce Willis in Armageddon). In these cases, automation tools handle the vast number of boring, monotonous inputs far better than any human tester comfortably can.
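To make this concrete, here is a minimal sketch – the billing function, base fee, and rates are all invented – of the kind of parameter sweep a tool happily grinds through and a human dreads:

```python
import itertools

# Hypothetical billing function under test; the base fee, rates, and
# rounding rules are invented for this sketch.
def calculate_bill(plan_minutes, used_minutes, rate_per_extra_minute):
    overage = max(0, used_minutes - plan_minutes)
    return round(29.99 + overage * rate_per_extra_minute, 2)

def test_billing_parameter_sweep():
    # Sweep a grid of inputs no human tester would want to type by hand,
    # checking invariants rather than hand-computed totals.
    for plan, used, rate in itertools.product(
        [100, 500, 1000],         # plan sizes in minutes
        [0, 99, 100, 101, 5000],  # usage, including boundary values
        [0.10, 0.25, 0.45],       # per-minute overage rates
    ):
        bill = calculate_bill(plan, used, rate)
        assert bill >= 29.99      # never below the base fee
        if used <= plan:
            assert bill == 29.99  # no overage charged within the plan
```

Property-style assertions (the bill never drops below the base fee; no overage is charged within plan) keep the sweep meaningful without hand-computing all 45 expected totals.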

The approach that many automation tools take is something I call Sanity GUI. Such tools provide a great framework for recording interactions with your application's GUI, generating scripts that the tool can replay ad infinitum.
The rationale behind this approach is simple – you perform a "real" run through your application, the tool records it, and you can replay that run at any time in the future to ensure that everything still works properly. Moreover, the automation tool will never tire or complain, managers cannot "fault" you for not doing the tests, and the tool itself is actually quite enjoyable to use.
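To make the idea concrete, here is a minimal sketch of what such a replayed run often boils down to, using Selenium as a stand-in for whatever your record-and-replay tool generates. The URL, element IDs, and success marker are invented:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# A minimal sketch of a recorded GUI run replayed verbatim. A real
# record-and-replay tool generates the equivalent of this from a live session.
driver = webdriver.Chrome()
try:
    driver.get("https://example.com/login")        # assumed app URL
    driver.find_element(By.ID, "username").send_keys("tester")
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.ID, "submit").click()
    assert "Welcome" in driver.page_source         # assumed success marker
finally:
    driver.quit()
```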

I have no argument against this general rationale. On the surface it makes perfect sense. However, in my own testing experience across a broad range of applications and environments, I have found that "Sanity GUI" brings certain disadvantages. In practice, using automation tools this way often ends up costing me more time and money. In nearly all cases, recording is a breeze, script replay works like magic, result comparison is a snap, and my managers love the neatly generated reports. And on top of all this, the technology keeps getting better, bringing improvements on nearly all fronts. So what is the problem exactly – what is there not to like?

Simple. The problems typically start on Day 2.

After recording the scripts, it becomes increasingly cumbersome (i.e. expensive and time-consuming) to keep them up to date. Each time the GUI is modified, the scripts routinely fail. Moreover, there is usually little direct benefit to running the scripts at all, since I am already exercising the application through the GUI by hand. Very rare is the automation script that discovers something I hadn't already found on my own through normal application testing.
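Continuing the invented login example from above: suppose a developer renames the button's ID from "submit" to "login-button". The recorded step now fails even though a human tester would sail through the new GUI without noticing anything:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoSuchElementException

# Same hypothetical app, one renamed element ID. The recorded script breaks;
# the application itself works fine.
driver = webdriver.Chrome()
try:
    driver.get("https://example.com/login")        # assumed app URL
    driver.find_element(By.ID, "submit").click()   # recorded against the old GUI
except NoSuchElementException:
    print("False alarm: the GUI changed; the application did not break.")
finally:
    driver.quit()
```

Every such rename means re-recording or hand-patching the script – exactly the expensive, time-consuming maintenance I'm describing.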

But for me, the worst thing about automation testing is the de-emphasis it places on living, breathing testers. Not only do I waste time on a tool that finds defects I already know about, but the tool also reports "false" defects caused by the constantly changing GUI. Every minute spent sorting through known defects and false positives is a minute taken away from real testers running the actual application to find real defects.

When all is said and done, hours of automated testing may need to be followed up with hands-on human testing, which raises the question: why automate in these specific cases at all? It's like paying people to clean your yard knowing full well that you'll have to go over their work when they're done and essentially start from scratch, as if they hadn't done anything at all. This is a waste of both time and money.

To Automate or Not to Automate?
As I mentioned before, automation is not a bad thing – in most of the industrialized world, automation is a buzzword that connotes efficiency, reduced costs, and speed. But when deciding whether or not to automate, we should remember that automation itself is not the goal. Rather, the goal is the efficiency, speed, or reduced cost that automation can bring to designing or producing a superior product. In other words, the product is the ultimate aim, and one should decide what role automation can play in bringing that aim within reach.

In our industry, testing (and not automation) is the ultimate goal. When automation yields better tests at lower cost, with better results and fewer downstream headaches, such tools are clearly beneficial. However, if using automation tools means more cleanup, more human testers after the fact, and an inferior or costlier product, I caution you to exercise good judgment. Real testers may cost more upfront, but if the end result is a product that makes users happier (and your company more profitable), then automation is clearly not the best course of action.