Improper diagnoses are bad for business. This is most apparent in healthcare, where “false positives” result in hundreds of billions of dollars in waste every year.

Worse still are the complications that emerge when real problems go untreated. Patients become sicker and sometimes die because physicians fail to address the real reasons those patients are in the hospital in the first place.

The same parallel exists in other industries – notably software testing. Although death doesn’t usually factor into the equation, false positives can still produce serious consequences.

Let’s take a look.

The Dangers of Excessive or Improper Software Testing

Software testers are to IT what diagnosticians are to healthcare. We’re responsible for finding and reporting bugs so that developers can address these defects and create a more stable build.

But what happens when we “pinpoint” bugs that don’t exist or don’t need fixing?

A lot of bad things can happen. Below is a quick list of problems – ranked by severity:

1. Wasted Time
We end up wasting time checking for things that don’t truly matter. Equally bad, we create confusion by reporting defects that can’t be reproduced on the developer’s end.
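To make this concrete, here’s a minimal sketch (in Python, with hypothetical names) of the kind of test that generates irreproducible reports: it asserts on wall-clock timing, so it can fail on a loaded CI server and then pass cleanly on the developer’s machine.

    import time

    # Hypothetical function under test; the sleep stands in for real work.
    def process_batch(items):
        time.sleep(0.001 * len(items))
        return [item * 2 for item in items]

    def test_process_batch_is_fast():
        # Anti-pattern: asserting on elapsed time ties the verdict to
        # machine load rather than to the product. Under load this fails;
        # on the developer's workstation it passes – an irreproducible "bug".
        start = time.monotonic()
        result = process_batch(range(100))
        elapsed = time.monotonic() - start
        assert result == [i * 2 for i in range(100)]
        assert elapsed < 0.2  # brittle threshold

    def test_process_batch_output():
        # Safer: assert only on behaviour, which reproduces anywhere.
        assert process_batch([1, 2, 3]) == [2, 4, 6]

The timing assertion reports a “defect” that says more about the test machine than about the product.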

2. Backward Progress
False positives force development teams to fix bugs that don’t need fixing. Worse, unnecessary changes can make the product less stable – especially when you factor in all the interdependencies that exist in a sophisticated build.

This issue is particularly pronounced when software testers try to fix the problem themselves without consulting the development team. Read our earlier article to understand why this wall should never be breached.
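To see how a needless “fix” can destabilize an interdependent build, consider this contrived sketch (all names hypothetical): a tester reports that recent_orders() isn’t sorted by ID and files it as a defect, even though newest-first ordering was deliberate.

    # Hypothetical API: orders are returned newest-first by design.
    def recent_orders():
        return [{"id": 7}, {"id": 3}, {"id": 5}]  # 7 is the newest

    # The "fix" for the reported non-bug: sort ascending by id.
    def recent_orders_fixed():
        return sorted(recent_orders(), key=lambda order: order["id"])

    # Interdependency: this caller assumed newest-first, so the "fix"
    # now hands users their oldest order as the latest one.
    def latest_order():
        return recent_orders_fixed()[0]  # previously recent_orders()[0]

    if latest_order()["id"] != 7:
        print("Regression: the 'fix' turned newest-first into oldest-first")

The reported “bug” was working as intended; the change that satisfied the report introduced a real one.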

3. Untreated Problems
Finding, reporting, and fixing false positives can prevent software teams from addressing real problems in the software. You’re not simply devoting resources to the “wrong” thing – you’re ignoring those aspects most critical to success.

4. Eroded Cohesion
The above issues can delay product launches and anger clients. But the problem of false positives extends much further than that. Waste often creates friction between testing and development teams. This breakdown in cohesion can affect other projects – both now and in the future.

Should Software Testing Become More Conservative?

Reversing this trend isn’t easy. After all, it’s our job to think outside the box and look for unusual problems in unusual places. And we often embrace methodologies like exploratory testing that are uniquely suited to this task.

But if our goal is to restrict the severity and frequency of false positives, should we adopt more conservative QA methodologies instead?

Should we start thinking inside the box?

We would argue no.

False positives can be dangerous. But becoming overly cautious inhibits our ability to find and address true positives – i.e., bugs that exist and need our attention.

So what’s the fix?

How to Avoid False Positives in Software Testing

There is no universal formula that will work in all situations. Software is dynamic, with each build facing unique constraints.

But there are steps you can take to dramatically reduce the impact that false positives have on quality assurance:

1. Better Goals
From the very earliest stages, all relevant team members should agree on what the product is supposed to do. The more clearly you define this goal, the more accurate all future tests become.
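One way to pin that agreement down – a minimal sketch, assuming the team has settled on a hypothetical shipping-cost rule – is to record the goal as executable tests, so “expected behaviour” never depends on anyone’s memory.

    # Agreed goal (hypothetical): orders of $50 or more ship free;
    # everything below pays a flat $5.
    FREE_SHIPPING_THRESHOLD = 50.00
    FLAT_RATE = 5.00

    def shipping_cost(order_total):
        return 0.00 if order_total >= FREE_SHIPPING_THRESHOLD else FLAT_RATE

    def test_free_shipping_at_threshold():
        # The boundary is part of the goal, so it is spelled out here.
        assert shipping_cost(50.00) == 0.00

    def test_flat_rate_below_threshold():
        assert shipping_cost(49.99) == 5.00

With the rule written down this way, a report like “shipping charged at exactly $50” can be checked against the agreed goal rather than against someone’s recollection of a meeting.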

2. Better Communication
Testers and developers must be on the same page throughout the entire process. Disagreements can (and will) occur. However:

  • Information must be shared.
  • Communication must be open.
  • Responsibilities must be defined.

3. Better Documentation
Improved communication makes it easier to clearly document the nature of each defect you find along the way (a reporting sketch follows this list):

  • If the product still performs as expected (based on the team’s pre-defined goals), there’s a good chance the “bug” doesn’t need fixing – you’ve most likely found a false positive, not a real defect.
  • If the product doesn’t perform as expected, further investigation is necessary. Keep looking until you’ve found the true positive holding you back.
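One lightweight way to do that documentation – a sketch reusing the hypothetical shipping example, with a made-up ticket ID – is to attach a minimal failing test to the report, so a developer can reproduce the defect (or show it’s a false positive) with a single command.

    import pytest

    # Hypothetical function under test, same rule as the earlier sketch.
    def shipping_cost(order_total):
        return 0.00 if order_total >= 50.00 else 5.00

    # The defect report, written as a reproducible test. If it passes on
    # the developer's machine, the report is probably a false positive –
    # and the discussion starts from shared evidence, not guesswork.
    @pytest.mark.xfail(reason="BUG-123 (hypothetical): negative totals "
                              "should be rejected, not billed a flat rate")
    def test_negative_total_is_rejected():
        with pytest.raises(ValueError):
            shipping_cost(-10.00)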

Do you agree with this analysis? Or are there better ways to reduce false positives (and increase true positives) in software testing?

Share your thoughts down below.