There’s a funny irony in the software testing world. As quality assurance professionals, we routinely spend months testing products for clients, yet we generally dislike testing the platforms we use to run those tests.

The reason is simple.

Finding the right test management solution carries a host of startup and switching costs, including:

  • Researching the options
  • Importing the data
  • Learning the new environment
  • Configuring the platform

As software testers, we don’t simply want the right tools for the job; we also want to minimize the transition time and learning curve. Every minute spent getting set up with a new platform is time taken away from our core work (i.e. testing software products).

Over the years, we’ve published a series of articles on:

  • Different ways to rate software QA testing platforms
  • The pitfalls of becoming too focused on features, bells, and whistles
  • The importance of cost and convenience when selecting testing tools

Although popular, these posts discuss the theory behind choosing software test management solutions. Many of our readers asked us for a more practical guide: a simplified formula that can help distill a sea of options into a more manageable pool of candidates.

What follows is a 34-point checklist for selecting the optimal test management solution for your team. When using it, we recommend scoring each point on a scale of 0 to 5, with 0 meaning the attribute is non-existent and 5 meaning it is exceptional. Let intuition be your guide when rating the sections below, but remember that each score should reflect an action (i.e. how well the platform performs), not a feature (i.e. whether the option exists).
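If you would rather keep score in code than in a spreadsheet, the short sketch below shows one way to tally those 0-to-5 ratings per category and overall. The category names and numbers are hypothetical placeholders, not ratings of any real tool; substitute your own checklist sections and scores.

  # Illustrative score tally for one candidate platform (Python).
  # Category names and ratings are hypothetical placeholders.
  scores = {
      "Installation and Maintenance": [4, 3, 5, 4, 2, 3],
      "Learning Curve": [5, 4, 4, 5],
      "Writing Tests": [3, 4, 4, 3, 5, 4],
      # ...remaining checklist categories follow the same pattern
  }

  for category, points in scores.items():
      total, maximum = sum(points), 5 * len(points)
      print(f"{category}: {total}/{maximum} ({total / maximum:.0%})")

  grand_total = sum(sum(points) for points in scores.values())
  grand_max = 5 * sum(len(points) for points in scores.values())
  print(f"Overall: {grand_total}/{grand_max} ({grand_total / grand_max:.0%})")

Comparing the per-category percentages across candidates makes it easier to see where each platform is strong or weak, rather than relying on a single overall number.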

1. Installation and Maintenance Effort

How quickly can you get up and running? And how much ongoing maintenance is required to keep the testing platform operational?

  1. Server Installation and Configuration
  2. Client Installation and Configuration
  3. Upgrading from Version to Version
  4. Backup and Restoring Capabilities
  5. Ease of Importing Data from Current Test Sets
  6. Disaster Recovery Plan Capabilities

2. Learning Curve for New Users

How quickly can a new user become familiar with the platform?

  1. Adding and Running Tests
  2. Reporting Defects
  3. Adding and Deleting Projects
  4. Inviting New Testers

3. Writing Tests

How intuitive is the logic of this environment, and can your team create and customize new tests with ease?

  1. Ability to Write an N-Step Test for N-1 Results
  2. Ability to Build a Test Suite Hierarchy
  3. Support for Adding Test Attributes
  4. Support for Rich Text Formatting in Test Descriptions
  5. Ability to Reuse Tests Repeatedly
  6. Ability to Use the Same Test in Several Suites

4. Planning and Executing Tests

Once a test is written, how effectively does the platform plan and execute it?

  1. Assign Tests to Single or Multiple Testers
  2. Schedule Due Dates for Test Executions
  3. Support Run Configuration for Test Executions
  4. Rerun Failed Tests
  5. Automatically Manage Test Execution Times
  6. Automated Reporting of Test Results

5. Reporting of Defects and Errors

How well does the platform log and report defects?

  1. Report Defects for Specific Tests
  2. Report Defects for Specific Steps within Tests
  3. Attach Multimedia (e.g. Files, Screenshots) to Defects
  4. Send Defect Report to Bug Tracker Tool
  5. Synchronize Defect Status with Bug Tracking Tool

6. Requirements Management

How effectively can you organize, verify, and update changing requirements – both within your team and with clients?

  1. Link Test to Requirement
  2. View Total Test Coverage
  3. Import Changes from Requirement Tool

7. Reporting and Sharing

Once a test is executed, how easily can you share the results with other team members or clients?

  1. Generate Test Execution Report
  2. Generate Lab/Cycle/Tester Report
  3. Send Reports to External Users
  4. Send Reports by Email

Additional Test Management Criteria

Any platform that scores well on the above checklist will likely handle the majority of projects. But sometimes this minimum threshold isn’t enough. Depending on personal preferences or client expectations, you may need to explore additional criteria, including:

  • Performance. Does the tool perform at a reasonable speed? Are the results consistent over time? And is the platform able to handle large numbers of tests or data sets?
  • Availability. If using SaaS tools, how available (and reliable) are the servers? Is there a published track record or guarantee regarding uptime?
  • Security. How comfortable are you with the tool’s security policy? Do you feel confident that your data will remain safe?
  • Pricing. What is the pricing model? What is and isn’t included? Are there hidden fees? And is the cost comparable to other software testing tools of equal value?
  • Backup. How easy is it to back up your data? Is your data in a format that works well with other tools or for other purposes? Are there any additional backup options that come standard with the service?
  • Support. Can you easily reach technical support by phone or email – 24/7? What is the average response time? And are you satisfied with the level and quality of the support?
  • Upgrades. Is this a static environment, or can you expect continuous improvements over time? If the latter, how frequent are these improvements? And are they free?

As mentioned before, these additional criteria are optional, and they should only be factored in after a potential test management solution has already satisfied the primary checklist outlined earlier. This order ensures that you’ll select an optimal platform using the fewest possible resources.
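To make that ordering concrete, here is a minimal sketch of the two-stage filter. The 70% passing threshold, the candidate names, and the scores are invented purely for illustration, not recommendations.

  # Illustrative two-stage filter (Python): threshold, names, and scores are hypothetical.
  PASSING_THRESHOLD = 0.70  # assumed minimum share of the primary checklist's maximum score

  candidates = {
      "Tool A": {"primary": 0.82, "additional": 0.75},
      "Tool B": {"primary": 0.64, "additional": 0.90},  # fails the primary checklist
      "Tool C": {"primary": 0.78, "additional": 0.81},
  }

  # Stage 1: keep only platforms that satisfy the primary checklist.
  shortlist = {name: s for name, s in candidates.items() if s["primary"] >= PASSING_THRESHOLD}

  # Stage 2: rank the survivors using the optional criteria.
  for name, s in sorted(shortlist.items(), key=lambda kv: kv[1]["additional"], reverse=True):
      print(f"{name}: primary {s['primary']:.0%}, additional {s['additional']:.0%}")

Note that the additional criteria never rescue a platform that failed the primary checklist, which mirrors the order recommended above.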