There's a funny irony in the software testing world. As quality assurance professionals, we routinely spend months testing products for clients, yet we generally dislike testing the platforms we use to run those tests.
The reason is simple.
Finding the right test management solution carries a host of startup and switching costs, including:
- Researching the options
- Importing the data
- Learning the new environment
- Configuring the platform
As software testers, we don’t simply want the right tools for the job. We want to minimize the transition and learning curve as much as possible. Every minute spent getting set up with a new platform is time taken away from our core work (i.e. testing software products).
Over the years, we’ve published a series of articles on:
- Different ways to rate software QA testing platforms
- The pitfalls of becoming too focused on features, bells, and whistles
- The importance of cost and convenience when selecting testing tools
But although popular, these posts discuss the theory behind choosing software test management solutions. And many of our readers asked us for a more practical guide – a simplified formula that can help distill a sea of options into a more manageable pool of candidates.
What follows is a 34-point checklist of tips for selecting the optimal test management solution for your team. When using this checklist, we recommend scoring each of the points below on a scale of 0 to 5, with 0 meaning the attribute is non-existent and 5 meaning that the attribute is exceptional. Let intuition be your guide when rating the sections below, but remember that each score should reflect an action (i.e. how well the platform performs) – not a feature (i.e. does the option exist or not).
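The tallying itself is simple arithmetic, but it helps to see it end to end. Here is a minimal sketch of scoring a candidate platform against the checklist; the category names and ratings are illustrative placeholders, not real product scores:

```python
def total_score(ratings):
    """Sum per-criterion ratings (each on the 0-5 scale) into one overall score."""
    for category, scores in ratings.items():
        if any(not 0 <= s <= 5 for s in scores):
            raise ValueError(f"Scores in {category!r} must be between 0 and 5")
    return sum(sum(scores) for scores in ratings.values())

# Hypothetical ratings for one candidate platform (first three checklist sections):
candidate = {
    "Installation and Maintenance": [4, 3, 5, 4, 2, 3],
    "Learning Curve": [5, 4, 4, 5],
    "Writing Tests": [3, 4, 4, 5, 5, 4],
}
print(total_score(candidate))  # prints 64
```

Rating every candidate the same way produces a single comparable number per platform, which is exactly the distillation the checklist is meant to provide.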
1. Installation and Maintenance Effort
How quickly can you get up and running? And how much ongoing maintenance is required to keep the testing platform operational?
- Server Installation and Configuration
- Client Installation and Configuration
- Upgrading from Version to Version
- Backup and Restoring Capabilities
- Ease of Importing Data from Current Test Sets
- Disaster Recovery Plan Capabilities
2. Learning Curve for New Users
How quickly can a new user become familiar with the platform?
- Adding and Running Tests
- Reporting Defects
- Adding and Deleting Projects
- Inviting New Testers
3. Writing Tests
How intuitive is the logic of this environment, and can your team create and customize new tests with ease?
- Ability to Write N-Step Test for N-1 Results
- Ability to Build Test Suite Hierarchy
- Support for Adding Test Attributes
- Support for Rich Text Formatting in Test Descriptions
- Ability to Reuse Tests Across Test Runs
- Ability to Use Same Test in Several Suites
4. Planning and Executing Tests
Once a test is written, how effectively does the platform execute the script?
- Assign Tests to Single or Multiple Testers
- Schedule Due Dates for Test Executions
- Support Run Configuration for Test Executions
- Rerun Failed Tests
- Automatically Manage Test Execution Times
- Automated Reporting of Test Results
5. Reporting of Defects and Errors
How well does the platform log and report defects?
- Report Defects for Specific Tests
- Report Defects for Specific Steps within Tests
- Multimedia Attachments (e.g. Files, Screenshots)
- Send Defect Report to Bug Tracker Tool
- Synchronize Defect Status with Bug Tracking Tool
6. Requirements Management
How effectively can you organize, verify, and update changing requirements – both within your team and with clients?
- Link Test to Requirement
- View Total Test Coverage
- Import Changes from Requirement Tool
7. Reporting and Sharing
Once a test is executed, how easily can you share the results with other team members or clients?
- Generate Test Execution Report
- Generate Lab/Cycle/Tester Report
- Send Reports to External Users
- Send Reports by Email
Additional Test Management Criteria
Any platform that scores well using the above checklist will likely be able to handle the majority of projects. But sometimes this minimum threshold isn’t enough. Depending on personal preferences or client expectations, you may need to explore additional criteria, including:
- Performance. Does the tool perform at a reasonable speed? Are the results consistent over time? And is the platform able to handle large numbers of tests or data sets?
- Availability. If using SaaS tools, how available (and reliable) are the servers? Is there a published track record or guarantee regarding uptime?
- Security. How comfortable are you with the tool’s security policy? Do you feel confident that your data will remain safe?
- Pricing. What is the pricing model? What is and isn’t included? Are there hidden fees? And is the cost comparable to other software testing tools of equal value?
- Backup. How easy is it to back up your data? Is your data stored in a format that works well with other tools or for other purposes? Are there any additional backup options that come standard with the service?
- Support. Can you easily reach technical support by phone or email – 24/7? What is the average response time? And are you satisfied with the level and quality of the support?
- Upgrades. Is this a static environment, or can you expect continuous improvements over time? If the latter, how frequent are these improvements? And are they free?
As mentioned before, these additional criteria are optional. And they should only be factored in after a potential test management solution has already satisfied the primary checklist outlined earlier. This order ensures that you’ll select an optimal platform using the fewest possible resources.