I used to work for a company that had a few groups of testers, each assigned to a separate mission on a need basis. You would see a project manager going around asking “where’s my testing group?”, trying to figure out which of the groups was assigned to their project. There were arguments, of course, to change this method and move to dedicated testers per project, or at least per project manager, but the case wasn’t as obvious as it might first sound.
Managing the testers’ time was easier when each was assigned on a need basis, so costs were handled better. Testers were challenged more frequently with new assignments and projects, which kept them happier and more productive. Sure, they had to learn new things and new applications, and the knowledge they had gained wasn’t always reused (on future similar projects), but is that enough to give up the advantages? The discussions went on, with no clear-cut conclusion.
I was reminded of this type of testing group when I was recently asked whether there’s a connection between the quality of testing and the distance (physical and procedural) between the testers and the developers. Let’s think it through together. Consider these options:
- Testers “in the cloud”
- An outsourced testing team
- A separate testing team within the company
- Integrated testing team, included in a project team
Each of these options presents a different set of challenges, and each has its pros and cons. We’ll take that as a given and not discuss which is better (there’s no definite answer anyway; it depends on a range of factors that vary in type and importance from company to company).
So, we’re not discussing the quality of these options, but rather how each of them might affect the quality of testing.
Testing quality factors
I’ll risk it, probably bringing on me many (angry) comments, and list the factors contributing to quality (warning: partial list, serving the cause of this post):
- Testers’ knowledge of the tested software and of what has changed, and how long they have been working with it
- How involved the testers are and how much they care; that is, how connected they feel to the company and how much they feel part of the project
- Working environment: how many interruptions and how much pressure the testers face
- Flexibility: the ability to add testers during a project’s rush periods
- The compensation, benefits, and rewards offered to testers
Looking at the two lists (proximity options and quality factors), we can observe a possible relationship between the testers’ location, their place in the process, and the quality of testing. A team assembled “in the cloud”, made up of testers from different places and different time zones, can be cost-effective and fast, but will score low on other quality factors such as involvement and knowledge of the tested software. This might lower their total testing quality score.
At the other end of the spectrum, a team of testers that is practically part of a project, working alongside all the other participants (designers, developers, etc.), seems more likely to score higher on these latter quality factors, but might prove to cost much more and be less diversified and less flexible. So testing might be better, whatever “better” means (a subject for another post…), or not.
You can compare each factor against each group yourself and see how the correlation plays out for you.
Of course, this doesn’t mean there’s always only one option to choose from. Each company and each software project can make the decision best suited to its needs, its budget, and its preference for what matters most at that point in time. How they define better testing is crucial here; it can lead to choosing testers in different ways (as happens in the real world, where we see all of these options in use). More than that, many will try to combine more than one approach, enjoying the benefits of each while managing down its problems. The main takeaway here is to be aware of why and how to make a smart decision.
Agree? Disagree? Wish to share your experience? Comment below.