Apple traditionally runs a very tight ship. Its products perform as promised, and leaks about future releases are relatively rare. So when the tech giant does stumble, its missteps are very noticeable.
I’m, of course, talking about the recent iPhone 5 mapping app debacle. One of the blogs that initially helped break the story received over 45,000 hits in just a few days – a testament to how quickly errors can become public (see my earlier blog post on how poor testing can quickly damage a company’s image).
The sheer volume of complaints forced Apple to drop its claim that this is “the most beautiful, powerful mapping service ever.”
At Testuff, we release our software testing tools for both the Mac and PC – we really respect Apple’s products for their ease and reliability, so we were naturally surprised to discover how error-prone the new mapping technology was.
Then again, maybe we shouldn’t have been that surprised. After all, our entire business model revolves around providing SaaS QA testing tools in an industry where 80% of software development costs go to detecting and fixing bugs.
So we got to asking ourselves, “How did this happen, and could better software test management tools have prevented this situation?”
How the Apple Debacle May Have Happened
One can only speculate as to how this unfolded. Like I said, Apple runs a tight ship whose inner workings are jealously guarded. But here’s a likely scenario.
They designed their mapping software using data supplied by TomTom (the same service that provides mapping data to AOL, Samsung, HTC, RIM, and even Google).
They then tested the mapping service in a few major urban centers and at different satellite offices around the globe. This approach makes sense on the surface, but:
- Major cities probably have the most reliable map data, making it harder to sniff out bugs there.
- Many of Apple’s offices are in remote locations, making it difficult to trigger real-world errors in the first place.
I don’t blame them for using this approach. You only have so many people and so many hours. It’s simply not possible to test an app of this sort at the global level.
Next, they probably ran any number of tests using any number of programs. I can only assume that these tests didn’t turn up major bugs.
And yet, a few bugs obviously slipped through. Whatever tools they were using may not have been the right ones for the job.
So Could Better Software Testing Tools Have Helped?
I honestly do believe that better tools could have helped Apple fix its bugs. But before delving into this further, it’s important that I outline our general philosophy regarding software QA testing tools.
We embrace flexibility as a matter of survival. Although we regularly solicit feedback from our users, it’s impossible to know, in advance, how every client intends to use our SaaS testing software. And thus, we’ve designed our test management suite to be as open and customizable as possible.
So, yes: I honestly do believe that under the right circumstances, better software QA testing tools could have helped Apple avoid its mapping woes.
But what are the right circumstances?
Effective Software Testing Requires Asking the Right Questions
Software QA testing tools are only as powerful as the questions asked.
To show you what I mean, let’s take a quick trip to Jurassic Park.
The scientists running Jurassic Park wanted to ensure that the population remained stable, so they only created female dinosaurs. No males means no offspring. Simple enough.
The scientists also programmed their electronic monitors to alert them if and when a dinosaur died or escaped. In other words, the trackers periodically counted the population to ensure that all 100 dinosaurs were present and accounted for.
There was a problem with this approach, however. Some of the dinosaurs switched sexes (the scientists had filled gaps in the dinosaur genome with frog DNA, and some frogs can change sex under extreme conditions). As a result, the dinosaurs began mating. When the electronic trackers did their periodic sweeps, they found all 100 dinosaurs each time. But in fact, there were more than 150 dinosaurs on the island.
In other words, the trackers were programmed to ask the wrong question. Instead of determining if 100 dinosaurs were present, they should have been asking, “How many dinosaurs are there?”
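To make the contrast concrete, here’s a minimal sketch of the two test styles in Python. All of the names (`all_present`, `census`, the tracker readings) are hypothetical illustrations, not code from any real tracking system:

```python
EXPECTED = 100

def all_present(tracker_readings, expected=EXPECTED):
    """The wrong question: 'Are all 100 dinosaurs accounted for?'
    The search stops as soon as the expected count is reached,
    so any extra animals go unnoticed."""
    seen = set()
    for animal_id in tracker_readings:
        seen.add(animal_id)
        if len(seen) == expected:
            return True  # stops counting here
    return False

def census(tracker_readings, expected=EXPECTED):
    """The right question: 'How many dinosaurs are there?'
    Counts everything, then compares against the expectation."""
    actual = len(set(tracker_readings))
    return actual == expected, actual

# 150 distinct animals on the island, not the expected 100.
readings = list(range(150))

print(all_present(readings))  # True  -- the wrong question still passes
print(census(readings))       # (False, 150) -- the right question fails loudly
```

The `all_present` check is cheaper, which is exactly why it’s tempting: it confirms what you expect instead of measuring what’s actually there.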
And this is probably the type of situation that Apple faced.
Its mapping software passed all of the original tests with flying colors. But the software QA testing tools weren’t set up to ask the right types of questions. Had Apple’s testers asked the right questions, those same tools could have surfaced the defects.
Stated somewhat differently, there are any number of tests Apple could have run that would have detected these bugs. The solution lies in asking the right questions – and in selecting a platform flexible and customizable enough to put those questions to the test.
Agree? Disagree? Comment down below.