Published in Testing Experience, April 2013, by Shlomo Mark, David Ben-Yishai & Guy Shilon, in collaboration with Testuff.

Abstract

In today’s world, a significant portion of software engineering development projects follow the Agile development methodology. One major disadvantage of the Agile methodology is incorporating testing. During a sprint, a specific number of small tasks is defined (the Sprint Backlog), and it is hard to determine whether the time allocated for testing will be enough both to test all the items and to fix all the errors and bugs found. In the sequential software engineering life cycle methodology, the testing phase either comes directly after the development phase in a well-scheduled and defined manner, or is a separate phase utilizing two different environments but still only executed after the development phase ends. A project based on the Agile methodology focuses on splitting the project into iterations, each defining its own set of tasks that together fulfill the requirements. In many cases, the unit tests are no more than another mandatory task within an iteration. In order to avoid schedule delays and other problems, creative testing must be given special emphasis. Adopting an Agile-based development methodology must also result in an ‘Agile’ way of thinking and planning with regard to the various testing phases. In order to meet the changes in requirements and match the flexibility of a project based on the Agile methodology, planned tests must be creative, allowing adaptation to the changes as well as to the schedule.

Introduction

Software life cycle models describe the different phases of the software’s life cycle and the order in which those phases are executed [1]. There are many software life cycle methodologies and each company should adopt its own, based on the needs of each individual project. The basic sequential pattern is as follows: requirements definition, analysis, design, implementation, and testing [1]. Each phase produces deliverables required by the next phase in the life cycle [2].

After formulation and approval of the SRS (software requirements specification) documents comes the ‘Testing Life Cycle’ [7] phase, in which we determine how the system will be reviewed and tested in a way that covers most of the possibilities for a system crash.

The testing life cycle is divided into three parts:

  • STP – Software Test Plan: This details the plan of action for the system tests – what we are going to test in the system and how we plan to test it (types of tests such as black box, white box, integration test, acceptance test).
  • STD – Software Test Description: This contains a detailed plan of the tests, including the test scenarios, input data for each scenario, execution method, and expected results.
  • STR – Software Test Report (final summary document): This contains all the test results from the testing phase, the testing team’s recommendations regarding moving the system to the next phase, and a summary of the errors and bugs found and their severity.

In sequential-based software development projects, system tests begin after development ends. The tests themselves are executed by a specialized testing team belonging either to the developing company or to an external software testing company [1, 2]. This presents a big problem. Because testing starts after the development process, the developers receive feedback from the testing team close to the final project deadline and, in the case of a severe bug (as determined by each Product Owner individually per project), this can cause a major delay and/or a hasty ‘quick fix’ which reduces quality. In many cases, delays in the development process will shorten the duration of the testing process because of the need to meet the deadline, meaning the product will not be completely covered in terms of testing.

These problems and more [3], such as unexpected requirement changes and the need for more commitment from the team towards the project, require a change in approach. The current answer is that more and more projects are implemented using Agile methodologies. The problem with Agile methodologies is that testing is not done in a fixed pattern at the end of the development phase, but instead after each package or integration of packages [4, 5].

‘Agile development life cycle’ [6] is a term covering several iterative and incremental software development methodologies. The Agile Manifesto states four core values:

  • Individuals and interactions over processes and tools,
  • Working software over comprehensive documentation,
  • Customer collaboration over contract negotiation, and
  • Responding to change over following a plan.

In Agile testing, there is a short feedback loop between the team members and the Product Owner which replaces sending official emails and talking to the business and developers via the Test Manager. Agile Testers are part of the cross-functional team, talk directly to developers, and have their say in all phases of the software development life cycle. With Agile testing you can influence developers to think about testability in their code and you can help understand and refine new requirements.

Testing is embedded in the definition of done in every sprint. Undone work cannot be released, and the classic corner-cutting of skipping testing cannot happen.

Testing tasks are part of a story, the same as any other type of task, and anybody can pick up and execute a testing task. Design experts and developers need to think about the testability of the product.

Agile testing drives development by refining acceptance criteria and questioning stories during iteration planning. In Agile testing, you can use pair testing together with the developer or another tester. You can also help developers design tests or create automated tests from which both of you will benefit. One of the first things an Agile tester does is to tune the acceptance criteria and write straightforward test cases which can be used by developers to drive their development – TDD.
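To make this concrete, here is a minimal sketch of how an acceptance criterion can be turned into a straightforward test case that drives development. It uses Python’s built-in unittest module; the login story, its criterion, and every function name are hypothetical illustrations rather than material from any specific project.

    import unittest

    # Hypothetical acceptance criterion for a user story:
    # "A registered user can log in with a valid password and is
    #  rejected with an error message for an invalid one."

    def login(username, password, user_db):
        """Minimal implementation, written *after* the tests below (TDD)."""
        if user_db.get(username) == password:
            return {"ok": True, "error": None}
        return {"ok": False, "error": "invalid credentials"}

    class TestLoginStory(unittest.TestCase):
        def setUp(self):
            self.users = {"alice": "s3cret"}

        def test_valid_password_logs_in(self):
            self.assertTrue(login("alice", "s3cret", self.users)["ok"])

        def test_invalid_password_is_rejected(self):
            result = login("alice", "wrong", self.users)
            self.assertFalse(result["ok"])
            self.assertEqual(result["error"], "invalid credentials")

    if __name__ == "__main__":
        unittest.main()

In a TDD flow, the two test methods would be written first, fail, and only then would the login function be implemented to make them pass.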

In the Agile development life cycle, early testing is highly recommended. Early testing means testing during the requirements-gathering and design phases. Agile testing starts with working on stories, evolves into test-driven development, and ends with automated acceptance testing on continuous integration for every new change committed to the source code repository.

There are several different methods for implementing Agile development in a project, such as XP (Extreme Programming), Scrum, etc. [6, 7]. While each of the Agile methods is unique in its specific approach, they all share a common vision and core values. They all fundamentally incorporate iteration and provide continuous feedback to successively refine and deliver a software system. They all involve continuous planning, continuous testing, continuous integration, and other forms of continuous evolution of both the project and the software. They are all lightweight, especially compared to traditional waterfall-style processes, and inherently adaptable. Most importantly, Agile methods all focus on empowering people to collaborate and make decisions together quickly and effectively.

Extreme Programming is a discipline of software development based on the values of simplicity, communication, feedback, and courage. It works by bringing the whole team together in the presence of simple practices, with enough feedback to enable the team to see where they are and to tune the practices to their unique situation. Extreme programming teams use a simple form of planning and tracking to decide what to do next and to predict when any desired feature set will be delivered. The team produces the software in a series of small, fully integrated releases that pass all the tests that the customer has defined. Extreme programmers work together in pairs and as a group, with simple design and obsessively tested code, improving the design continually so it is always just right for the current needs.

“Extreme Programming is obsessed with feedback, and in software development, good feedback requires good testing. Top XP teams practice ‘test-driven development’, working in very short cycles of adding a test, then making it work. Almost effortlessly, teams produce code with nearly 100 percent test coverage, which is a great step forward in most shops. (If your programmers are already doing even more sophisticated testing, more power to you. Keep it up, it can only help!) It isn’t enough to write tests: you have to run them. Here, too, Extreme Programming is extreme. These ‘programmer tests’, or ‘unit tests’, are all collected together, and every time any programmer releases any code to the repository (and pairs typically release twice a day or more), every single one of the programmer tests must run correctly. One hundred percent, all the time! This means that programmers get immediate feedback on how they’re doing. Additionally, these tests provide invaluable support as the software design is improved.” (www.xprogramming.com/what-is-extreme-programming)
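As a small illustration of this ‘all tests, all the time’ discipline, the sketch below shows a commit gate that runs every programmer test and blocks the release on any failure. It is written in Python for illustration and assumes, hypothetically, that the tests live under a tests/ directory.

    import subprocess
    import sys

    def run_all_unit_tests():
        """Discover and run every test under tests/; return the exit code."""
        result = subprocess.run(
            [sys.executable, "-m", "unittest", "discover", "-s", "tests"]
        )
        return result.returncode

    if __name__ == "__main__":
        code = run_all_unit_tests()
        if code != 0:
            print("Tests failing - commit rejected (100 percent, all the time).")
        sys.exit(code)  # a non-zero exit makes a commit hook abort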

Scrum [6] is an Agile software development model based on multiple small teams working in an intensive and interdependent manner. Scrum employs real-time decision-making processes based on actual events and information. This requires well-trained and specialized teams capable of self-management, communication and decision making. The teams in the organization work together while constantly focusing on their common interests.

Based on a simple V model [9], a Scrum project can have several levels of tests in every sprint. At the lowest level, developers perform unit tests on each sprint backlog item. At the next level up, the product backlog (user stories), testers in the team perform system tests. At the highest level, acceptance tests are performed by the customer against the project goals. After each sprint, the team additionally performs integration tests. Testers are requirements stakeholders, and all of the above tests are performed against a specification. Each product backlog item is implemented and tested, and the team has two choices for the testing schedule – during the implementation of the item or at the end of each sprint.
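A minimal sketch of these levels, again in Python’s unittest and with entirely illustrative class, test, and value names, might group the tests so that each level can be run at its own point in the sprint:

    import unittest

    class UnitLevelTests(unittest.TestCase):
        """Run by developers while implementing a sprint backlog item."""
        def test_price_calculation_in_cents(self):
            self.assertEqual(2 * 999, 1998)

    class SystemLevelTests(unittest.TestCase):
        """Run by testers against a product backlog item (user story)."""
        def test_order_total_includes_shipping(self):
            self.assertEqual(1998 + 500, 2498)  # prices kept in cents

    class AcceptanceLevelTests(unittest.TestCase):
        """Run with the customer against the project goals."""
        def test_customer_can_complete_checkout(self):
            self.assertTrue(True)  # placeholder for an end-to-end scenario

    def suite_for(level):
        return unittest.defaultTestLoader.loadTestsFromTestCase(level)

    if __name__ == "__main__":
        # For example, run only the unit level during implementation:
        unittest.TextTestRunner().run(suite_for(UnitLevelTests))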

Problems in the Agile testing process

From our experience working with several different companies, the main problems in this development model tend to result from an inability to perform a focused series of tests at the end of the project. It is necessary to test each unit (sprint), and every time a unit is integrated into the product. Another key problem is the multiplication of similar tests: we often run the same tests more than once (once per unit, across multiple units), mostly during integration. It is also possible that, after a sprint and the integration of a unit, the Product Owner will decide to change the nature of the unit or its functionality, resulting in further repeated testing.

Because work is done in separate groups, with each group working on a different task in the sprint, it is possible for one team to identify and solve a bug while another team encounters and starts to fix a similar bug, unaware that the first team has already found a solution, resulting in a duplication of effort.

In an ideal world, each sprint would be divided equally into a development (coding) phase and a testing phase. However, in reality, development (coding) usually takes more than half the sprint schedule, and, in addition, during a sprint it is possible to encounter problems such as not meeting the deadline and/or errors and bugs that take time to fix. This further reduces the days designated for testing, which can result in the release of an untested (or not fully tested) unit or, at best, the task being delayed to the next sprint.

Implementing the Agile methodology is usually done at the start of development, when the working methods are better defined and it is easier to introduce automated testing. These tests are simpler and cause fewer difficulties in communicating bugs and information between the groups, making it simple to ‘commit’ (run) the automated tests and wait for the results. As a result, development teams incorporate automated tests into the development process and come to believe they are covering all the needed tests. In such a scenario, the problem lies in incorporating manual tests, which are the primary source of bug detection, alongside the automated ones. Manual tests require far better communication between the development team and the testing teams, so much so that you might consider them to be a single team.
This is a tricky process, for both professional and personal reasons among the stakeholders. In addition, manual tests require the application/program to be ‘ready for release’; otherwise these tests would fail immediately. This level is hard to reach during the early phases of development and is usually attained at a later phase. There is also a basic methodological problem: there is still no ‘best practice’ method that clearly shows how to combine manual tests into an Agile project. There are some guidelines and accumulated knowledge, but the process has a long way to go before gaining real, proven experience in the field.

“Agile projects have some specific problems for testers that go beyond a lack of comprehensive documentation and the impracticality of planning much more than a couple of weeks ahead.” [10]

Possible creative solutions

Allot a set amount of time at the end of each sprint for unit testing. If errors arise during the sprint that threaten to overrun the schedule, consider dropping the current task and proceeding to the next in order to avoid ‘eating away’ your allotted testing time. Keep track of errors and bugs, as well as the time it takes to solve them. Create and maintain a knowledge base of this history to help identify, in advance, the kinds of big errors that would cause a delay in the sprint.
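Such a knowledge base can start out very simply. The sketch below, in Python with hypothetical names and an assumed eight-hour delay threshold, logs the fix time per bug category and flags the categories that have historically endangered the sprint schedule:

    from collections import defaultdict

    class BugKnowledgeBase:
        def __init__(self, delay_threshold_hours=8.0):
            self.fix_hours = defaultdict(list)
            self.delay_threshold_hours = delay_threshold_hours

        def record(self, category, hours_to_fix):
            self.fix_hours[category].append(hours_to_fix)

        def risky_categories(self):
            """Categories whose average fix time threatens the schedule."""
            return [
                cat for cat, hours in self.fix_hours.items()
                if sum(hours) / len(hours) >= self.delay_threshold_hours
            ]

    kb = BugKnowledgeBase()
    kb.record("concurrency", 12.0)
    kb.record("concurrency", 10.5)
    kb.record("ui-layout", 1.5)
    print(kb.risky_categories())  # -> ['concurrency']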

Avoid integration testing on small, non-complex items from the sprint backlog and focus on integration tests for the whole unit. This can be made easier by focusing on test-driven development (TDD) [12], which improves code quality and reduces the chance of bugs later on.
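As an illustration of testing the whole unit rather than each small item separately, the sketch below drives a single integration test through a hypothetical three-step pipeline (parse, validate, store); all three components are invented for the example:

    import unittest

    def parse(raw):
        name, qty = raw.split(",")
        return {"name": name.strip(), "qty": int(qty)}

    def validate(item):
        if item["qty"] <= 0:
            raise ValueError("quantity must be positive")
        return item

    def store(item, db):
        db.append(item)

    class WholeUnitIntegrationTest(unittest.TestCase):
        def test_order_flows_through_the_whole_unit(self):
            db = []
            store(validate(parse("widget, 3")), db)
            self.assertEqual(db, [{"name": "widget", "qty": 3}])

    if __name__ == "__main__":
        unittest.main()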

Consider spending resources on training your employees. This reduces the chances of their ‘missing’ something while testing and also expands their horizons to include different testing methods and ideas. Invest in a proper system for tracking bugs and bug fixes. This significantly reduces ‘double testing’, allowing you to keep track properly of what has been tested and fixed, and what is as yet untested.
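One simple way such a tracking system reduces duplicated effort is a similarity check before a new bug is filed. The sketch below uses Python’s standard difflib; the in-memory bug list and the 0.6 similarity cutoff are illustrative assumptions, not a real tracker API:

    from difflib import SequenceMatcher

    open_bugs = [
        "Login page crashes on empty password",
        "Report export truncates long names",
    ]

    def similar_existing_bug(title, cutoff=0.6):
        """Return an already-reported bug resembling this title, if any."""
        for existing in open_bugs:
            similarity = SequenceMatcher(None, title.lower(), existing.lower()).ratio()
            if similarity >= cutoff:
                return existing
        return None

    print(similar_existing_bug("Login crashes when password is empty"))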

A major factor in causing a project to fail and/or struggle is the desire to finish as much as possible in the least amount of time. Minimize the number of tasks for each delivery process and lower your expectations for each delivery. If everything is functioning well after the integration tests, add additional requirements. Assess each task with regard to two issues: its complexity and its difficulty level (risk factor). Based on this, multiply the allotted time by the complexity factors to obtain a ‘safety margin’ for each sprint. The complexity factors should be determined by each company individually, based on their previous knowledge of similar projects, their team’s level of expertise, and knowledge of the subject itself.
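As a worked example of this safety margin, suppose (purely hypothetically) an eight-hour task judged moderately complex (factor 1.3) and fairly risky (factor 1.5); multiplying suggests scheduling roughly 15.6 hours for it in the sprint:

    def safety_margin_hours(allotted_hours, complexity_factor, risk_factor):
        """Padded estimate for one task; factors come from past projects."""
        return allotted_hours * complexity_factor * risk_factor

    # 8 hours x 1.3 complexity x 1.5 risk = 15.6 hours (illustrative factors)
    print(round(safety_margin_hours(8, 1.3, 1.5), 1))  # 15.6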

Make sure the requirements are broken down into the smallest possible tasks. While this creates more tests, they are smaller and allow for more extensive testing per task, reducing the chance of faulty/buggy items and, thus, reducing the likelihood of a bug during integration that causes schedule delays.

Summary

While it is impossible to create a fixed pattern for testing in an Agile project, due to the nature of Agile itself, taking into consideration the possible solutions provided in this article significantly reduces the chances of a critical, time-consuming bug occurring during development. As shown, a non-Agile project has a specific time period for testing, usually after the development phase, which allows for easier planning and execution. Agile projects require constant changes to the project schedule (mostly due to requirement changes by the customer) and, thus, we have to adapt our testing patterns and behavior accordingly. Implementing the solutions suggested here significantly reduces the probability of a module being released without testing and/or delayed to a subsequent sprint.

References

  1. Stephen R. Schach. Object-Oriented and Classical Software Engineering. Eighth Edition.
  2. Roger S. Pressman. Software Engineering: A Practitioner’s Approach. Sixth Edition.
  3. Kai Petersen, Claes Wohlin, and Dejan Baca. The Waterfall Model in Large-Scale Development.
  4. Dean Leffingwell. Scaling Software Agility: Best Practices for Large Enterprises.
  5. Victor Szalvay (2004). An Introduction to Agile Software Development.
  6. Ken Schwaber (2004). Agile Project Management with Scrum.
  7. Robert C. Martin (2002). Agile Software Development: Principles, Patterns, and Practices.
  8. Kit, E. (1995). Software Testing in the Real World. Wiley.
  9. Nabil Mohammed Ali Munassar and A. Govardhan. A Comparison Between Five Models of Software Engineering. IJCSI International Journal of Computer Science Issues, Vol. 7, Issue 5, September 2010.
  10. David Talby, Arie Keren, Orit Hazzan, and Yael Dubinsky. Agile Software Testing in a Large-Scale Project. IEEE Software, July/August 2006.
  11. James Lyndsay (2007). Testing in an Agile Environment.
  12. Kent Beck. Test Driven Development: By Example.

About the authors

Shlomo Mark is a researcher at the Negev Monte Carlo Research Center (NMCRC) and the Software Engineering Department, both at the Sami Shamoon College of Engineering (SCE), Be’er-Sheva, Israel.

David Ben-Yishai and Guy Shilon are research-assistants at the Negev Monte Carlo Research Center (NMCRC) and the Software Engineering Department, both at the Sami Shamoon College of Engineering (SCE), Be’er-Sheva, Israel.

This article is a collaboration between the three authors and Testuff Ltd., a company that develops software testing SaaS solutions.