Published in Professional Tester, December 2012.

Make what you produce reusable – and reuse it

Dani Almog introduces his inventory-driven test automation approach

Every PT reader knows only too well how elusive truly successful test automation can be. Several aspects of both strategy and implementation demand a great deal of attention and resources, and many of these aspects have given rise to overall approaches built around them. Here I will conceptualize and present mine. It is based on achieving reusability of test artifacts and focuses on their correct storage and on their integration with the test automation infrastructure in operation. I call the approach inventory-driven test automation (IDTA).

When I try to explain the concept and benefit of maximizing reusability, I am sometimes asked whether I am suggesting that testers have usually already done what they are trying to do, so that doing something new is unnecessary. This is almost correct: most of the substance of any new contribution made by testing lies in the context and interpretation. Consider your testing activities in a typical day: how much of your action, operation, thinking and doing is really new or unique? When we design a test case, for example, we use our memory and past experience, rearranging existing knowledge into a new pattern. In other words we reuse, but we do it in a way that is nonsystematic, unmeasurable and probably inefficient.

If you see a tester repeat the same action (eg logging in) five times during test execution you will seek a way to automate it. In the same way, you should not allow your testers to repeat the same actions many times during any of their other activities. Our goal should be to make whatever we produce as reusable as possible, and to reuse it as much as possible, so that the repetitive work carried in our heads is minimized and our heads are freed for innovation.
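To make this concrete, here is a minimal sketch of that login action extracted into a single reusable routine, written against a Selenium-style WebDriver interface; the element identifiers are purely illustrative and not taken from any real application.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

/** One reusable login action instead of five hand-typed repetitions. */
public final class LoginSteps {

    private LoginSteps() { }

    /** Fills the login form and submits it; the element ids are illustrative. */
    public static void logIn(WebDriver driver, String username, String password) {
        driver.findElement(By.id("username")).clear();
        driver.findElement(By.id("username")).sendKeys(username);
        driver.findElement(By.id("password")).clear();
        driver.findElement(By.id("password")).sendKeys(password);
        driver.findElement(By.id("loginButton")).click();
    }
}

Once such a routine exists, every script that needs a logged-in user calls it rather than re-implementing the steps.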

In order to make something reusable, the first requirement is to store it correctly. This article explains the IDTA approach to storage for the artifact types most test automators consider necessary; a second article will move on to reusing the stored artifacts.

The test requirements inventory

This is derived from the customer/product requirements. These are decomposed as far as possible to identify the things testing will need, arranged in a tree structure (see figure 1). Of course, as with any analysis, it is not usually possible to be confident of getting this right the first time; items will need to be added or restructured as testing proceeds. But at an early stage apply the principle “if in doubt, put it in”: the more levels are created, the more reusable the test artifacts will be.

Figure 1: part of the test requirements inventory tree for a login requirement
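As a sketch of how such a tree might be stored, here is a minimal, illustrative structure; the node identifiers and the decomposition shown are assumptions for the purpose of the example, not a reproduction of figure 1.

import java.util.ArrayList;
import java.util.List;

/** A node in the test requirements inventory: requirements decomposed into a tree. */
public class TestRequirementNode {

    private final String id;
    private final String description;
    private final List<TestRequirementNode> children = new ArrayList<>();

    public TestRequirementNode(String id, String description) {
        this.id = id;
        this.description = description;
    }

    public TestRequirementNode addChild(String id, String description) {
        TestRequirementNode child = new TestRequirementNode(id, description);
        children.add(child);
        return child;
    }

    /** Illustrative decomposition of a login requirement ("if in doubt, put it in"). */
    public static TestRequirementNode loginExample() {
        TestRequirementNode login = new TestRequirementNode("TR-LOGIN", "User can log in");
        TestRequirementNode devices = login.addChild("TR-LOGIN-DEV", "Device interfaces");
        devices.addChild("TR-LOGIN-DEV-WEB", "Web browser");
        devices.addChild("TR-LOGIN-DEV-MOB", "Mobile client");
        TestRequirementNode auth = login.addChild("TR-LOGIN-AUTH", "Credential validation");
        auth.addChild("TR-LOGIN-AUTH-OK", "Valid username and password");
        auth.addChild("TR-LOGIN-AUTH-LOCK", "Account locked after repeated failures");
        return login;
    }
}

Adding a doubtful item simply means adding another node, which costs little and increases the number of levels available for reuse.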

The test case inventory

The entries in this inventory are created by aggregating the test requirements, again to the greatest extent possible. For example: an authorized user can use the application via any combination of the device interfaces and technologies in the test requirements inventory. Where the development methodology is based on business stories (as in agile, for example) it may be possible for testers and developers to share work. The testers examine the stories produced by the developers and use them to make both the business story inventory and the test requirements inventory from which it is built more complete. The developers may do the same. It is neither necessary nor desirable for the stories used by testers and developers to be exactly equivalent, but the work of each group can help the other to find things that have been missed. The developers’ business stories, based on the product’s functions and design, become far more complex than the testers’, which are based on the product’s wider attributes and the consequent needs of testing. For example, the simple business story above leads to the design of a login dialog (figure 2).
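Before turning to that login example, here is a minimal sketch of how one such aggregated entry could be expanded into concrete test case entries, one per combination of device interface and technology drawn from the test requirements inventory; the values used are illustrative only.

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

/** Sketch: one aggregated test requirement expands into many test case entries. */
public class TestCaseInventoryBuilder {

    /** "An authorized user can use the application via any combination of device
     *  interfaces and technologies" becomes one entry per combination. */
    public static List<String> loginTestCases(List<String> deviceInterfaces,
                                              List<String> technologies) {
        List<String> testCases = new ArrayList<>();
        for (String device : deviceInterfaces) {
            for (String technology : technologies) {
                testCases.add("TC: authorized login via " + device + " over " + technology);
            }
        }
        return testCases;
    }

    public static void main(String[] args) {
        // Values are illustrative; real ones would come from the test requirements inventory.
        List<String> devices = Arrays.asList("web browser", "mobile client", "call-centre desktop");
        List<String> technologies = Arrays.asList("HTTP", "HTTPS");
        for (String testCase : loginTestCases(devices, technologies)) {
            System.out.println(testCase);
        }
    }
}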

Figure 2: example design for login dialog



Considering this immediately expands and complicates the business story:

  • If the user clicks the “Log in” button and the “Username:” or “Password:” field is empty, a dialog is displayed to the effect that both fields must be completed
  • If the username is not correct for an existing account, a dialog is displayed to that effect
  • If the combination of username and password is not correct for an existing account, a dialog is displayed to the effect that the password is incorrect. If this dialog is displayed three times within any 30-minute period the account is locked (see BSnn user attempts to log on to locked account)
  • If the combination of username and password is correct for an existing account but that account is suspended, a dialog is displayed to that effect, naming the administrator responsible for that account and instructing the user to contact him or her
  • If the password for the account was last changed more than 60 days ago, a dialog is displayed to that effect which allows the user to change the password (see BSmm user changes password)

…and so on. It’s easy to see that as soon as design details are taken into account, the complexity of business stories explodes. That is why, in many classical or theoretical testing approaches, it is considered best to analyse “pure” requirements. But test automation removes that option: the tests need to be built on the designs or they will not run. Figure 3, a set of coverage matrices with only one test and one defect picked out, illustrates this: the small red arrows are needed to know which business requirement the defect impacts, but also to know which business stories, test cases and test execution sets may need to be re-executed, modified and/or extended to assure against that defect after its repair and against other possible related defects. Now consider the situation where the test cases are themselves made up of a complex set of scripts and related artifacts which, for both better effectiveness and efficiency of testing, we want to reuse wherever appropriate. More, and more complex, arrows are needed to know that. Managing traceability – that is, relating tests to business requirements – thus becomes very difficult. This is the fundamental problem with which I hope IDTA will help.

Figure 3: classical testing coverage matrices
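A minimal sketch of what explicit traceability links could look like follows; the identifiers are hypothetical and the structure is a drastic simplification of the matrices in figure 3, but it shows the two questions the red arrows answer: which requirement a defect impacts, and which tests may need re-running once it is repaired.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Sketch of explicit traceability links: test case -> business story -> requirement. */
public class TraceabilityIndex {

    private final Map<String, String> storyOfTestCase = new HashMap<>();
    private final Map<String, String> requirementOfStory = new HashMap<>();

    public void link(String testCaseId, String storyId, String requirementId) {
        storyOfTestCase.put(testCaseId, storyId);
        requirementOfStory.put(storyId, requirementId);
    }

    /** Which business requirement does a defect found by this test case impact? */
    public String requirementImpactedBy(String testCaseId) {
        return requirementOfStory.get(storyOfTestCase.get(testCaseId));
    }

    /** Which test cases may need re-execution once a defect against a requirement is fixed? */
    public List<String> testCasesFor(String requirementId) {
        List<String> result = new ArrayList<>();
        for (Map.Entry<String, String> entry : storyOfTestCase.entrySet()) {
            if (requirementId.equals(requirementOfStory.get(entry.getValue()))) {
                result.add(entry.getKey());
            }
        }
        return result;
    }

    public static void main(String[] args) {
        TraceabilityIndex index = new TraceabilityIndex();
        index.link("TC-LOGIN-EMPTY-FIELDS", "BS-LOGIN", "REQ-AUTH");
        index.link("TC-LOGIN-BAD-PASSWORD", "BS-LOGIN", "REQ-AUTH");
        System.out.println(index.requirementImpactedBy("TC-LOGIN-BAD-PASSWORD"));
        System.out.println(index.testCasesFor("REQ-AUTH"));
    }
}

Once test cases are composed of many reused scripts and artifacts, each of those parts needs links of its own, which is exactly where the bookkeeping becomes unmanageable without an inventory.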

Unit testing as a source of testing artifacts

The unit tests created by developers could be a powerful resource: they represent a large amount of work, with good defect-finding potential, which could be automated easily. But because this work is owned by development, it is doomed to be used only once. I hope that one day someone will invent a method of using a unit test in two modes: isolated, enabling it to test an action regardless of its integration maturity; then integrated, enabling it to detect defects of all other kinds.

To achieve this we need a mechanism to transform the circle (test in isolated mode) shown in figure 4 into the square (connected test). Here the arrows represent assertions: when the test is isolated, they are injected from external input (by the developer, perhaps using a unit testing framework). When it is integrated, the assertions are injected by tests of other connected components. Nature and society provide us with many examples of such mechanisms.

Figure 4: unit test artifact transformation
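No such mechanism exists yet, so the following is purely speculative: a sketch of a unit test whose expected results are supplied from outside, either hard-coded by the developer (isolated mode) or provided by the test of a connected component (integrated mode). All the class and method names are invented for the example, and JUnit is assumed only as the runner.

import static org.junit.Assert.assertEquals;

import org.junit.Test;

/** Speculative sketch of a unit test usable in two modes. */
public class LoginValidatorTest {

    /** Stand-in for the unit under test. */
    static boolean accepts(String username, String password) {
        return "dani".equals(username) && "secret".equals(password);
    }

    /** The injected assertion: a source of expected results, whoever provides it. */
    interface ExpectedResults {
        boolean expectedFor(String username, String password);
    }

    /** The test body is identical in both modes; only the oracle changes. */
    private void check(ExpectedResults oracle, String username, String password) {
        assertEquals(oracle.expectedFor(username, password), accepts(username, password));
    }

    @Test
    public void isolatedMode() {
        // Isolated: the developer hard-codes the expectations.
        ExpectedResults developerSupplied =
                (u, p) -> u.equals("dani") && p.equals("secret");
        check(developerSupplied, "dani", "secret");   // expected to be accepted
        check(developerSupplied, "dani", "wrong");    // expected to be rejected
    }

    // Integrated mode would call check(...) with an ExpectedResults implementation
    // injected by the test of a connected component, e.g. an account service.
}

The point is that the test body never changes; only the source of its assertions does.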

Structural approach to test automation

Figure 5 shows a classical structural approach to test case definition, intended to enable maximum control over every aspect of the testing process. The structured test case has five elements:

  1. Factors: preset variables whose values are controlled during the tests so that the test can start at a pre-defined initial state
  2. Test flows: a list of activities and their relationships which describes the executable path exercised by the test case
  3. Dynamic input: a component representing interaction with external entities
  4. Verification calls: external investigator entities that observe and document selected states and occurrences, represented by data items, in order to anticipate, control and document defined behaviours of a test case execution
  5. Outputs: the actual outcome of the test case, represented by data items.

IDTA proposes that at least three of these elements are good candidates for independent
storage and reuse.

Figure 5: classical structural approach to test case definition
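As a sketch, the five elements could be stored together in a structure like the following; the field and method names are illustrative, and the comments mark the three elements IDTA regards as candidates for independent storage and reuse.

import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

/** Sketch of the five-element structured test case. */
public class StructuredTestCase {

    // 1. Factors: preset variables fixing the initial state (reuse candidate).
    private final Map<String, String> factors = new LinkedHashMap<>();

    // 2. Test flow: the ordered activities making up the executed path.
    private final List<String> testFlow = new ArrayList<>();

    // 3. Dynamic input: data injected from external entities (reuse candidate).
    private final Map<String, String> dynamicInput = new LinkedHashMap<>();

    // 4. Verification calls: external observers acting as test oracles (reuse candidate).
    private final List<String> verificationCalls = new ArrayList<>();

    // 5. Outputs: the actual outcome, recorded as data items.
    private final Map<String, String> outputs = new LinkedHashMap<>();

    public StructuredTestCase withFactor(String name, String value) { factors.put(name, value); return this; }
    public StructuredTestCase withStep(String step)                 { testFlow.add(step); return this; }
    public StructuredTestCase withInput(String name, String value)  { dynamicInput.put(name, value); return this; }
    public StructuredTestCase withVerification(String call)         { verificationCalls.add(call); return this; }

    public void recordOutput(String name, String value)             { outputs.put(name, value); }
}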

Environment and factors storage

This could be manifested as a set of “buildup” procedures ready to be executed. That would require a tool that can build a test environment. The work now being done on virtualization, in particular cloud computing provisioning, shows that this is possible, and I expect commercial offerings to begin to emerge soon. From there it will not be a large step to test environments that prefabricate and position themselves at the beginning of the required test flow – with the tested application already installed and ready for use.
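A minimal sketch of such a stored buildup procedure, assuming no particular virtualization or provisioning tool, might look like this; each step would in practice call out to whatever tool actually builds the environment.

import java.util.ArrayList;
import java.util.List;

/** Sketch of a "buildup" procedure: an executable recipe that prepares a test
 *  environment (provision machine, install application, load baseline data)
 *  so the test can start at its required initial state. Names are illustrative. */
public class EnvironmentBuildup {

    /** A single reusable buildup step, e.g. one call to a provisioning or deployment tool. */
    public interface BuildupStep {
        void apply() throws Exception;
    }

    private final String name;
    private final List<BuildupStep> steps = new ArrayList<>();

    public EnvironmentBuildup(String name) {
        this.name = name;
    }

    public EnvironmentBuildup then(BuildupStep step) {
        steps.add(step);
        return this;
    }

    /** Runs the whole recipe in order; each step drives an external tool or API. */
    public void execute() throws Exception {
        for (BuildupStep step : steps) {
            step.apply();
        }
        System.out.println("Environment '" + name + "' is ready for the test flow.");
    }
}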

Injected data storage

Since many test cases will need to inject data into the testing flow, this data ought to be stored and maintained, attached and coupled to the actual test case that carries the internal data flow structure. This is already widely done for “data-driven testing”. IDTA suggests developing a new, more orderly mechanism to first store and later inject and maintain all the external data a test case will use.
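A sketch of such a mechanism, reduced to its essentials: the external data is stored per test case and handed to the test at execution time rather than being hard-coded into scripts. The structure and names are illustrative only.

import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

/** Sketch of an injected-data store: the external data a test case will use is
 *  kept and maintained next to the test case, not inside its scripts. */
public class InjectedDataStore {

    /** testCaseId -> ordered list of data rows (column name -> value). */
    private final Map<String, List<Map<String, String>>> rowsByTestCase = new LinkedHashMap<>();

    public void addRow(String testCaseId, Map<String, String> row) {
        rowsByTestCase.computeIfAbsent(testCaseId, id -> new ArrayList<>()).add(row);
    }

    /** The rows to inject when the given test case runs (empty if none were stored). */
    public List<Map<String, String>> rowsFor(String testCaseId) {
        List<Map<String, String>> rows = rowsByTestCase.get(testCaseId);
        return rows == null ? new ArrayList<>() : rows;
    }
}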

Verification calls storage

A verification calls inventory is based on a strategy of separating the actual VC from the test case containing it and viewing it as a snapshot focused on identified data items: a manifestation of an external, independent “test oracle”. Figure 6 shows an example structure that could be used to store the actual business knowledge of the development organization.

Figure 6: verification calls storage architecture
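As a sketch, a verification call stored independently of any test case could be as simple as a named check over a snapshot of identified data items; the example entry and its identifiers are invented for illustration.

import java.util.Map;

/** Sketch of a verification call kept apart from the test cases that use it:
 *  a reusable, named check over identified data items, acting as an external oracle. */
public interface VerificationCall {

    /** A short, stable identifier under which the call is stored in the inventory. */
    String id();

    /** Inspects a snapshot of the observed data items and reports pass/fail. */
    boolean verify(Map<String, String> observedDataItems);
}

/** Illustrative inventory entry: "a locked account must never reach the home page". */
class LockedAccountStaysOut implements VerificationCall {
    public String id() { return "VC-LOGIN-LOCKED"; }
    public boolean verify(Map<String, String> observed) {
        return !"home".equals(observed.get("currentPage"))
                || !"locked".equals(observed.get("accountState"));
    }
}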

The business artifact inventory

Figure 3 shows the test case inventory used as a building block for the storage and maintenance of test cases. Now we come to the desired ability to reuse a test case as part of a different scenario. Most test automation tools do this by storing a duplicate of the test case’s definition separately. The IDTA approach is instead to store an additional “amalgamation level” of artifacts which relate to the test case’s business context, facilitating easy access to and reuse of test artifacts. The full context scripts are stored, so that at reuse time full control of all operational and functional aspects is available. The business artifact inventory is shown in context with the other IDTA inventory hierarchies in figure 7. This example demonstrates the reuse opportunity of each element: all inventory items can be used as building blocks for higher hierarchies. An element is not duplicated; rather, its connect, call, relate and inherit properties are maintained.

Figure 7: IDTA testing inventories
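A sketch of an inventory item at that amalgamation level follows; the relation names mirror the connect, call, relate and inherit properties mentioned above, and everything else is illustrative.

import java.util.ArrayList;
import java.util.List;

/** Sketch of an item at the "amalgamation level": higher-level artifacts are
 *  built by referencing existing inventory items, never by copying them. */
public class InventoryItem {

    public enum Relation { CONNECT, CALL, RELATE, INHERIT }

    /** A typed reference to another inventory item, identified by its id. */
    public static class Link {
        public final Relation relation;
        public final String targetId;
        public Link(Relation relation, String targetId) {
            this.relation = relation;
            this.targetId = targetId;
        }
    }

    private final String id;
    private final List<Link> links = new ArrayList<>();

    public InventoryItem(String id) {
        this.id = id;
    }

    public String id() {
        return id;
    }

    /** Reuse an existing item as a building block of this one, without duplicating it. */
    public InventoryItem use(Relation relation, String targetId) {
        links.add(new Link(relation, targetId));
        return this;
    }
}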

IDTA and agile

Enabling the collection of all artifacts developed along the way is an essential aim of IDTA. By developing the test cases (including unit tests) in parallel with product development, both derived from the same business stories, the testing artifacts are ready at the end of each sprint to accompany the release into its next integration and implementation stage: a sprint regression test package is formed to accompany the completed feature. These artifacts are then combined with the existing inventories (see figure 8).

Figure 8: sprint testing artifacts
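As a rough sketch, forming the sprint regression package can be as simple as selecting every stored artifact whose business story was developed in the sprint; the structure and identifiers are illustrative only.

import java.util.ArrayList;
import java.util.List;
import java.util.Set;

/** Sketch: at the end of a sprint, every stored test artifact belonging to one of the
 *  sprint's business stories is gathered into a regression package that travels with
 *  the feature into its next integration stage. */
public class SprintRegressionPackage {

    /** Minimal view of a stored artifact: its id and the business story it belongs to. */
    public static class Artifact {
        final String id;
        final String storyId;
        public Artifact(String id, String storyId) {
            this.id = id;
            this.storyId = storyId;
        }
    }

    public static List<Artifact> build(List<Artifact> inventory, Set<String> sprintStoryIds) {
        List<Artifact> regressionPackage = new ArrayList<>();
        for (Artifact artifact : inventory) {
            if (sprintStoryIds.contains(artifact.storyId)) {
                regressionPackage.add(artifact);
            }
        }
        return regressionPackage;
    }
}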


About the writer:
Dani Almog is a lecturer and researcher in software quality and testing at Ben-Gurion University. He was previously director of test automation at Amdocs.
All his academic testing courses are developed and presented in co-operation with Testuff, which provides the advanced SaaS test management tool used by students. For more information about Testuff visit www.testuff.com.