Automated tests are of utmost importance in agile projects. And automating tests means that those tests are developed, in much the same way as any other software. This is a fundamental difference from manual testing, where tests are executed based on test plans in which those tests are described (sometimes well, sometimes not so well). Automated tests are typically much more than just recording and playing back some fixed scenarios. They are meant to provide a setting that enables the team to easily and quickly test the different functions of an application with many different parameters. Ideally this is supported by an agile test tool that already offers corresponding functionality. But almost always there will be a need to implement individual extensions for such a framework to be able to test certain aspects of a concrete application.
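To illustrate the idea of testing one function with many different parameters, here is a minimal, hypothetical sketch in Python. The function `apply_discount` merely stands in for some function of the application under test; the point is that the scenario is defined once and driven by a table of parameters.

```python
# Hypothetical sketch: one parameterized test routine, driven by a table
# of (input, expected) values. apply_discount is a stand-in for a
# function of the application under test.
def apply_discount(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

# The scenario is defined once; only the parameters vary.
CASES = [
    (100.0, 0, 100.0),   # no discount
    (100.0, 25, 75.0),   # a quarter off
    (200.0, 10, 180.0),  # ten percent off
]

def test_apply_discount():
    for price, percent, expected in CASES:
        result = apply_discount(price, percent)
        assert result == expected, f"{price}/{percent}: got {result}"

test_apply_discount()
```

Adding another scenario is then just another row in the table, which is exactly the kind of extension an agile team needs to make quickly.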
Test automation means developing software that under some circumstances might reach a size similar to – or maybe even bigger than – the Software under Test (SuT).
Automated tests are software, and software requires design – ideally a good one. But ever since this article, written by my colleague Uwe, I simply do not dare to speak of a kind of test architecture in this context ;-). On the other hand, I have been waiting for a long time for a good opportunity to weave the following short story into some blog entry, and now I will just squeeze it in here :-).
One day an elephant is walking through the jungle when he sees an ape. The ape is standing below a tree, jumping to reach a banana that is hanging there just out of reach. “What are you doing there?” asks the elephant. “I am jumping to get the banana,” says the ape, still jumping. “Yes, but ape, there is a long stick lying on the ground – why don’t you use it?!” says the elephant. “I have no time,” answers the ape, “I have to jump!”
Yes, sure, the story has a weak point, as the ape could just as well climb the tree, but hopefully you get the idea. Test automation is a software project, and it might be a big one. Rushing ahead here without considering a proper design will – with high probability – cause big problems later on in maintaining and extending those tests. This does not mean a huge design upfront; rather, the automated tests must be designed at the same pace as the SuT itself.
What good design for automated tests means depends of course on the concrete use case and could easily fill one of the next articles in this blog :-). But it is certainly not the worst idea to abstract the tests as much as possible from the technologies used in the SuT. This way the tests as such stay stable even if parts of the implementation change. This approach is very well supported by agile testing tools that are based on the principle of Keyword-Driven Testing.
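The core of Keyword-Driven Testing can be sketched in a few lines. In this hypothetical example the test case is just a list of technology-neutral steps; all names (`LoginKeywords`, `run_test`, and so on) are invented for illustration. If the SuT's UI toolkit were replaced, only the keyword implementations would change, while the test itself stays stable.

```python
# Hypothetical sketch of a keyword-driven layer. The test case is a plain
# list of (keyword, arguments) steps and knows nothing about the SuT's
# technology; only the keyword implementations are technology-specific.
class LoginKeywords:
    """Keyword implementations against one concrete SuT technology."""

    def __init__(self):
        self.state = {"logged_in": False, "user": None}

    def open_application(self):
        self.state = {"logged_in": False, "user": None}

    def login(self, user, password):
        # In a real framework this would drive the UI or an API.
        self.state["logged_in"] = password == "secret"
        self.state["user"] = user if self.state["logged_in"] else None

    def verify_logged_in(self, user):
        assert self.state["logged_in"] and self.state["user"] == user

def run_test(keywords, steps):
    """Generic runner: dispatch each step to the matching keyword."""
    for name, *args in steps:
        getattr(keywords, name)(*args)

# The test itself: stable and technology-neutral.
LOGIN_TEST = [
    ("open_application",),
    ("login", "alice", "secret"),
    ("verify_logged_in", "alice"),
]

run_test(LoginKeywords(), LOGIN_TEST)
```

Tools such as Robot Framework are built around exactly this principle, with the keyword implementations provided by libraries that can be exchanged independently of the test cases.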
Good design of the automated tests will also help to find those problems in test automation early that can only be solved by changes in the SuT. Such interdependencies are real, and they can be resolved much more easily in agile teams than in a classical project setup where responsibilities for implementation and quality control are strictly separated.
In summary, the effort needed for properly designed automated tests is time well spent that will pay off in the end.