When reading articles about Agile Testing the explanation often goes like this: Agile Testing means testing in an agile project environment. This is followed by a lengthy description of what agile is (or is not), with hardly any further word on the testing aspect as such. Well, even though this might very well capture the essence of Agile Testing, I would like to try coming up with a somewhat more concrete explanation. For this I will compare Agile Testing with a classical QA approach. Luckily – well, not sure if I should really say so – I have been working in different projects applying those testing strategies or some kind of mixture of them.
Probably I could come up with an endless list of bullet points discussing this topic. But I would like to focus on the following three aspects:
- Responsibility for Quality
- Automating Tests
- Documenting Tests & Test Management
So let’s take a look at those before coming up with some kind of conclusion :-).
Responsibility for Quality
Let’s start with a somewhat exaggerated statement:
Agile Testing means the team is responsible for the quality of the software. In a classical QA approach responsibilities are often moved back and forth between developers, testers and QA-managers.
This whole “the team is responsible” thing might sound a bit worn out, but I believe it still works better than other approaches. There is absolutely nothing wrong with having dedicated – or let’s better say well-trained – testers. But they should be part of the team and not separated from it. This means sitting together with the team (which is often referred to as the developer-team), attending the same meetings (e.g. daily standups) and attending discussions with the customer about the functionality. Especially the latter point will help immensely in understanding the purpose of the software and thus what to test and how.
When talking about development teams this should naturally include testing experts, the same way some team member might be a database expert or the expert for any other specific technology, methodology or subject matter. And as usual team members will learn from each other and support each other.
For the role of a dedicated QA-manager there is no easy fit in Agile Testing. Of course a team that is not too experienced yet could need some help in organising how to integrate “quality” into the daily work. Some kind of consulting might be possible here, but it must then be really concrete. Potentially this role is split between a Scrum Master for the organisational part and a Product Owner for the understanding of the functionality.
Automating Tests
Test automation is kind of the hard fact in Agile Testing. For sure there is also test automation done in classical QA, but probably in most cases not to the extent it is done in an agile project. Working in short iterations already implies the urgent need to re-test existing functionality again and again to ensure nothing gets broken. This is not only hard to achieve with manual tests; it would also be extremely boring. Furthermore it would imply the risk that people lose attention when testing the same bit of functionality every two weeks.
It cannot be stated clearly enough: We still need manual testing! How this differs from the classical approach will be described partly in this section and in more detail in the next section.
Automating tests can be hard, really hard. But the bigger the project and the longer its lifespan, the more they will pay off, literally.
There are lots of test-levels in a typical project; some of them must be automated and most of them should be. Let’s take a look at the following list, which does not claim to be complete:
Unit Tests
This one is pretty clear and probably worth no discussion. In an ideal case a test first approach is used here and tools like Sonar are used to support the team. (Sonar should never be used as a management tool!) It is important to have a clear distinction here from the Integration Tests: only tests on class level, using mocks or stubs, are done here.
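To make the class-level idea concrete, here is a minimal sketch in Python using the standard unittest module (the class names and the tax-rate scenario are made up for illustration; in a Java project the same pattern would use JUnit with a mocking library). The collaborator is replaced by a mock, so nothing outside the class under test is touched:

```python
import unittest
from unittest.mock import Mock

# Hypothetical class under test: computes a gross price using a
# collaborator that would normally query a database or remote service.
class PriceCalculator:
    def __init__(self, tax_provider):
        self.tax_provider = tax_provider

    def gross_price(self, net_price, country):
        rate = self.tax_provider.tax_rate(country)
        return round(net_price * (1 + rate), 2)

class PriceCalculatorTest(unittest.TestCase):
    def test_gross_price_uses_tax_rate(self):
        # The collaborator is mocked, so the test stays on class level
        # and runs in milliseconds without any real infrastructure.
        tax_provider = Mock()
        tax_provider.tax_rate.return_value = 0.19

        calculator = PriceCalculator(tax_provider)

        self.assertEqual(calculator.gross_price(100.0, "DE"), 119.0)
        tax_provider.tax_rate.assert_called_once_with("DE")
```

Run with `python -m unittest` – the speed of such tests is exactly what makes a test first approach feasible.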
Integration Tests
Here a certain (technical) functionality is tested, for example some database access by using an in-memory database. This could also be used to test algorithms spreading over multiple classes. Typically Integration Tests are also implemented using the Unit Test framework (in Java this would be JUnit) so that those tests can be executed quickly from the development environment.
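The in-memory-database idea can be sketched like this, again in Python for brevity (the `UserRepository` class is hypothetical; the Java equivalent would typically use JUnit with something like H2). SQLite’s `":memory:"` mode gives every test a fresh, throwaway database:

```python
import sqlite3
import unittest

# Hypothetical repository under test: encapsulates the SQL for a
# simple user table. The integration test runs the real SQL, but
# against an in-memory SQLite database instead of the production one.
class UserRepository:
    def __init__(self, connection):
        self.connection = connection

    def create_schema(self):
        self.connection.execute(
            "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")

    def add(self, name):
        cursor = self.connection.execute(
            "INSERT INTO users (name) VALUES (?)", (name,))
        return cursor.lastrowid

    def find_name(self, user_id):
        row = self.connection.execute(
            "SELECT name FROM users WHERE id = ?", (user_id,)).fetchone()
        return row[0] if row else None

class UserRepositoryIntegrationTest(unittest.TestCase):
    def setUp(self):
        # ":memory:" creates a fresh database per test, so tests stay
        # fast and independent of each other.
        self.connection = sqlite3.connect(":memory:")
        self.repository = UserRepository(self.connection)
        self.repository.create_schema()

    def tearDown(self):
        self.connection.close()

    def test_round_trip(self):
        user_id = self.repository.add("Alice")
        self.assertEqual(self.repository.find_name(user_id), "Alice")
```

Because it uses the same test framework as the unit tests, it runs from the IDE and the build pipeline alike.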
Acceptance Tests
This means testing a user story, and thus a certain functionality of the software, from a user perspective. This often includes automating access over a GUI; in most cases a web-based GUI. Implementing those tests can be difficult, and the testing aspect should already be kept in mind during implementation. There are different open-source testing frameworks available to support implementing Acceptance Tests, for example the Robot Framework, JBehave or Cucumber. A test first approach is sometimes preached here, but I have never seen this working. It is good enough if at the end of an iteration the tests are available together with the new functionality.
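The given/when/then structure those frameworks encourage can be sketched even without them. The example below is a plain-Python stand-in (the `ShoppingCart` facade and the checkout story are invented for illustration); a real acceptance test would drive the actual GUI through the Robot Framework, JBehave or Cucumber, but the shape of the test is the same:

```python
import unittest

# Hypothetical application facade standing in for the real system.
# In practice the steps below would be keywords or step definitions
# that operate on the (web) GUI.
class ShoppingCart:
    def __init__(self):
        self.items = {}

    def add(self, article, price):
        self.items[article] = price

    def remove(self, article):
        self.items.pop(article, None)

    def total(self):
        return sum(self.items.values())

class CheckoutAcceptanceTest(unittest.TestCase):
    def test_removing_an_article_updates_the_total(self):
        # Given a cart containing two articles
        cart = ShoppingCart()
        cart.add("book", 20.0)
        cart.add("pen", 2.5)
        # When the user removes one article
        cart.remove("pen")
        # Then the total reflects the remaining article only
        self.assertEqual(cart.total(), 20.0)
```

Reading the test top to bottom should tell the story of the user story – that is what makes acceptance tests double as documentation.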
There are more potential test-levels, for example Performance Tests or Failover-Tests. Typically more specific tools are used for those. Anyway it should always be kept in mind that these tests have to be repeated for new releases, and thus a decent level of automation will also pay off here.
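Even before dedicated tooling is in place, a coarse performance smoke test can live in the regular regression suite. A minimal sketch (the `build_report` function is a made-up placeholder for some real operation):

```python
import time
import unittest

# Hypothetical operation under test; a real performance test would
# exercise the deployed system with a dedicated load-testing tool.
def build_report(n):
    return [i * i for i in range(n)]

class PerformanceSmokeTest(unittest.TestCase):
    def test_report_builds_within_budget(self):
        start = time.perf_counter()
        build_report(100_000)
        elapsed = time.perf_counter() - start
        # Generous budget on purpose: the point is to catch
        # order-of-magnitude regressions with every release, not to
        # flake on a busy build machine.
        self.assertLess(elapsed, 2.0)
```

Such a test is no replacement for proper load testing, but it re-runs automatically with every release.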
For me the only possible conclusion here is that automated tests are extremely important, and an agile software development approach is hardly possible without them. This also means that the required time must be planned: the tests are part of every increment or sprint and nothing that can be postponed to a later stage of the project. Postponing them would anyway be a somewhat weird idea, considering the purpose of automated tests.
Documenting Tests & Test Management
Let’s again start with a statement that – I would say – has quite some truth in it.
Managers really love test plans. Once all tests are marked as tested, the quality is approved and – even better – well documented.
Well, I cannot recall a single project where writing (lengthy) test plans has worked out in the end. The reasons why this does not work are manifold and not limited to the following:
- There is the intention that “anyone” (often combined with the wish of “lowering costs”) can test the software by following the plan. No! The descriptions are never detailed enough, or they are simply not up-to-date. Lots of discussions are needed, and in the end this will result in low quality for sure.
- Often the problem of having proper and reusable test data is neglected, which will lead to problems during test execution.
- The process is simply not aligned with an agile development approach. It focuses on one long final acceptance test, not on continuously testing an application; and this not only for the first release, but also for all forthcoming releases.
Automated tests can be an excellent documentation of the tests. And as part of the automation, the problem of having valid test data must be solved anyway. In addition there is a daily report on the execution of the tests. Personally I like the idea of combining automated tests with some Exploratory Testing by skilled “testers” (at least at that moment they have the role of tester) or users of the system. This re-testing could for example be done at the end of each iteration, focusing on certain – potentially new – areas of the application.
Conclusion
Testing should be the responsibility of the team in an agile project environment. Of course this requires that the team gets the “resources” and support needed for testing (e.g. testing environments). If there are impediments, it would naturally be the task of the Scrum Master (or a similar role) to tackle them.
There is no guarantee that this will result in good (or even better) quality, but I think the chances are good that it will. What is important is that no “testing debt” (comparable to technical debt) is accumulated in the course of the project. Quality, and thus automated tests, must be part of every sprint; it is nothing that can be done later. At the same time this is one big advantage: discovering essential problems at the very end of a project, during “the big bang testing”, is simply too late to still react appropriately.
Maybe it is time that we do not only have Devops, but also Devtests :-).