In an average IT project, something like acceptance testing comes up sooner or later. Which is a good thing, because we want to be sure that the functionality provided by the software actually works. So we write acceptance tests and show the results on a dashboard. Most people agree that acceptance tests are critical for delivering resilient software. But people also tend to agree that acceptance tests are expensive. They take time to run (10+ minutes in bigger projects) and extra time to create. This is time not spent on actually building functionality. So we need them, but we need to be careful.
A whole different problem is software not providing the expected behavior or functionality. This is something Acceptance Test Driven Development (ATDD) tries to solve. ATDD originated from the test-driven movement, although Kent Beck in 2003 thought it was impractical. ATDD still gained momentum, and it has its benefits. By defining the tests before actually building the software, ATDD provides more clarity about what needs to be created.
Other benefits of ATDD are:
- You know when and if functionality is provided without manual testing.
- Forces careful thinking about the functionality.
And of course there is a drawback:
- You need to invest more time before creating the functionality.
Maybe there are more drawbacks to ATDD, and I know acceptance tests themselves have some drawbacks. Still, it makes sense to write your acceptance tests before starting the actual coding. Maybe not for small and simple things, but definitely for the large and complex.
Implementing the code for running the test descriptions should take as little time as possible. We want to implement this before the functionality, so we first see a red bar. For this we use tools that translate these descriptions. The descriptions have to be readable by the tool, but we would like to stay as free as possible in how we write them. Often the syntax used for these descriptions consists of sentences starting with Given, When, and Then, which originates from the Behavior Driven Development (BDD) approach invented by Dan North and Chris Matts.
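For illustration, a Given/When/Then description for a hypothetical calculator feature might read like this (the feature and steps are invented for this example, not taken from the project described here):

```gherkin
# Illustrative sketch of a BDD-style test description.
Feature: Adding numbers

  Scenario: The sum of two numbers is shown
    Given the calculator page is open
    When I add the numbers 2 and 3
    Then the result 5 is shown
```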
Besides freedom in how we write our tests, a test framework should support us as much as possible in writing tests quickly. To me, that means the following:
- Not a lot of coding needed before a test runs.
- IDE should support my preferred test description.
- I can generate some code based on the test description.
- The tool should run the tests in a convenient way.
- Not a lot of boilerplate code needed for setup.
- I can get support from a community.
- I can see the internals and improve on it (Open source).
- I can integrate the tool in a build pipeline.
- The tool provides libraries, or integrations with libraries, for testing a certain UI, API, or data.
This is quite a list of capabilities for a tool. A small team at codecentric, including me, wanted to know whether any frameworks are available that let us write tests faster and thus prevent headaches. The following acceptance test frameworks score highly on the capabilities I mentioned.
- Robot Framework
- Cucumber
- Concordion
- Gauge
- JBehave
Although we tried to look briefly at all the acceptance test frameworks, we probably missed some. Cucumber is part of the list, and I already use it a lot. I am more curious about the other frameworks, which might allow me to write tests faster.
Robot Framework looked very promising, so I studied it in more detail. Concordion, Gauge, and JBehave are very nice frameworks as well, but we looked at them only briefly because of time constraints.
I really like the Robot Framework: its initial setup is quite easy using Java and Maven. This is what a simple Maven setup looks like:
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>nl.cc.dev</groupId>
    <artifactId>RobotATDD</artifactId>
    <version>1.0-SNAPSHOT</version>

    <dependencies>
        <dependency>
            <groupId>com.github.markusbernhardt</groupId>
            <artifactId>robotframework-selenium2library-java</artifactId>
            <version>1.4.0.8</version>
            <scope>test</scope>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.robotframework</groupId>
                <artifactId>robotframework-maven-plugin</artifactId>
                <version>1.4.6</version>
                <executions>
                    <execution>
                        <goals>
                            <goal>run</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</project>
This is an overview of the test project in my IDE:
The calculatePage.robot file is a test description for a web page with a calculator; it should be placed in the robotframework directory. FancyLib.java contains a class with methods that can be called by the tests. You can run the tests with the command ‘mvn verify’.
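As a sketch of what such a library class could look like: Robot Framework can call the public methods of a plain Java class as keywords. The class name FancyLib comes from the post; the method names below are my own assumptions, not the original code.

```java
// Hypothetical sketch of FancyLib.java: a plain Java class whose public
// methods Robot Framework exposes as keywords (method names are invented).
public class FancyLib {

    // Usable in a .robot file as the keyword "Add Numbers".
    public int addNumbers(int first, int second) {
        return first + second;
    }

    // Usable as the keyword "Result Should Be"; fails the test on mismatch.
    public void resultShouldBe(int actual, int expected) {
        if (actual != expected) {
            throw new AssertionError(
                "Expected " + expected + " but was " + actual);
        }
    }
}
```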
The test cases in calculatePage.robot can look like this:
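As a hedged sketch, such a .robot file could contain a settings section and test cases along these lines (the keywords and values are assumptions for illustration, not the original test cases):

```robotframework
*** Settings ***
Documentation     Acceptance tests for the calculator page (illustrative sketch).
Library           FancyLib

*** Test Cases ***
Adding Two Numbers Shows The Sum
    ${result}=    Add Numbers    2    3
    Result Should Be    ${result}    ${5}
```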
These tests are quite readable, I think (sorry for boasting), but I would still like the ability to leave out the settings section and only show the test cases.
Another big help is the large number of test libraries available for use in Robot Framework tests. This is only a small selection of libraries:
- Windows GUI
More libraries can be found on the Robot Framework site. Other people at codecentric have already written a lot about the Robot Framework, so if you want to know more I really recommend reading their posts.
Wasting less time on acceptance testing is not only about using great tools; it is also about knowing what to test and what not to test. I get the idea of trying to test every part of the software end to end, and for some critical software it is even required. But resources are often scarce, and the certainty provided by full ATDD coverage does not really justify the cost.
A lot of acceptance tests also does not mean integration and unit tests can be neglected. A well-known testing anti-pattern is inverting the test pyramid, turning it into an ice cream cone. The problem with the ice cream cone is that acceptance tests are not well suited to testing negative paths. Say service X fails because writing to a file fails, and in that case we want specific logging. In an integration test or unit test this is easy to verify; in an acceptance test it is much more challenging. Feedback from an acceptance test is also less useful to a developer for solving bugs. And acceptance tests are more fragile than unit tests, because they depend heavily on the environment.
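A minimal sketch of such a negative-path unit test, assuming a hypothetical service that logs a warning when a write fails (the class, method, and log format are invented for this example):

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

// Hypothetical "service X": on a failed write it records a warning,
// which a unit test can assert on directly - no UI or environment needed.
class FileWritingService {

    // Functional interface standing in for the actual write operation.
    interface Writer { void write() throws IOException; }

    private final List<String> log = new ArrayList<>();

    // Performs the write; a throwing Writer simulates a failing disk.
    public void write(Writer writer) {
        try {
            writer.write();
        } catch (IOException e) {
            log.add("WARN: write failed: " + e.getMessage());
        }
    }

    public List<String> logEntries() {
        return log;
    }
}
```

A unit test can now inject a Writer that throws and assert that exactly one warning containing the failure reason was logged, something an end-to-end acceptance test could only provoke with great difficulty.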
Speaking of the ice cream cone: unit testing of the frontend should not be ignored either, even though it may seem redundant when you already have acceptance tests that validate behavior through the UI.
So, to prevent acceptance tests from becoming a black hole for time, don’t go for full coverage but focus on the most important functionality. Take some time to choose the best framework. Be aware of how much time you spend writing and running acceptance tests, and try to use ATDD: it will likely improve the whole development process.