Test automation is an integral part of modern software development. It is the foundation that supports short release cycles and processes like DevOps. Automated tests are written on different levels, testing different aspects of an application. The most common levels are unit, integration, and acceptance testing. And while automated tests typically work very well on the unit and integration levels, failure is often just around the corner when it comes to acceptance tests, which usually amount to UI tests. Here automation gets complicated and time-consuming, and in the end often a “little bit” frustrating when things start to go south.
At codecentric we have already worked with and/or evaluated a lot of different testing tools. Personally, I like the Robot Framework a lot, but we have also looked in detail at JBehave and Cucumber, among others. All of these tools offer a broad set of testing functionality, founded in the fact that tests are basically implemented in a programming language. While this is of course a strength, it also makes these tools hard to use for “pure” testers. Often, installation alone can be an obstacle. New functionality might only be testable once a developer has extended the corresponding test code, and this work then competes with feature development. And to be honest: most developers prefer implementing new features to writing code that enables new acceptance tests.
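To make the point a bit more concrete, here is a deliberately simplified acceptance-style test written as plain Python. It does not use any of the tools mentioned above (a real Robot Framework or Selenium test would drive a browser instead of fetching HTML directly), and the page content and check are made up for illustration, but the principle is the same: the test is code, so extending it is developer work.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Stand-in page; a real acceptance test would hit the deployed test server.
PAGE = b"<html><body><h1>Welcome</h1><a href='/blog'>Blog</a></body></html>"

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(PAGE)

    def log_message(self, *args):
        pass  # keep the request log quiet

# Start a throwaway local server on a random free port.
server = HTTPServer(("localhost", 0), Handler)
base_url = "http://localhost:%d/" % server.server_port
threading.Thread(target=server.serve_forever, daemon=True).start()

# The actual acceptance-style check: the start page contains expected content.
html = urllib.request.urlopen(base_url).read().decode()
assert "Welcome" in html

server.shutdown()
```

Small as it is, a tester without programming background could neither write nor maintain this, which is exactly the gap the tools discussed below try to close.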
It is time for something new!
This is where new cloud-based testing tools enter the stage. Naturally, these tools focus on UI tests, which is often the main – and the hardest – part of acceptance testing anyway. In the following we will take a closer look at Usetrace. Another candidate would have been Functionize, which we might still look at later.
As with all cloud-based services, getting started is extremely easy, as there is no need for any local installation. After registering a new account, Usetrace welcomes us with the screen shown above. Of course we happily follow the polite invitation to create a new trace and thus our first test case. As usual, I start by jumping right in and save the tutorials for later. Let’s see how that works out.
We create our first project by entering the base URL of the application to test. As so often, our codecentric homepage serves as the first test subject. In a real project this would of course be the URL of a dedicated test server running the application under test.
Of course there are some concerns at this point. Testing a publicly available site is no problem, but what about test environments that sit behind firewalls or use basic authentication for security reasons? This certainly has some potential for problems, but these topics are addressed by the Usetrace network instructions. If in doubt, some project-specific evaluation is probably needed first.
The “Trace Editor” opens for our new project and we can start recording our first test case. A preview of the page under test is shown in the central part of the editor. Once we start recording, the individual steps appear on the left-hand side of the editor.
So far I have not been a big fan of recording test cases, but recording and editing here really feel intuitive. Furthermore, the traces are promised to be robust against minor UI changes; whether that holds remains to be seen. But after literally five minutes I had a very basic test case that clicks through a few pages and checks for specific content on one of them.
The trace (test case) can be executed directly from the editor, and the results of all executed traces can of course be viewed as well. Execution can be triggered for different browsers. The free trial and the Starter edition support Firefox and Chrome; Internet Explorer is only available in the Pro and Enterprise editions. An overview of the pricing and the features of the available editions can be found here.
Results are shown for all traces (OK, currently there is only one) in combination with the different browsers. Up to here I can really only say: thumbs up, I like it! This kind of minimal evaluation is of course always easy to do, but it has never been this easy with any other tool.
In the project settings it is possible to define global parameters for all traces. This includes:
- Number of automated retries on failure.
- Maximum concurrency for builds.
- Default timeout for steps.
- Additional delay between interactions with the page.
These settings can already be quite helpful for dealing with known network delays.
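To illustrate what settings like “retries on failure” and “additional delay between interactions” do conceptually, here is a generic retry wrapper in Python. This is not Usetrace code, just a sketch of the underlying idea: flaky UI steps often succeed when repeated after a short pause.

```python
import time

def run_with_retries(step, retries=2, delay=0.1):
    """Run a test step, retrying on failure -- a generic sketch of what
    a tool-level 'automated retries' setting does internally."""
    last_error = None
    for attempt in range(retries + 1):
        try:
            return step()
        except Exception as exc:  # a real tool would catch specific errors
            last_error = exc
            time.sleep(delay)  # extra delay before the next interaction
    raise last_error

# Example: a flaky step that only succeeds on the second attempt.
calls = {"n": 0}
def flaky_step():
    calls["n"] += 1
    if calls["n"] < 2:
        raise RuntimeError("element not yet visible")
    return "ok"

result = run_with_retries(flaky_step)
```

With the retry setting, the flaky step above passes on the second attempt instead of failing the whole trace.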
Of course, test execution must be scheduled. Usetrace supports different CI servers (e.g. Jenkins or Bamboo) for this. Integration is quite simple, as execution can be triggered by curling a certain URL. Alternatively, it is also possible to use Usetrace’s integrated scheduler.
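As a sketch of how simple this integration is, the following Python snippet assembles such a trigger URL. The endpoint shown is a placeholder assumption; the real URL for your project is shown in Usetrace’s CI integration settings.

```python
import urllib.parse

# Hypothetical trigger endpoint -- the actual URL comes from your
# project's CI integration settings in Usetrace.
TRIGGER_BASE = "https://api.usetrace.com/api/project/YOUR_PROJECT_ID/execute_all"

def build_trigger_url(browsers):
    """Assemble the URL a CI build step would simply curl."""
    query = urllib.parse.urlencode({"browsers": ",".join(browsers)})
    return TRIGGER_BASE + "?" + query

trigger_url = build_trigger_url(["chrome", "firefox"])
# A Jenkins or Bamboo job would then run something like:
#   curl "<trigger URL>"
```

That one-line `curl` call is the whole CI integration, which is hard to beat in terms of setup effort.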
Reusing test steps is probably one of the most important features of any test tool: it allows certain steps to be repeated as part of another test. The typical example is logging in to a website. Usetrace supports modular tests with routines. With this feature it is possible to reuse one trace as a step in another trace. The following screenshot shows how to select a routine.
It might make sense to use some kind of naming convention for reusable traces so they are easy to find in the dropdown box. Once a step/routine is included, it is possible to run only that step from the step menu. This is a very convenient way to continue the test from that point on (thanks to the Usetrace developers for pointing me to that feature).
The above screenshot shows how short a test can be that continues from the place our first trace navigated to. Of course, the major advantage of reusing test steps is easier adaptation to changes in the tested application. If there is, for example, a major change in the login form, then only that trace needs to be re-recorded, not all the tests that require a login – which would probably be all tests anyway.
Once test execution is finished, a summary is shown. This contains some quite high-level information as shown in the following screenshot.
But it also drills down to quite detailed information, showing all the WebDriver commands executed during the test run. This can be helpful for troubleshooting. Keep in mind that fixing problems in automated UI tests can be done in the test itself, but often also by slightly modifying the application under test to make automated testing possible, easier, and/or more stable.
Time does not allow for a detailed look at all the features Usetrace offers. But some interesting ones are listed below to give a more complete picture of the tool:
- Testing of mouse hover events.
- Definition of step-specific wait times.
- Running tests against a local instance of the application under test using a tunneled connection.
The practical test
Unfortunately, I cannot show any screenshots here, as I was implementing tests for the project I am currently working on. The test server is secured using basic authentication, which was no problem using the https://USER:PASSWORD@my-application.com syntax for the base URL. As this is a cloud-based application, the test starts by processing a login form, then navigates to a certain page, fills in a form, and submits it. Afterwards, a very simple check verifies that no error has occurred.
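For anyone wondering why this URL syntax works at all: credentials embedded in a URL are translated into the standard `Authorization: Basic …` header. The small Python sketch below shows that mapping; the host name and credentials are placeholders, not real ones.

```python
import base64
import urllib.parse

def auth_header_from_url(url):
    """Extract USER:PASSWORD embedded in a URL and build the Basic auth
    header that a browser or test runner sends under the hood."""
    parts = urllib.parse.urlsplit(url)
    if parts.username is None:
        return None  # no credentials embedded in the URL
    credentials = "%s:%s" % (parts.username, parts.password or "")
    token = base64.b64encode(credentials.encode()).decode()
    return "Basic " + token

# Placeholder credentials and host, purely for illustration.
header = auth_header_from_url("https://tester:secret@my-application.example.com")
```

Since this is a standard HTTP mechanism rather than a Usetrace-specific feature, it works with any base URL the tool accepts.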
Again, the whole exercise took no more than 15 minutes. Of course, some problems might show up when diving deeper into the tests, but the ratio between results and invested time is still extremely good. This should make it easier to evaluate the tool in projects that consider this kind of UI test potentially useful.
The sheer speed with which Usetrace enables the user to start implementing UI tests is really impressive. The recording worked flawlessly during this evaluation and is quite intuitive. It remains to be seen how resilient the recorded tests really are when it comes to (minor) changes in the UI of the tested application. Of course, it is not possible to perform combined tests, like triggering some action in the UI and then checking the results in the database. But how often is that really done with other test frameworks?
In the end, one potential problem is the execution of tests on an external server, which might conflict with data privacy requirements. On the other hand, if only test data is used, this might not be a real issue. The second potential problem is shared with all cloud-based services: the data is handed over, and one must trust that nothing bad will happen (data loss, or the service being terminated). For this reason I would consider some kind of export a cool feature. This probably cannot be a set of executable tests, but maybe at least some kind of test description derived from the recorded ones. That way one would always have a local backup of the test cases and steps at hand.
Update 11.09.2017: There actually is an export feature in the settings section, where it is possible to export the whole project to JSON. I had initially overlooked it, but got a kind hint from the Usetrace developers. Thanks for that!
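With such an export in hand, a small local script can serve as a sanity check for backups. The JSON structure below is an invented example – the real export format may look different, so inspect your own export file first – but the idea of summarizing the backed-up traces carries over.

```python
import json

# Hypothetical shape of an exported project -- the actual Usetrace
# export format may differ; check a real export file first.
EXPORT = """
{
  "project": "codecentric-homepage",
  "traces": [
    {"name": "Open start page", "steps": 4},
    {"name": "Check blog navigation", "steps": 7}
  ]
}
"""

def summarize_export(raw):
    """Return a short human-readable summary of an exported project,
    useful as a quick check that the local backup is complete."""
    data = json.loads(raw)
    lines = ["Project: %s" % data.get("project", "<unknown>")]
    for trace in data.get("traces", []):
        lines.append("  - %s (%d steps)" % (trace["name"], trace.get("steps", 0)))
    return "\n".join(lines)

print(summarize_export(EXPORT))
```

Running such a summary after each export makes it easy to spot a truncated or empty backup before it is actually needed.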
Usetrace is certainly a candidate worth considering when evaluating a new tool for UI tests. It was really interesting and fun to check it out, and I somehow have the feeling this will not be the final article on Usetrace on our blog :-).