What happened so far?
Following our first blog post on getting started with UI testing, this post explores how we can get more comfortable with our test suite. Let’s assume we have already written some nice Selenium UI tests, but aren’t satisfied with their maintenance costs and stability.
Writing maintainable element selectors
So first of all, let’s take a closer look at the test code. As our tests should stay green even if implementation details of the website change, we need to take care of how we select elements on the page. We can either use fixed ids or names for the elements, or select them in their context. The latter, however, may lead to very long and error-prone selection expressions like:
webDriver.findElement(By.cssSelector("div > ul > li > div > p > input"))
This selection is very closely tied to the HTML structure of the web page, so to improve it we can start to use fixed ids or CSS classes like:
webDriver.findElement(By.id("time-input"))
Another approach is to give the affected element a specific data-description attribute, so we can select it independently of its implementation like this:
webDriver.findElement(By.cssSelector("[data-description='time-per-user']"))
Therefore, we need to give the element the defined data description as an HTML5 data-* attribute. In this example:
<input data-description="time-per-user" ... />
This has several advantages. First of all, we can select specific elements without knowing the context in which they appear, and we can recognize from the HTML code which elements have a UI test that might be affected by our code changes. We can also easily find that test, as we can search for the keyword. Besides, we are independent of CSS class name refactorings and structural code changes, which makes the use of data-description very handy. Of course, you can also use XPath for the selection, especially if your selection is more complex.
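The data-description attribute works just as well with XPath. As a self-contained sketch (this uses the JDK’s built-in XPath engine on a tiny XML fragment purely to show that the expression matches; in a real test you would pass the same expression string to Selenium’s By.xpath, and the class and method names here are illustrative):

```java
import java.io.StringReader;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathFactory;
import org.xml.sax.InputSource;

public class DataDescriptionXPath {

    /**
     * Evaluates an XPath expression against a small markup fragment and
     * returns the string value of the first match.
     */
    public static String evaluate(String markup, String expression) throws Exception {
        XPath xpath = XPathFactory.newInstance().newXPath();
        return xpath.evaluate(expression, new InputSource(new StringReader(markup)));
    }
}
```

Evaluating "//input[@data-description='time-per-user']" finds the element purely via its attribute, regardless of where it sits in the document structure.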
Analyzing failing tests
Failing UI tests can be caused by a great variety of circumstances, so it’s very important to be able to find the root cause of a failing test easily. Therefore, we need to explore the situation in which the test failed: was the web page still loading, did it throw an error, or did some parts of the code change? So how can we get a good representation of the circumstances? First of all, a screenshot of the moment the test failed can give us a good first impression of possible reasons. For this, we take the screenshot and write it to a new file on the file system with the following commands:
File screenshot = ((TakesScreenshot) webDriver).getScreenshotAs(OutputType.FILE);
FileUtils.copyFile(screenshot, new File(fileName));
Depending on the operating system we are running on, we need to make sure that the file name is not only unique, but also not too long. In our case we used a maximum length of 150 characters and wrote a little helper function to make sure the file name isn’t too long, but still unique.
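Such a helper can be sketched as follows (the class and method names are illustrative, not the ones from the example repository): the full name is hashed, so two long names that share the first 150 characters still end up as different file names.

```java
public class FileNames {

    // Maximum file name length used in the post; adjust for your OS/file system.
    private static final int MAX_LENGTH = 150;

    /** Shortens a file name to at most MAX_LENGTH characters while keeping it unique. */
    public static String shorten(String name) {
        if (name.length() <= MAX_LENGTH) {
            return name;
        }
        // Append a hash of the full name so truncated names stay distinguishable.
        String hash = Integer.toHexString(name.hashCode());
        return name.substring(0, MAX_LENGTH - hash.length() - 1) + "-" + hash;
    }
}
```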
As a screenshot might not tell the full story of a test failure, it’s good to also save the HTML output, so we can explore why, for example, a CSS selector didn’t find the element it was supposed to select. The command we can use for this is:
webDriver.getPageSource()
You can find the full failure capture source code on GitHub.
Please try again later
Due to latency, varying loading times and unstable browser behaviour, tests may fail unexpectedly and turn red without there being anything to fix. Manually rerunning the test will show that it was just an incident caused by a not 100% stable environment and doesn’t require any further action. As broken tests may result in interrupted build pipelines and developers’ effort to manually restart the tests, we should try to stabilize our environment so this happens as rarely as possible. Since it’s often hard to reduce latency and variation in loading times, we can at least save the time spent manually restarting the tests. To do so, we add a test listener that automatically restarts failing tests, so we can easily check whether a feature is really not working correctly or whether the environment just had some temporary difficulties.
So how do we do that? Using TestNG, we can extend the class TestListenerAdapter and override its methods to attach to each test an implementation of the IRetryAnalyzer interface, which will take care of repeating the test. Moreover, we only mark tests as failed if they failed several times; otherwise, we mark them as skipped and remove them from the list of failed tests. A sketch of this (class names and the retry limit are illustrative) looks like this:

public class RetryAnalyzer implements IRetryAnalyzer {
    private static final int MAX_RETRIES = 2;
    private int retryCount = 0;

    @Override
    public boolean retry(ITestResult result) {
        if (retryCount < MAX_RETRIES) {
            retryCount++;
            return true;
        }
        return false;
    }

    public boolean retriesLeft() {
        return retryCount < MAX_RETRIES;
    }
}

public class RetryListener extends TestListenerAdapter {
    @Override
    public void onTestStart(ITestResult result) {
        // Attach a fresh retry analyzer to every test.
        if (result.getMethod().getRetryAnalyzer() == null) {
            result.getMethod().setRetryAnalyzer(new RetryAnalyzer());
        }
    }

    @Override
    public void onTestFailure(ITestResult result) {
        RetryAnalyzer analyzer = (RetryAnalyzer) result.getMethod().getRetryAnalyzer();
        if (analyzer != null && analyzer.retriesLeft()) {
            // The test will be retried, so report this run as skipped rather
            // than failed, and drop it from the list of failed tests.
            result.setStatus(ITestResult.SKIP);
            result.getTestContext().getFailedTests().removeResult(result.getMethod());
        }
    }
}
To activate the TestListenerAdapter implementation, we annotate the base test class with the following line of code (using the name of our listener class); all other test classes will inherit the annotation from the base class:
@Listeners(RetryListener.class)
Adding those two classes and this annotation to our test suite looks like a very small improvement, but it will save us a lot of time and effort. Furthermore, it improves the cost-benefit ratio of the suite and reduces the number of false-negative error reports.
To see the retry in action, have a look at TimeSavingTest.java. It contains a test render_slowmotion(), which first opens the requested page with a 10-second delay, which results in a failed test, and then sets the delay to only 2 seconds, which is below the maximum of 5 seconds configured in WebDriverManager.java. As the first run fails, RetryAnalyser.java restarts the test, which then succeeds and results in a green build, even though one test failed at first because of the (simulated) variation in loading time.
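The mechanism can also be illustrated with a small, TestNG-independent sketch (names and the attempt limit here are illustrative): an action that fails on its first attempt still leads to an overall success, just like the flaky test above.

```java
import java.util.function.BooleanSupplier;

public class RetrySketch {

    /**
     * Runs the given check until it succeeds, at most maxAttempts times.
     * Returns the number of attempts used, or -1 if it never succeeded.
     */
    public static int runWithRetry(BooleanSupplier check, int maxAttempts) {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            if (check.getAsBoolean()) {
                return attempt;
            }
        }
        return -1;
    }
}
```

A check that simulates a slow first page load fails once and then passes, so two attempts suffice and the overall result is green.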
To recap: if you would like to improve your test suite, do the following:
- use data-description for element selection (css or xpath)
- capture a screenshot and the corresponding HTML for fast failure analysis
- integrate an automatic retry for failing tests to get a more stable test suite
You can check out the example source code on GitHub.
Get in contact
If you have any questions or suggestions, feel free to comment on this blog post.