Automated software testing has been a hot topic recently. Still, many people don't know why it is important to maintain automated software tests. What is software test automation? How do you maintain it properly? What are the advantages? And what can go wrong? You will find the answers to all these questions in the article below.
First of all, I'd like to introduce you to the idea of automated software tests. In brief, it's about writing a script that simulates the possible actions of the application's end user: clicking on the page, filling in forms, or validating what is displayed on a website. Remember that automated tests will never replace the work of an experienced manual tester. So why do we actually perform automated tests?
Why do we perform automated software tests?
Because they detect regression
Manual tests allow us to compare an application with the client's requirements and expectations. A manual tester knows the application's domain very well and can verify whether there is a regression in a new version of the application. If the application is updated frequently, the tester will repeat the same actions over and over again during testing. That's where automated tests are helpful. The main goal of creating automated tests is not finding new bugs, but rather checking whether regression appears.
See also: Test automation – Free video tutorial
Because they speed up the release process
Implementing automated software tests is very time-consuming. It requires the QA automation engineer to know the application very well and to focus on testing the paths that are important for the client (of course, we can automate anything, but it won't necessarily cover the business demands). But once created, an automated test can accelerate the application release process. Running this type of test verifies whether the client's main requirements are still met in the new version of the application. In the meantime, manual testers can check new features instead of re-checking basic paths.
So what can go wrong?
Basically, when automated software tests are well planned, nothing should go wrong. But as we know, nothing is that simple. I'll walk you through three cases which happened in our team during the maintenance of automated tests.
Wrong areas of an application were automated
Apparently, when the test scenarios were created, not enough time was spent getting familiar with the tested application. This happens when automated tests are only written but never used. As a result, the areas that don't require much work get automated. In this case, we can boast that a lot of tests were written in a short period of time. Take a hypothetical online store application: why would we use an automated test to check whether the store's logo is always displayed in the same place if we haven't tested the product purchase process? When creating automated tests, it is worth working hand in hand with a manual tester who can identify the areas that are prone to regression.
Automated software tests were not stable
It's a bit depressing, but sometimes you write tests that have real business value and detect regression, yet from time to time, for unknown reasons, they turn out to be unstable. It takes time to verify the test results and check whether a regression was actually detected or the test was incorrectly implemented. Your tests then lose their credibility, which makes the development team skeptical about their results. Eventually, they just get ignored.
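While you investigate the root cause of a flaky test, one common stop-gap (not mentioned in the original article, and no substitute for actually fixing the test) is to rerun the failing step a few times before reporting it red. A minimal sketch of such a retry wrapper; the function name and defaults are assumptions:

```javascript
// Hypothetical retry helper: re-run a flaky test body up to `attempts` times
// and only report a failure if every attempt throws.
function withRetries(fn, attempts = 3) {
  let lastError;
  for (let i = 0; i < attempts; i += 1) {
    try {
      return fn(); // first success wins
    } catch (err) {
      lastError = err; // remember the failure and try again
    }
  }
  throw lastError; // all attempts failed - surface the last error
}
```

Used sparingly, this keeps a single hiccup from turning the whole build red; used everywhere, it just hides the instability the team should be fixing.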
Automated software tests were not maintained
This is the worst thing that can happen. The tests were written and worked for a period of time, but nobody maintained them. Literally no one checked whether there were any important changes in the application. It's quite a complex issue, so I will cover it separately in the paragraphs below.
What can change in an application during 5 months of maintenance
A lot. Really. Especially if we look at the application from the perspective of previously prepared automated test scenarios. That's what happened when I joined one of our projects. I knew that there were well-written automated tests in use and that they had real business value. The development team trusted them, and they were run against each and every newly prepared release candidate. But as it happens in project life, a wave of new features arrived and somehow no one had time to maintain our e2e gem. From version to version, more and more tests stopped being green, until they stopped passing at all.
Trust me, that’s the worst. No one maintained these tests, and therefore they were simply forgotten.
There is nothing that cannot be repaired!
That's what we've been dealing with over the last few weeks: the team has been repairing the existing automated tests. We fixed them up, and the tests turned green again and returned to their former glory. That's why I'd like to share some useful tips that helped us in this process and may be useful when you carry out your own maintenance.
Oh, this reCaptcha…
The first problem we faced was the introduction of reCaptcha on the login screen in the second version of the application. Its implementation practically eliminated the possibility of testing the application's main feature: logging in. The first idea was to create a test environment switch that would disable the captcha display. This idea met with strong opposition, because it would mean bending the application code to the needs of the tests. That's why we decided to use the approach suggested by Google.
Since you can set special test keys for reCaptcha in the test environment, we introduced keys which allow all the actions. Thanks to that, our bot can click on the captcha the same way it clicks on a checkbox and simply move on. However, you must remember that the captcha is displayed in a separate iframe, so you have to switch to it first. One more thing: if the application is written in Angular, remember to disable waiting for Angular's pending requests in the automated tests. In Protractor, it's enough to set the waitForAngularEnabled() flag.
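Putting those pieces together, a minimal Protractor sketch of the captcha step could look like this. It is a sketch under stated assumptions, not our actual code: the iframe and checkbox selectors are guesses about the markup, and it presumes Google's documented test keys are configured, so the checkbox always succeeds.

```javascript
// Hypothetical Protractor helper for a test environment using reCaptcha test keys.
const { browser, element, by } = require('protractor');

async function passTestRecaptcha() {
  // The captcha widget is not part of the Angular app, so stop waiting for Angular.
  await browser.waitForAngularEnabled(false);
  // The widget lives in its own iframe - switch into it before clicking.
  const frame = element(by.css('iframe[src*="recaptcha"]')).getWebElement();
  await browser.switchTo().frame(frame);
  // With the test keys, clicking the checkbox always succeeds.
  await element(by.css('.recaptcha-checkbox')).click();
  // Return to the main document and re-enable Angular synchronization.
  await browser.switchTo().defaultContent();
  await browser.waitForAngularEnabled(true);
}
```

This fragment requires a running Selenium session and a browser, and the selectors will differ per application.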
Once the login screen tests turned green, another problem appeared: two-step verification. Since we don't want our tests to depend on external services (if the external service stops responding, our tests stop working), we prepared a mock for our test environment. It accepted every phone number and used the last six digits of that number as the confirmation code. This allowed us to pass the entire login and registration process without any problems.
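The mock's core logic can be sketched as a pure function (the function name is an assumption; our actual mock ran as a service in the test environment):

```javascript
// Hypothetical sketch of the SMS-verification mock: instead of calling an
// external SMS provider, the test environment derives the confirmation code
// deterministically from the phone number itself.
function confirmationCodeFor(phoneNumber) {
  // Strip everything that is not a digit, e.g. "+48 123-456-789" -> "48123456789"
  const digits = phoneNumber.replace(/\D/g, '');
  // The confirmation code is simply the last six digits of the number.
  return digits.slice(-6);
}

// The test "receives" the SMS by computing the same code:
// confirmationCodeFor('+48 123-456-789') -> '456789'
```

Because the code is a pure function of the phone number, every test can compute the expected code itself, with no external dependency left in the flow.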
The scenarios gradually started passing, but almost half of them still didn't work. It was time to verify what was causing the test failures. In this case, domain knowledge is the most important thing. Only someone who knows the application can verify whether the tests fail because of changes in the application or because of a regression that hadn't been noticed before (the first benefit of test reporting). It's a tedious job that often requires working with documentation. When we had finally distinguished regressions from changes in the application, we could update our tests to match the current state of the application. E2E tests are very sensitive to change: modifications in the tests themselves or in the developers' work (like changes on the homepage) can lead to confusion while maintaining the tests.
Some tests still don't pass
In this case, the fault may lie in the test data itself. If a test database has been prepared for testing purposes, you should review the accounts used in the tests. It may happen that in the next iteration the required permissions for different types of users change; for example, one account missing a newly introduced user group may cause a test to fail, which means a lot of debugging to find the cause. Therefore, it's worth preparing a separate account for each test. Then you can be sure that the tests are reliable and independent of each other at all times.
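One way to keep tests independent is to give every test a freshly created, uniquely named account. A minimal sketch of a naming helper; the prefix and naming scheme are assumptions, not our actual implementation, and the account still has to be created through whatever fixture mechanism the test environment provides:

```javascript
// Hypothetical helper: derive a unique account name per test, so permission
// changes made by one test can never break another.
let counter = 0;

function uniqueAccountName(testName) {
  counter += 1;
  // Turn the test name into a slug, e.g. "Purchase Flow" -> "purchase-flow"
  const slug = testName.toLowerCase().replace(/[^a-z0-9]+/g, '-');
  // Timestamp + counter keep names unique across runs and within one run.
  return `e2e-${slug}-${Date.now()}-${counter}`;
}
```

The name also makes debugging easier: when a test fails, the account it used points straight back to the scenario that created it.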
The framework that was used to create tests was updated
A framework update can have pros and cons. In our case, the changes were mostly positive. Thanks to newly added functionality we can write our Gherkin files better; for example, the steps that click on page elements gained built-in internal waits.
See also: Software testing trends for 2019
As you probably noticed while reading this text, it took some time for the tests to start working again. Of course, it brought a lot of benefits: it allowed us to detect a regression that had appeared 4 months earlier (sic!) and it increased the quality of the product. However, remember that maintaining the tests on a regular basis will save you a lot of nerves. Verifying the application won't be so arduous and, most of all, it won't take that much time. Therefore, automated tests should be checked and updated after each iteration of the application. Writing automated tests is only half of the battle.