Tips and Tricks to Make UI Recording Tools Less Painful
UI recording tools can be frustrating, but they don't have to be. With some quick tips and tricks to make the edge cases less painful, you will be happier scaling your UI test automation.
Testers adopt test automation, specifically UI test automation, to speed up repetitive testing actions and focus manual work on finding new and interesting issues. With UI test automation, you can define the different paths that you want to take through the application, and, instead of a human doing the job, the machine can validate the application in a continuous and automated way.
The primary way to build an automated test is by writing code. This can be unfortunate, because a tester who knows how to traverse the application by hand has to either learn how to write code using a framework like Selenium or use a proprietary tool like UFT to create an automated workflow that mirrors the one they were doing by hand. This can become challenging quickly, since code is error-prone by nature and writing automated tests can require a fairly sophisticated skill set.
To ease the challenge, many tools (Parasoft's UI test automation tool included) include the capability to record and play back a web scenario. These tools are great in theory, but a big problem is that most recorders fail to record and play back the exact scenario 80% of the time. This is an unfortunate reality, but there's a light at the end of the tunnel. Here are some of the common challenges faced by all recorders, and some tips and tricks you can adopt to fix them.
When the web application behaves differently during playback than when you recorded it...
Inconsistency in web application behavior is the number one challenge you will face with record-and-playback technologies, and can be quite difficult to overcome. Here are a few different reasons this can happen, as well as some techniques to help you diagnose and potentially fix the issues.
Issue #1: The application was recorded in a particular state, even though you didn’t know it.
Software developers use browser cookies to store session information for a particular user of a web application, or to track sessions between related applications. Cookies can be used to retrieve information about the user without needing to store the information in a back-end system. A cookie records data such as the fact that the user’s browser has previously visited the web application, or information about the user’s current session.
For automation purposes, existing cookies can easily cause a problem. Test automation frameworks such as Selenium often start the browser in a “clean” state where all cookies have been removed. If you recorded a test with cookies being present, the web application will behave differently during playback than it did when you recorded it, and the recorded test may fail.
One example is session data stored in a cookie that allows a user to stay logged into a web application between browser sessions. If you start recording when you are already logged in, no login information will be captured in the recorded test case. During playback, no cookies will be present so the application will immediately go to a login page, and the test will fail.
The same kind of issue will occur when elements appear on the page during playback that didn't appear during recording. Many web applications show a notification on the first visit by a user, such as a GDPR notification about the application’s usage of cookies, or a popup div that displays a special deal, sale, or newsletter signup opportunity. Playback of recorded scenarios in this case will fail because the additional page elements did not appear during recording and so the test does not know how to interact with the additional elements.
Here’s how to make that less painful...
Solution: Record and playback in incognito/private mode
Most browsers can be set to launch in incognito/private mode. When running in this mode, cookies and profile information are removed, so the web application will behave as if it is your first visit and you are not logged in. Record your test scenarios using a browser that has been launched in incognito/private mode, and you will have a clean baseline for your recorded test scenarios.
Here's a great article that shows how to configure this mode in browsers other than Chrome.
Once you have created your test script, you typically can allow it to play back in normal mode. However, there are certain cases where it may be beneficial to play back in incognito/private mode. One example of this would be a web application that uses your current location to customize the experience on the page. I have seen cases where the current location retrieved during playback of a Selenium test differed from the location retrieved during recording (I’m not sure why). Playing back in incognito/private mode will prevent the browser from getting your current location and will stabilize your test scenario. In Selenium, you can instruct playback to happen in incognito/private mode -- check out these handy links for Chrome and Firefox.
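As a sketch of what this looks like in Selenium's Java bindings, the options below launch Chrome in incognito mode and Firefox in private mode (driver setup and paths are assumed to be configured elsewhere):

```java
// Launch Chrome in incognito mode so no cookies or profile data carry over.
ChromeOptions chromeOptions = new ChromeOptions();
chromeOptions.addArguments("--incognito");
WebDriver chromeDriver = new ChromeDriver(chromeOptions);

// The Firefox equivalent uses the -private flag.
FirefoxOptions firefoxOptions = new FirefoxOptions();
firefoxOptions.addArguments("-private");
WebDriver firefoxDriver = new FirefoxDriver(firefoxOptions);
```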
Solution: Set specific cookies
This might be useful if you require certain cookies to be in place to ensure the playback starts with the web application in a specific state. Here’s a great forum post that outlines the process for capturing and setting specific cookies on playback.
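A minimal sketch of the approach in Selenium Java (the cookie name and value here are hypothetical placeholders for whatever your application actually requires):

```java
// Cookies can only be set for the current domain, so navigate to the
// application first, then add the cookie before the test steps run.
driver.get("https://example.com");
Cookie sessionCookie = new Cookie("SESSION_ID", "abc123"); // hypothetical name/value
driver.manage().addCookie(sessionCookie);
driver.navigate().refresh(); // reload so the application sees the cookie
```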
Issue #2: Changing screen sizes
The modern web application development practice of responsive web design causes problems during automated test creation. In order to make a web application that works for both mobile and desktop, developers create responsive applications that change depending on the screen size. You see this when you resize a window and the UI completely changes. Imagine what that does to an automated test!
If you record your test in a maximized window and then play it back in a half-size window, page elements could be hidden or obscured due to the browser size differences. A similar effect happens when the resolution on your playback machine is different than it was on your recording machine. Either way, it has quite the impact on your test automation.
Here's how to make that less painful...
Solution: Maximize your browser on playback
You can use Selenium commands to maximize the browser size:
- Use the Selenium WebDriver API to maximize the window (works in all browsers supported by Selenium)
- For Chrome, save a step in your test by configuring the WebDriver ChromeOptions to start the browser maximized
- Check out this useful tutorial for other browser size commands
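The first two options above can be sketched as follows in Selenium Java:

```java
// Option 1: maximize after launch - works in all Selenium-supported browsers.
driver.manage().window().maximize();

// Option 2 (Chrome only): start the browser already maximized.
ChromeOptions options = new ChromeOptions();
options.addArguments("--start-maximized");
WebDriver chromeDriver = new ChromeDriver(options);
```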
Maximizing the browser ensures that the page is full size, and elements are arranged on the page for that configuration. Assuming you recorded with the screen size at maximum, the elements will generally be consistent between record and playback, even if the machine doing playback uses a different resolution.
When tests fail sporadically or behave differently due to timing issues...
As a human, when I use a web application, I look for specific elements and I interact with those elements when they are “ready.” In an automated test, it’s a little more difficult for the machine to know when to do things vs. when to wait.
The automated testing tool executes a test scenario as a series of steps. For example, when executing a shopping cart test against Amazon, it may come to a point where the next step is to add an item to the cart and then click on the cart button. The problem is that the cart button might not be available immediately after the item is added, and there may be a moment where the icon changes. The test script needs to know exactly the right amount of time to wait, or the conditions to look for in order to move forward. The configuration in the test that handles this is typically called a "wait condition". Recorders have a very hard time understanding what wait conditions to create in the test scripts, so you have to manually build sophisticated wait conditions into your tests.
Here is how to make that less painful...
Solution: Use explicit waits
Explicit waits are wait conditions set on an individual element that instruct Selenium WebDriver to wait until a specific condition is met - such as waiting for an element to be present or visible on the page. The wait condition also specifies a maximum amount of time to wait for the condition to be met.
Use explicit waits whenever the test attempts to find an element. If no wait conditions are used at all, the test will fail if the element is not immediately present in the proper state. Selenium WebDriver also supports implicit wait conditions, which are set globally for an entire test scenario and used whenever the test tries to find an element. However, implicit waits are not recommended. See the Selenium documentation on explicit and implicit waits.
When using explicit waits, exceptions can be thrown by Selenium that interrupt the wait condition - but you actually want the wait condition to continue in those cases. You handle this by configuring the explicit waits to ignore particular exceptions. Some common exceptions that need to be ignored are NoSuchElementException, StaleElementReferenceException, and ElementClickInterceptedException.
Here is a good example of an explicit wait that ignores one of these exceptions:
WebDriverWait wait = new WebDriverWait(driver, DEFAULT_WAIT_FOR_ELEMENT_TIMEOUT);
wait.ignoring(StaleElementReferenceException.class)
    .until(ExpectedConditions.elementToBeClickable(By.id("add-to-cart"))); // hypothetical locator
When element locators break due to changes in unrelated parts of the page...
Web application recorders try to build a good element locator for each page element referenced in the recorded test, but in many cases the locators that get recorded are not ideal. For example, the recorded locator may reference dynamic information that changes each time the page is visited, or be prone to breaking when unrelated parts of the page are later modified.
Common examples might be:
- IDs that change every time you visit the page, which is very common in Salesforce and SAP applications:
<div id="gwt-uid-198">Click here for more info</div>
- Text that differs based on the specific user that is logged in:
<a href="/logout.jsp">Logout user (bob)</a>
- Position-based locators, such as an xpath that will no longer work if page elements referenced by segments in the xpath are later moved or otherwise modified in the web application.
Here's how to make that less painful...
Solution: Create intelligent locators
It is typically pretty easy to identify element locators that contain dynamic information. As soon as you play back your recorded scenario, these locators will fail because the dynamic information on which they depend is different. To handle these cases, you need to find uniquely-identifying attributes of the page element that you can use in the element locator instead of the dynamic attribute that was recorded.
One way to do this is to manually navigate in your browser to the page where the element appears, and then right-click on the element and choose the menu option to inspect the element (called Inspect in Chrome and Inspect Element in Firefox). This will take you to a DOM structure that will allow you to view the underlying DOM and HTML code for the element. In this view you can identify other attributes by which to identify the element. Common attributes to look for are "class", "name", "title", and "alt", but these may also be dynamic, so choose one or more that uniquely identify the element. In some cases the attribute that best identifies the element is found on a parent element, so you may want to create a relative xpath that starts from that parent.
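To illustrate, the hypothetical markup below (modeled on the earlier examples) has a dynamic id on the link but a stable class on its parent. The JDK's built-in XPath engine is enough to verify that a relative xpath anchored on the parent still finds the element, without depending on the dynamic id:

```java
import java.io.StringReader;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Element;
import org.xml.sax.InputSource;

public class RelativeXPathDemo {
    // Hypothetical markup: the anchor's id changes on every visit,
    // but the parent div's class attribute is stable.
    static final String PAGE =
        "<div class='user-panel'><a id='gwt-uid-198' href='/logout.jsp'>Logout</a></div>";

    // Locate the anchor via its parent's stable attribute, not its dynamic id.
    static String findLogoutHref() throws Exception {
        Element link = (Element) XPathFactory.newInstance().newXPath().evaluate(
            "//div[@class='user-panel']/a",
            new InputSource(new StringReader(PAGE)),
            XPathConstants.NODE);
        return link.getAttribute("href");
    }

    public static void main(String[] args) throws Exception {
        System.out.println(findLogoutHref()); // prints /logout.jsp
    }
}
```

The same xpath dropped into a Selenium locator would keep working even when the framework regenerates the id.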
Another approach is to use tooling that can give you various locator options. Some specific tools that come to mind are Selenium IDE and TruePath, both of which generate multiple different locators for an element. You can use a locator as is, or choose one and modify it to suit your needs.
A third option is to use Parasoft Selenic to generate recommendations for better locators when tests fail due to bad locators or to self-heal broken locators at runtime. Selenic will use historical data from previous test executions as well as information about the current state of the page to suggest a set of different locators that you can use, along with its confidence about each locator.
A Few Extra Tips
In addition to what I’ve already mentioned, there are a number of other common situations that arise when creating test scenarios from recording. Here are a few additional tips to make using UI recording tools less painful.
Recording Hover Actions
It’s traditionally difficult for record and playback tools to capture page elements that appear when a user positions the mouse over another element. The hover action typically does not get recorded, but interaction with the element that appears does get recorded. When running the test scenario, the hover action does not happen and the next page element defined in the test scenario never appears - which makes the test fail.
To make this less painful, pay attention during recording to where hover actions reveal additional elements, and when you see one of these cases, click on the element instead of simply hovering over it. This will cause a click action to get recorded into your test for that page element. If you want the test to hover instead of clicking on the element, change the click action in the test to a hover action after the fact. Here's an article to help you out with this.
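If you do convert the click into a hover, Selenium's Actions API is the usual way to perform it. A sketch, with hypothetical locators standing in for your application's real elements:

```java
// Hover over the menu so the hidden submenu appears, then click the revealed item.
Actions actions = new Actions(driver);
WebElement menu = driver.findElement(By.id("account-menu"));   // hypothetical id
actions.moveToElement(menu).perform();
driver.findElement(By.linkText("Order History")).click();      // hypothetical link text
```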
Some Actions Not Recorded
From time to time while recording, elements just don't get recorded. Quite often this is due to the UI framework building the elements on the page in a unique way.
To make this less painful, keep track of the specific actions you are performing on your web application while recording. After recording, identify in your Selenium script where the action should have been and add it in manually. You can identify the locator to use by navigating in the browser to the page where the action was missed, and then use one of the browser extensions mentioned earlier to capture a locator and manually put it into the test.
Creating test scenarios using UI recording tools can be painful, but it doesn't have to be. Leverage these techniques to set yourself up for success with your UI test automation practice. Happy testing!
Nathan is Director of Development at Parasoft. He and his teams develop product capabilities in the areas of UI testing (Selenic), API testing (SOAtest), service virtualization (Virtualize), and unit testing (Jtest). He has been with Parasoft since 2000.