
Cloud applications launch faster. Can test automation achieve escape velocity too?


It would be difficult to find a viable company that doesn’t have any cloud migration strategy at all. 

Leading-edge firms will take an all-or-nothing approach to this migration -- insisting that all business functionality must eventually resolve into the consumption of some sort of elastic cloud, whether in a leading IaaS like AWS or Azure, or in on-premises or bespoke flavors of private cloud.

Even market laggards have a plan to move at least one application to an on-demand cloud instance -- dipping a toe in the water before taking the plunge. 

Businesses can accelerate development and modernize the delivery environment using cloud, but one thing never changes: Customers still expect software to function and perform as expected in production, no matter where it is deployed. 

They need applications to be highly available, secure and resilient, or they will go elsewhere. That leaves test automation as a critical gating function for success. 

When the cloud rate of release outpaces testing 

One Global 500 financial services firm, John Hancock, recently announced an estimated US$142M project to transition to a private cloud IaaS with cloud services provider CGI.

Make no mistake, much of that budget isn’t cloud fees, it’s labor. Rearchitecting enterprise applications for the cloud is obviously no small task. A daunting amount of work is required to integrate, test and continuously validate the applications, and this work is often rehashed when maintaining the tests after each release.  

But that’s nothing new. Such rework costs certainly existed before cloud. 

What changed? The rate of release is increasing exponentially in a modern cloud environment. DevOps teams are using infrastructure-as-code (IaC) definitions, fast service-based integration and data feeds, automated deployment pipelines and run-anywhere containerization. 

At a KubeCon keynote, Airbnb ops engineer Melanie Cebula said her team is pushing out more than 20,000 containerized releases a week -- and that was a year ago! While most enterprises will likely never touch the speeds of the Netflixes and Airbnbs of the world, we’re still looking at a thousand-fold increase in deployment velocity for any company doing cloud environments right.  

So where does that leave the rest of the world, companies who weren’t born in the cloud? 

Any senior developer or test engineer recognizes that code itself represents a liability. The more code you write, the more you need to test that code -- and the more test code you need to write and maintain over time. 

Reaching escape velocity is no longer just a matter of increasing the speed of deployments and releases -- not if we can't address the drag coefficient of maintaining the test code that accompanies them.

Reaching escape velocity against change 

As companies embrace the DevOps movement, they turn toward powerful automation pipelines for continuous release and environment deployment automation in dynamic cloud architectures. 

Testing needs to be a first-class aspect of that pipeline. Fail to test software in a realistic environment early and often, and the resulting failures that appear in production may be too costly to fix.

Most testing and development groups use a combination of commercial and open source tools, when practical, for test automation and for mocking dependencies (aka 'service virtualization'). The most popular open source web test automation tool is Selenium, which lets testers reproduce browser-based workflows against a system under test.

While core elements of Selenium have been around for years, partner activity and developer contributions to the project have heated up lately. Over the last 3-5 years, Selenium has become a part of the toolchain for a majority of business dev/test teams. 

No matter how much code-level, integration and performance testing the software team runs, functional testing from a user perspective with tools like Selenium is still the endgame. Repeatably testing a web UI across all target browsers and devices is critical to success. 

Now, repeatability -- that's where things get hairy. When back-end business logic and data are dynamically rendered in the web UI at runtime, countless anomalies appear on screen that break Selenium test scripts, whether those scripts were captured by browsing or manually modified by testers.

For instance, items may load in different positions on the page, or in a different order, or contain data values or images that are outside of the expectations of the Selenium test script. 
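To make that fragility concrete, here is a toy sketch in plain Python -- tuples stand in for rendered DOM elements, and the element names are hypothetical. A locator that depends on an element's position breaks the moment the load order changes, while a lookup keyed on a stable attribute survives:

```python
# Illustrative only: (element_id, text) tuples stand in for rendered DOM elements.
page_v1 = [("promo-banner", "Sale!"), ("login-btn", "Log in"), ("search-box", "")]

# After a back-end change, the same elements render in a different order.
page_v2 = [("login-btn", "Log in"), ("search-box", ""), ("promo-banner", "Sale!")]

def find_by_position(page, index):
    """Fragile: mimics a recorded, position-based locator (e.g. an indexed XPath)."""
    return page[index]

def find_by_id(page, element_id):
    """Resilient: mimics a locator keyed on a stable attribute."""
    return next(el for el in page if el[0] == element_id)

# The positional locator picks up the wrong element after the reorder...
assert find_by_position(page_v1, 1) == ("login-btn", "Log in")
assert find_by_position(page_v2, 1) != ("login-btn", "Log in")

# ...while the attribute-based locator still finds the right one.
assert find_by_id(page_v2, "login-btn") == ("login-btn", "Log in")
```

The same failure mode applies to image comparisons and hard-coded expected data values: any assumption baked into the script at capture time becomes a false failure when the dynamic UI shifts.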

Testers and SREs (site reliability engineers) can try creating custom handlers to account for false failures, adjusting the code to allow flexibility along certain parameters. But before long, it becomes impossible to trust the results of any manually maintained functional test suite, because underlying changes in the cloud back-end architecture never stop generating inconsistencies in a dynamic web UI.
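A common shape for such a custom handler is a fallback chain: try a list of locator strategies in order and take the first that matches. The sketch below is a hedged illustration, not any particular tool's API -- a stub `find` function stands in for a real Selenium driver's `find_element`, and the locators are hypothetical:

```python
class ElementNotFound(Exception):
    """Raised when no locator strategy matches an element."""

def find_with_fallbacks(find, locators):
    """Try each (strategy, value) locator in order; return the first hit.

    `find` stands in for a driver's element lookup and should raise
    ElementNotFound (or return None) when a locator misses.
    """
    for strategy, value in locators:
        try:
            element = find(strategy, value)
            if element is not None:
                return element
        except ElementNotFound:
            continue
    raise ElementNotFound(f"No locator matched: {locators}")

# Stub 'driver' for illustration: only the CSS locator still matches.
def stub_find(strategy, value):
    page = {("css", "#login-btn"): "<button id='login-btn'>"}
    if (strategy, value) not in page:
        raise ElementNotFound(value)
    return page[(strategy, value)]

element = find_with_fallbacks(
    stub_find,
    [("xpath", "/html/body/div[2]/button"),  # recorded locator, now stale
     ("css", "#login-btn")],                 # fallback keyed on a stable attribute
)
assert element == "<button id='login-btn'>"
```

Handlers like this buy time, but every fallback is one more piece of test code to maintain by hand -- which is exactly the drag the next section addresses.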

Make testing as resilient as the architecture 

The best way to counteract the cloud’s constant outpacing of application testing? Apply the same DevOps principle of ‘automate everything’ to the maintenance of the test automation itself!  
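In spirit, automating the maintenance means the suite repairs its own locators. This toy sketch shows the idea under heavy simplification -- pure Python, hypothetical data, and naive attribute overlap where commercial tools apply far richer AI-based matching: when a stored locator goes stale, re-identify the element from the attributes recorded at the last passing run, and keep the healed locator for next time.

```python
def heal_locator(stored, page, recorded_attrs):
    """Return a working locator, healing the stored one if it went stale.

    `page` maps locator -> element attributes in the current UI;
    `recorded_attrs` are the attributes captured when the test last passed.
    """
    if stored in page:
        return stored  # still valid, nothing to heal

    def overlap(attrs):
        # Naive similarity: count attributes shared with the recorded snapshot.
        return len(set(attrs.items()) & set(recorded_attrs.items()))

    best = max(page, key=lambda loc: overlap(page[loc]))
    if overlap(page[best]) == 0:
        raise LookupError("no candidate shares any recorded attribute")
    return best  # healed locator, persisted for subsequent runs

page_after_release = {
    "#signin": {"text": "Log in", "role": "button"},
    "#search": {"text": "", "role": "textbox"},
}
recorded = {"text": "Log in", "role": "button"}

# The old locator '#login-btn' disappeared in the new release,
# but the element it pointed at is still recognizable by its attributes.
healed = heal_locator("#login-btn", page_after_release, recorded)
assert healed == "#signin"
```

The point of the sketch is the workflow, not the matching heuristic: the test fixes itself and records the fix, so a human never edits the locator at all.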

While there are some proprietary automation tools on the market that cover the whole test lifecycle, there are also approaches that augment the capabilities of testers using open source tools. 

One major travel corporation was facing exactly such a challenge as they started migrating a business-critical loyalty program app off existing legacy server clusters to a reserved cloud application instance with public cloud overflow capabilities. 

The initial release was fully tested and quite successful, but as additional customers were added over the course of three more point releases, the first suites of Selenium tests began failing at a very high rate. The labor cost of changing tests, along with increased cloud costs from redundant compute usage, was starting to sour management's view of the project.

Fortunately, they were able to turn to Parasoft Selenic, a tool that enhances Selenium test suites by applying an AI-based approach to interpreting the objects on the page and 'self-healing' the tests to adapt to changing conditions observed in the web UI.

This level of test resiliency reduced the firm’s test maintenance costs by as much as 75 percent, while allowing them to complete their go-live with Selenium and Selenic two weeks early with far less failure risk down the road as the migration marches on. 

The Intellyx Take 

Companies have clear intentions for moving applications to the cloud, but the consequences and costs of the move are far less clear.  

If you are going to modernize existing monoliths and release new functionality faster to meet customer needs, you still need to do lots of functional and regression testing. So much testing, in fact, that even the most skilled and efficient test engineers cannot code and run enough tests to keep up. 

Ensuring test resiliency -- by having tests that can test themselves, and heal themselves -- is the only way testing can keep pace with the rapid changes the cloud brings. 

Next up: Avoiding Open Source Drama: Mitigating the risk of developing with open source software. 

