
Testing Alone Is Losing the Battle

But there may be one thing that can change it all...

Herminio Vazquez, an IOVIO consultant, shares how he worked with ING Mortgages Netherlands to help modernize the delivery of their finance applications by taking full control of their test environments with service virtualization.

It appears that the secret to a successful software development team is delivering competently, continuously, and on schedule. One could say that digital transformation is unlike nature, where time follows its own rhythm. Instead, it's like a train with no stops. Miss it, and it's game over.

What's Going On?

Over and over, we hear the words: time to market, richer customer experience, more features, extended capabilities, harmonious journeys. In summary, change. Everyone is in the change business; in the change-fast business, to be precise. Change has become more and more important for organizations and their teams.

Change has become so ubiquitous that IT departments now operate with dedicated Run and Change teams, and, if budgets allow it, an experimentation group where new ideas and innovation take place.

Change Is Everywhere

Change is relevant because through it we open new opportunities, discover new approaches, and eventually evolve. However, change also brings uncertainty and risk. The risk business deserves an article of its own, but to keep things simple, there is one old friend that helps us deal with change and risk: testing.

Testing allows us to validate our expectations, to confirm that our features, customer journeys, epics, stories, or requirements (depending on your methodology) are fit for purpose, and are complete.

But something is transforming the value of testing within organizations, from a competitive advantage and reputation shield into a never-ending activity that slows teams down.

The End of Testing as We Know It

It is well known that testing alone is useless if it disregards well-structured, orchestrated environment management, followed by surgical, methodical handling of test data. However, most teams focus only on the art of choosing which scenarios to cover during testing. Code coverage, in this author's humble opinion, remains one of the best-known key performance indicators for testing teams.

This is leading teams to opt for different approaches that let them articulate the testing process, aided by solutions that deal with change. Modern toolsets and methods resort to data solutions as the vehicle for catching up with the digital transformation train.

The market is now inundated with machine learning-ready, artificial intelligence-powered, robotic automation-enabled solutions and the like, all arguing that by collecting data and projecting it into lower dimensions, it is possible to make inferences that render testing more reliable and less time-consuming.

Trapped in Local Minimum: Your Environments

Teams are trapped in a local minimum, struggling to cope with parallel streams of work, limited environments, and data constraints. If these sentences resonate with how testing feels in your organization, don't feel bad. You're not alone.

If you want to escape this vicious loop, this article is for you. Complexity theory, our grandparents, proverbs, and fortune cookies all teach us that breaking complex problems into smaller pieces is a good strategy for overcoming them. Remember? Divide and conquer.

Let's begin with the environment problem: the typical road to production is linked to the dev-test-acceptance-production (DTAP) highway. If we're honest, deploying software in 2020 is not a big deal. In fact, you should be deploying environments (maybe you already are) on the container wave.


Figure 1: Don't get stuck on the Dev-Test-Acceptance-Production highway.

No, the real problems are the external environments and data dependencies your system or application requires. Information solutions depend on an extensive catalog of services, either in-house or third-party, with a multitude of protocols and a great variety of data in each of them.

This is where the real problem lies: how do you decouple the hard dependencies on data and external environments?

Welcome: Data Orchestration

If this term is new to you, I bet the only references you'll find are about extract, transform, load (ETL) or cloud-related subjects, all completely outside the context of this article.

In reality, data orchestration has been with us for a long time, perhaps under an umbrella of different names and products.

Let's begin with the first: mocks and stubs. Most likely you've heard these terms; they refer to the early practice of building interfaces that allow a business process to continue despite external dependencies. These dependencies are twofold: the logic (environment/app) and the data it delivers to your system.

Developing mocks is useful for unit testing. It validates data contracts and protocols at a very low level. However, you can't control test environments by using mocks alone. You need more sophisticated technology.
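As a concrete illustration of that low-level use of mocks, here is a minimal Python sketch using the standard library's unittest.mock. The QuoteService class and its get_rate call are hypothetical stand-ins for your own business logic and its external rate provider:

```python
# A minimal sketch of stubbing an external dependency in a unit test.
# QuoteService and get_rate are hypothetical, not a real ING API.
from unittest.mock import Mock

class QuoteService:
    """Business logic that depends on an external rate provider."""
    def __init__(self, rate_client):
        self.rate_client = rate_client

    def monthly_payment(self, principal, years):
        rate = self.rate_client.get_rate()  # external call, stubbed in tests
        monthly_rate = rate / 12
        n = years * 12
        # Standard annuity formula for a fixed-rate mortgage payment.
        return principal * monthly_rate / (1 - (1 + monthly_rate) ** -n)

# Stub the external client: no network, fully controlled data.
stub = Mock()
stub.get_rate.return_value = 0.036  # canned annual rate

service = QuoteService(stub)
payment = service.monthly_payment(300_000, 30)
stub.get_rate.assert_called_once()  # contract check: exactly one upstream call
print(round(payment, 2))
```

The stub gives the test full control over the data the dependency returns, which is exactly what breaks down once dozens of real services and shared environments enter the picture.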

Enterprise systems are built upon smaller systems with multiple competencies, geographies, and technologies. You may be in the hybrid world.

When two main stakeholders in your organization argue over providers, the safe option is not to put all your eggs in one basket.

In any case, customer relationship management systems are linked to workflow management systems, to finance systems, to audit systems, and so on. It all depends on your line of business.

Coordinating data among all these systems requires skill and processes. It also requires the right tools.

Creating, conveying, and preserving data across a testing landscape is what I call data orchestration, and it is the only way I formally recognize to produce valuable test results.

My partner is a bioengineer. (Un)fortunately for her, she deals not with lines of code or bytes of software but with cells: those things that live or die under certain conditions. The only analogous concept I can think of in our world is bit rot. Anyway...

Her experiments and testing scenarios are designed meticulously to preserve environmental conditions. Data collection always comes with control, validation, and test samples. Software development is not as mature as the life sciences, but wouldn't it be fantastic to advance our practices to the point where failure is not an option?

Okay, banners aside, how can we achieve the reproduction of valid scenarios at a great speed, collapsing the change entropy? Get ready. You're going to like this.

The Trifecta

If environments, data, and testing logic are the perfect combo, why is there no solution that plays well with all of them? Well, there is. It's known as service virtualization. It doesn't help that the term virtualization is typically associated with provisioning virtual appliances, which is a hardware and scaling problem, not a testing or change problem. Not to mention the to-MAY-toes/to-MAH-toes trap.

Service virtualization is an enterprise-grade solution that reduces dependencies on data and environments in complex enterprise systems.

The enterprise-grade tag may sound unnecessary. I use it not because I want to sound more profound (posts tend to use the terms at scale or best practice for no particular reason), but because if you:

  • Work in an environment where multiple people touch the same code base;
  • Share and control versions of your test assets; and
  • Want to integrate your solutions into your existing technology stack.

Then, it's not only about coverage of communication paradigms: request:response, publish:subscribe, and so on. It is about adoption, learning curves, training material, support, examples, roadmap, and cost.

Needless to say, technology adoption in IT is now like the bank robbery plan: it must come with a clear entry and exit strategy. Before you get in, you need to know how to get out.

Facilitating the orchestration of data across an enterprise landscape is not complex. It's just complicated. What this means is that it's not rocket science — it's purely a series of definite tasks that are solvable and well known. 

It's about versioning and controlling access to data sets.

It's about allocation and access controls, roles, mappings, and most of all, it's about adapting to change, not about trying to stop it.
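To make the versioning point concrete, here is a toy Python sketch. The data-set names and the in-memory registry are invented for illustration; real tooling would keep data-set versions in source control or a dedicated data service:

```python
# A minimal sketch of versioned test data sets: each suite pins the
# data-set version it was written against, so data changes don't
# silently break existing tests. All names here are illustrative.
TEST_DATA_SETS = {
    ("customers", "v1"): [
        {"id": 1, "name": "A. Jansen", "segment": "retail"},
    ],
    ("customers", "v2"): [
        {"id": 1, "name": "A. Jansen", "segment": "retail"},
        {"id": 2, "name": "B. de Vries", "segment": "private"},
    ],
}

def load_test_data(name, version):
    """Return an immutable copy of a pinned data-set version."""
    try:
        rows = TEST_DATA_SETS[(name, version)]
    except KeyError:
        raise LookupError(f"no data set {name!r} at version {version!r}")
    # Copies, so callers cannot mutate the shared source of truth.
    return tuple(dict(row) for row in rows)

# An older suite keeps running against v1 while new tests adopt v2:
# change is absorbed by adding a version, not by editing data in place.
v1 = load_test_data("customers", "v1")
v2 = load_test_data("customers", "v2")
print(len(v1), len(v2))
```

The design choice is the point: adapting to change means adding new versions alongside old ones, not overwriting the data your existing tests depend on.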

Service virtualization is about understanding that replacing external services with smaller versions of themselves, each with its own smaller data set, is the only requirement for stable testing suites, and for isolating changes.
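As a toy illustration of that idea, the snippet below stands up a tiny stand-in HTTP service with its own small canned data set, using only Python's standard library. The endpoints and payloads are invented; real service virtualization products add traffic recording, broad protocol coverage, and management on top:

```python
# A toy "virtual service": a local HTTP stand-in with its own small
# data set, replacing a real external rate provider during testing.
# Endpoint paths and payloads are illustrative, not a real API.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# The virtual service's data set: small, explicit, easy to version.
CANNED_RESPONSES = {
    "/rates/fixed-30y": {"product": "fixed-30y", "rate": 0.036},
    "/rates/fixed-10y": {"product": "fixed-10y", "rate": 0.029},
}

class VirtualServiceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        payload = CANNED_RESPONSES.get(self.path)
        if payload is None:
            self.send_response(404)
            self.end_headers()
            return
        body = json.dumps(payload).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep test output quiet
        pass

# Port 0 asks the OS for any free port; no clash with real services.
server = HTTPServer(("127.0.0.1", 0), VirtualServiceHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The system under test now points at the virtual service instead of
# the real external dependency.
url = f"http://127.0.0.1:{server.server_port}/rates/fixed-30y"
rate = json.load(urlopen(url))["rate"]
print(rate)
server.shutdown()
```

Because the stand-in owns its data set, the test suite stays stable no matter what happens to the real provider's environment or data.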

Consolidating a Solution for ING Mortgages Netherlands Using Containers 

So testing has an ally against rapid change. And it's not the vectorization of test cases or those old-fashioned orthogonality techniques that are the product of great research but unlikely to be understood by feature-greedy stakeholders.

If you put the data keyword in the mix instead of those coined, old-fashioned terms, stakeholders are far less likely than before to argue about the value of consolidating and controlling data, and of capitalizing on data assets.

IOVIO, together with Parasoft, brought those concepts alive for ING in their Mortgages tribe. They consolidated a solution that containerized services and made 60% of the complex mortgage ecosystem testable in complete isolation, with data dependencies and environments under full control.

Details about how IOVIO and Parasoft helped ING on their journey to conquer software development testing challenges in the face of change are coming soon. In the meantime, check out Parasoft's recorded webinar about implementing service virtualization for financial services. If you want to know more, or want to reach out, use the contact details below.

Webinar: Implement Service Virtualization for Financial Services
