Scaling service virtualization: one person’s journey

Apr 6, 2018

When you decide it’s time to, say, go on a diet, you might ask people who have been successful at losing weight what they did to get healthy. And you will probably get an answer like, “You need to stop drinking alcohol, start eating kale, go to bed at 8 o’clock, and walk 5 miles every day!” And while it might make sense to embrace all of these activities in order to adopt a healthy lifestyle, if you try to adopt them all at once on day one, there is a 99% chance you’re going to fail. Instead, you need to approach this modified lifestyle slowly, step by step. Add an extra exercise here, make a healthy food choice there... slowly get yourself up to a level where you can really do this like a pro.

Service virtualization is no different. I’ve worked with numerous customers over the years, helping them adopt service virtualization, and I keep seeing the same thing. Most organizations want to take the big bang approach, bringing in a fully-deployed solution that spans multiple teams and is integrated into their continuous delivery pipeline. And while, yes, all of those things are essential to fully realizing the potential return on investment that service virtualization can give you, if you try to do them all on day one, there is a 99% chance your service virtualization initiative is going to fail. So what do you do?

Let me take you on a journey. We’re going to follow one individual as she grows from a single free service virtualization tool all the way to a full deployment of service virtualization, integrated into her organization’s continuous delivery pipeline. The following is based on a true story.

[Image: Scaling SV Blog Image 1]

Meet Sally

Meet Sally the developer. Sally was tasked with creating microservices, and as she started developing them, she had a problem writing her unit tests because her application was tightly connected to a series of other microservices.

Being smart and able to develop at a much faster rate than her colleagues, Sally soon reached the point where she needed to start testing the interactions between these different pieces. She initially used mocks to isolate herself during the testing phase, but she was spending a lot of time hand-coding the logic those mocks needed because the actual applications she was stubbing out were fairly complex.
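
To make that cost concrete, here’s a rough sketch (in Python, with a hypothetical account-lookup service and made-up payloads) of the kind of hand-rolled stub Sally was maintaining -- every new scenario her tests needed meant another hard-coded entry or branch:

```python
# Hand-rolled stub for a hypothetical downstream "accounts" service.
# Every new test scenario means another hard-coded entry or branch here,
# which is the maintenance cost that pushed Sally toward service virtualization.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

CANNED = {
    "1001": {"id": "1001", "status": "ACTIVE", "balance": 250.00},
    "1002": {"id": "1002", "status": "SUSPENDED", "balance": 0.00},
}

class AccountStub(BaseHTTPRequestHandler):
    def do_GET(self):
        # Expecting paths like /accounts/1001; anything else gets a 404.
        parts = self.path.strip("/").split("/")
        if len(parts) == 2 and parts[0] == "accounts" and parts[1] in CANNED:
            status, body = 200, json.dumps(CANNED[parts[1]])
        else:
            status, body = 404, json.dumps({"error": "not found"})
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body.encode())

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), AccountStub).serve_forever()
```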

So she did some initial research on service virtualization and found that it let her create a much more sophisticated virtual service in a very short amount of time. She downloaded the free version of Parasoft Virtualize, which provided the service virtualization she needed. She got really good at creating virtual services and was able to easily modify them as the actual services underwent change. As a result, she could do all of her testing and development in a completely isolated environment.

As she was discussing these advantages with some of her coworkers, they too wanted to leverage some of the services she had created: they were common services shared between the developers, so her coworkers could simply point their applications at Sally’s machine and reap the benefits. So they too downloaded the free Community Edition of Parasoft Virtualize, and the team lived quite successfully this way for a fair bit of time -- making new services, adjusting those services, and consuming those services, all from their free desktops. This allowed the team to make significant progress in development and testing because it removed many of the bottlenecks in their environment. The team became popular for its agility and got all of the best projects.

Meet Sally's Management

One day, Sally’s team was approached by management, who were curious about the service virtualization solution the team had been using to build and test applications more rapidly. They wanted to discuss its practical application in the larger environment. There had been some buzz around outages in the integration and production environments caused by legacy applications. The applications relied on a series of Oracle databases as well as a complex ESB and a mainframe. Those systems were difficult to test against for a number of reasons:

  • The databases rarely had the correct data. ETL jobs would kick off and refresh the data, but only once a week, so teams often had to scramble to request updated data in those databases.
  • Some of the other databases could have been cloned, but the licensing and additional hardware costs were prohibitive, so cloning never really got pushed through.
  • The ESB was undergoing changes as a side effect of the microservice transformation and many of the services it exposed were unavailable or unstable.
  • To save costs, the organization would take the mainframe offline during the evening hours. This meant offshore testing couldn’t take place, and when the automation kicked in the next morning, development teams would discover that last night’s code check-ins had broken where they interfaced with the mainframe -- losing an entire 12 hours from their cycle.

The management team was interested in talking with Sally and her team to see if the service virtualization solution they had been using as part of their development and test cycles could help with these particular cases. Sally and her team were able to show that it was easy to simulate the services behind the ESB, because they were basic REST and SOAP services, plus a couple of JMS and MQ services carrying homegrown XML. But to really tackle the legacy hardware, they needed to supercharge their virtualization desktop, so they went ahead and got the full version of Parasoft Virtualize.
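
As a rough illustration of why the ESB-facing services were the easy part (this is a generic sketch, not how Parasoft Virtualize does it, and the operation and field names are made up), a SOAP-style endpoint can be faked with a canned XML envelope:

```python
# Generic illustration (not Parasoft Virtualize itself): a fake SOAP-style
# endpoint that answers every request with a canned XML envelope. The
# operation and field names are made up for this example.
from http.server import BaseHTTPRequestHandler, HTTPServer

SOAP_RESPONSE = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetOrderStatusResponse>
      <orderId>42</orderId>
      <status>SHIPPED</status>
    </GetOrderStatusResponse>
  </soap:Body>
</soap:Envelope>"""

class SoapStub(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read and discard the incoming request body, then reply with the canned envelope.
        self.rfile.read(int(self.headers.get("Content-Length", 0)))
        self.send_response(200)
        self.send_header("Content-Type", "text/xml")
        self.end_headers()
        self.wfile.write(SOAP_RESPONSE.encode())

if __name__ == "__main__":
    HTTPServer(("localhost", 9090), SoapStub).serve_forever()
```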

[Image: Scaling SV Blog Image 2]

They were able to easily apply virtualization to the different challenges present in the use cases described by management. It took a few days to make sure that the virtual services satisfied all the different use cases, but they were able to unblock all of the challenges they had in those environments. This was one of the key turning points for the service virtualization movement in Sally’s organization, because they were able to leverage the team’s expertise with basic service virtualization to tackle more complicated challenges that had real costs associated with them. The efforts bubbled up to the management teams, and they decided it was time to build a service virtualization team within the organization that could be leveraged to build virtual services whenever these challenges arose.

The New Virtualization Team

Sally was a natural fit to lead the team, and so they established “Team Virtual Ninjas.” At this point, it became important to start building a process for onboarding virtualization initiatives, as well as creating acceptance criteria so that the team did not become a bottleneck. Governance became an important part of the conversation. The team set up a series of roles and responsibilities to ensure each virtualization project was successful. There were five roles:

  1. The Tester. Whenever you create a virtual service, you need a reason for virtualizing that particular component. Quite often the team would get requests to simulate an unreliable application in the environment. In the initial conversation with the requester, they would ask, “What is it that you can’t do?” This question was critical, because you must have clearly defined acceptance criteria in order to have a definition of “done” for a virtual asset. The tester is an essential part of this process because they define which test cases need to execute successfully; when those tests pass, the virtualization team knows it has created a successful virtual asset.
  2. The Developer. Virtual assets can be created with little to no understanding of the application you are virtualizing, but in order to create a virtual service with a minimal amount of effort, it helps to have domain knowledge about the application being simulated. So the developer became an essential part of the virtual asset creation process, explaining how the real services functioned so that, when the virtual services were created, the team understood why they behaved the way they did.
  3. Test Data Management. It’s arguable that many service virtualization challenges are actually test data challenges at their core, so test data management teams become important when building virtual assets. Most virtual assets are created by record and playback, so once the test cases are identified and the behavior is agreed upon, it’s important that the environment you’re going to record from has the proper test data at the time of recording. So while the test data management team had a minimal role in the virtualization process itself, it was crucial to bring them in before the initial recording took place.
  4. Operations. Virtual services replicate real services, so if you’ve created the virtual service right, a user may not actually know they’re hitting a simulation. As a result, the virtual service needs to be reachable at an endpoint in the environment, just as the actual service is. This can often be a roadblock to the virtualization process, because many people won’t have access to reconfigure the necessary connections to point the application at the virtual service’s endpoint. Parasoft Virtualize uses a mechanism called the proxy, which lets traffic flow through a man-in-the-middle that Sally’s team could control, but getting that initial connection set up was the responsibility of the ops team. Identifying the operations resource ahead of time, and agreeing up front that the initiative would be taking place, was the best way to ensure that when it came time to actually connect all the pipes together, the team would be ready and would understand what was taking place (see the configuration sketch after this list).
  5. Leadership. In order for any service virtualization project to be successful, management has to buy in. This wasn't difficult in Sally’s case because they started from the ground up and had proved significant value, but it was important to maintain continued focus from leadership for the team to function productively.
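
To illustrate the operations hand-off described in role 4, here’s a minimal sketch of the idea (the variable name and URLs are hypothetical): if the application under test reads its downstream endpoints from configuration, re-pointing it at a virtual service is an environment change rather than a code change.

```python
# Why the Operations role matters: if the application under test reads its
# downstream endpoints from configuration, ops can re-point it at a virtual
# service with an environment change instead of a code change.
# The variable name and URLs below are hypothetical.
import json
import os
import urllib.request

# Real service by default; ops overrides this to target the virtual service,
# e.g. PAYMENT_SERVICE_URL=http://virt-server:9080/payments
PAYMENT_SERVICE_URL = os.environ.get(
    "PAYMENT_SERVICE_URL", "https://payments.internal.example.com/api"
)

def charge(account_id: str, amount: float) -> dict:
    """Send a charge request to whichever endpoint the environment points at."""
    req = urllib.request.Request(
        f"{PAYMENT_SERVICE_URL}/charge",
        data=json.dumps({"account": account_id, "amount": amount}).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```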

Deployment

Once the governance process had been established, it was time to think about deployment. Each member of the Virtual Ninjas had the Parasoft Virtualize desktop software. They would create virtual services on their desktops and then make them available to users. This started to become complicated as the team grew more popular: if one of the team members had to shut down their machine or went on vacation, it would affect the users hitting their virtual services. Sally decided it was time to upgrade the deployment architecture once again, so they procured a virtualization staging server.

[Image: Scaling SV Blog Image 3]

This allowed each member of the team to join forces and share their virtual assets. The server was “always on” and acted as a virtual artifact library. The server was connected to source control, so as different versions of the services were deployed to the server, they would automatically be checked in. This allowed the team to have a central point of truth for all virtual assets, and no one had to guess where the most recent version was.

The team happily hummed along for several months, solving big and meaningful challenges for the organization. It had grown in size and added a few more members. In order to boost the team’s visibility and awareness (and also increase the size of its budget), Sally implemented the “Hoo-Rah” program. Every time the team built something with quantifiable ROI, they kept track of the gains and sent out a public email explaining what they had done and which teams had benefited. Examples of these hoo-rahs were:

  • The team simulated the stormtrooper SOAP services and the extension table in the main Oracle database, enabling an automated process to provision and test the payments service’s 111 combinations. The increased test throughput and automated test execution saved $27,950 for a project cycle.
  • The team simplified a cloud migration initiative by simulating services that were not yet ready to move into the cloud. This allowed the transformation to finish 2 weeks ahead of schedule, because they could validate in phases and keep functioning even though pieces were missing. This saved $45,875 in man-hours for the project.
  • The team proactively managed a 3rd-party service change by creating a virtual representation of the new service, delivering access to dev/test 2-6 weeks earlier. This change management reduced unplanned downtime associated with 3rd-party services by roughly 30%, for a program savings of $27,146.
  • The team simulated the member search service on the mainframe, which provided unique members when calling for accounts, significantly simplifying the test requirements for the process flow. Teams now have control over the data for the mainframe and database, and they can insert any type of behavior they’re looking for. This is projected to significantly reduce the 15,000 hours of unplanned outages from middleware.
  • The team successfully simulated 112 services required for the master key entry regression scenario. This allowed the team to deploy the virtual services around the key entry service and reduced the need for a physical performance environment, saving the organization $123,654 in capital expenditure costs designated for procuring an additional performance environment.

These “hoo-rah” emails were vital for bringing additional teams into the fold, but they also helped the business understand the importance of service virtualization to the test automation process.

Getting Value from Negative Simulation

Then, one evening in late summer, a member of the security team was auditing a critical application and discovered a potential attack vector into the system that, if exploited, could not only leak sensitive customer data but also force the organization out of compliance. If it wasn’t remediated quickly, the organization would have to notify the compliance committee and start the penalty process.

The team realized that if they could remediate the defect within a specific time window, they could push the changes to the production environment and all would be well. The challenge was that in order to reproduce the issue, they had to put many of their third-party payment systems into a state where they would return various error conditions and intentionally leak PII or customer data. The team did not have the ability to force these systems, which were outside of their control, into the state needed to expose the defect and validate the fixes they would put in place. Sally got the call in the middle of the evening and was asked to get to work. The team made quick work of reusing the virtual services they had already created for these third-party payment systems, putting them into a state where they returned the negative behavior. Because the application didn’t have to be redeployed, they could simply modify the simulated behavior as the developers made their changes, and work through all of the different combinations that led to the potential exploit. Needless to say, the team was successful in delivering a hot patch into production that saved the company millions.
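
The general idea is easy to sketch outside of any particular tool. In this illustrative Python stub (the endpoint names, modes, and payloads are all made up), the fake payment service exposes a runtime “mode” switch, so testers can start returning declines or timeouts to the application under test without redeploying anything:

```python
# Generic illustration of "negative" simulation (not Parasoft-specific): a fake
# payment endpoint with a runtime mode switch. Flipping the mode makes every
# later response an error, with no redeployment of the application under test.
# Endpoint names, modes, and payloads are made up for this example.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

MODE = {"current": "happy"}  # flipped at runtime via PUT /mode

RESPONSES = {
    "happy": (200, {"status": "APPROVED", "authCode": "A1B2"}),
    "declined": (200, {"status": "DECLINED", "reason": "INSUFFICIENT_FUNDS"}),
    "timeout": (504, {"error": "upstream timeout"}),
}

class PaymentStub(BaseHTTPRequestHandler):
    def do_PUT(self):
        # PUT /mode with body "declined" (or "timeout") changes all later replies.
        length = int(self.headers.get("Content-Length", 0))
        MODE["current"] = self.rfile.read(length).decode().strip() or "happy"
        self.send_response(204)
        self.end_headers()

    def do_POST(self):
        self.rfile.read(int(self.headers.get("Content-Length", 0)))  # discard request body
        code, body = RESPONSES.get(MODE["current"], RESPONSES["happy"])
        self.send_response(code)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(body).encode())

if __name__ == "__main__":
    HTTPServer(("localhost", 9090), PaymentStub).serve_forever()
```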

The Virtualization Center of Excellence

That was all the team needed to be thrust into the spotlight. They were now known as the “virtualization center of excellence,” and they were popular. Many developers and testers who had heard of the heroic efforts were contacting the team and asking for access to the software so they could make their own prototypes and validate negative and positive scenarios. Sally had the infrastructure to support it, but she didn’t necessarily need to give everyone the heavy hammer that was the professional desktop version. So the team upgraded their infrastructure again to include the Continuous Testing Platform, a centralized dashboard that gave any user in the organization access and enabled them to create virtual services and test cases right from their browser.

[Image: Scaling SV Blog Image 4]

This evolution of the deployment created a “hybrid model,” in which individual team members could work in a federated way -- creating their own virtual services for their needs, accessing them, and modifying them -- and when it came time to integrate those virtual services into the larger architecture, they had a mechanism to collaborate with the virtualization center of excellence. The team could add additional servers to support the load, as well as snap in performance servers when the performance team got on board.

[Image: Scaling SV Blog Image 5]

At this point, Sally had a comprehensive library of virtual assets, as well as corresponding automated test cases, with a library of test data feeding into both of these test artifacts. The majority of the actual service creation was being done by the individual teams, and Sally’s team was primarily responsible for orchestrating all of those different virtual services into an “environment.” An environment was really just a template of virtual assets, test cases, and test data arranged in a specific configuration to satisfy a testing initiative. They built many of these environment templates and aligned them to the different applications in the organization.
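
Conceptually, an environment template is just data. A hypothetical example (the field names, asset names, and files are invented for illustration) might look like this:

```python
# Hypothetical shape of an "environment" template: a named bundle of virtual
# assets, test data, and test suites arranged for one testing initiative.
# All names and values here are invented for illustration.
CLAIMS_REGRESSION_ENV = {
    "name": "claims-regression",
    "virtual_assets": [
        {"asset": "member-search-mainframe", "version": "1.4", "port": 9001},
        {"asset": "payments-esb", "version": "2.0", "port": 9002},
    ],
    "test_data": ["members_smoke.csv", "payments_error_matrix.csv"],
    "test_suites": ["claims_end_to_end", "payments_negative"],
}
```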

Whenever an application needed to be tested and the real environment wouldn’t suffice, the virtualization center of excellence would spin up an environment of virtual services and let the team members test against it. The teams became more and more reliant on virtual services as part of their test execution, and it was a natural transition into the continuous delivery pipeline.

The final and fully-realized deployment for virtualization at Sally’s organization looked like this:

[Image: Fully-realized virtualization deployment]

Individual team members would create the virtual services and test cases in their browsers. If the virtual services needed to be updated or additional logic added, the virtualization COE would handle that with their professional desktops. The virtual services and test cases would then be combined inside the Continuous Testing Platform, and when those environments needed to be available, the build system would call the Continuous Testing Platform and deploy them either into the cloud or onto dedicated servers. The automated test cases would then kick off, the results would be sent to the aggregated dashboard, and the dynamic environment would be destroyed.
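
A rough sketch of that stage sequence is shown below; the helper functions are placeholders for whatever calls the build system actually makes to the Continuous Testing Platform, not a real API:

```python
# Rough sketch of the pipeline stages described above. The helper functions are
# placeholders for whatever calls the build system makes to the Continuous
# Testing Platform; they are not a real API.
def deploy_environment(template: str) -> str:
    """Placeholder: instantiate a dynamic environment from a template."""
    print(f"deploying environment from template {template}")
    return "env-123"  # hypothetical environment id

def run_test_suites(env_id: str) -> bool:
    """Placeholder: run the automated test cases against the environment."""
    print(f"running test suites against {env_id}")
    return True

def publish_results(env_id: str) -> None:
    """Placeholder: send results to the aggregated dashboard."""
    print(f"publishing results for {env_id}")

def destroy_environment(env_id: str) -> None:
    """Placeholder: tear the dynamic environment back down."""
    print(f"destroying {env_id}")

if __name__ == "__main__":
    env = deploy_environment("claims-regression")
    try:
        run_test_suites(env)
        publish_results(env)
    finally:
        destroy_environment(env)  # dynamic environments are always cleaned up
```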

Getting to a Fully-Realized Virtualization Deployment

True continuous testing enabled by service virtualization isn’t something that happens overnight. As you can tell from the story, a series of compelling events led the organization down the path of creating a centralized virtualization team and integrating it into the continuous delivery pipeline.

This story is real, and it’s all possible with service virtualization, but it requires the organization to buy in and start from the ground up, just as Sally did (by the way, she’s now on the executive board). This is the best way to bring service virtualization into your organization -- step by step, applied where it’s most valuable. Everybody’s journey will be different, but the end result should be the same: lowering the total cost of testing and gaining the power to truly control your test automation process.

Get Parasoft Virtualize Community Edition
