Welcome to the first session of Testing 1-2-3! This series features conversations with software testing industry leaders on a broad spectrum of software testing topics—DevOps, Agile, IoT testing, performance testing, unit testing, test data, service virtualization, the new role of software testing, the business impact of quality, and more.
In this inaugural edition of Testing 1-2-3, Wayne Ariola chats with Theresa Lanowitz, Founder and Industry Technology Analyst at voke, inc. Listen in on their conversation about testing topics such as:
- How Agile, DevOps, and microservices are bringing an increased emphasis on quality
- Soon-to-be-released research on release automation
- The ideal role for a software tester in the modern SDLC
- The (sad) state of automation today
- Why service virtualization and other lifecycle virtualization technologies are essential for initiatives such as Agile, DevOps, and anything positioned as “continuous”
Wayne: Based on conversations with our customers, it seems that the SDLC is facing the most disruption ever—from compliance, to Agile and DevOps, to microservices, and on and on. It’s all predicated on the idea of more and more, faster and faster. Are you seeing that as well?
Theresa: There's definitely a tremendous amount of disruption, and it seems that in addition to worrying about "more and more, faster and faster," people are increasingly concerned about quality. At voke, we've been surveying a broad field of enterprise IT and software professionals every year since 2006. In 2016, for the first time in 10 years, our research found that people are more concerned about quality than speeding time to market or controlling costs.
Wayne: Interesting. Where can people access this research?
Why "Are We Done Testing?" is the Wrong Question for Continuous Testing
Wayne: We've found that most people are still testing from the bottom up, from the perspective of a user story. They're asking "Are we done testing?" In our Continuous Testing approach, we've proposed that this is the wrong question. Instead, we should be asking "Does the release candidate have an acceptable level of risk?" Yet, when we raised this issue with IT leaders across the industry, few were able to answer this question. They weren't able to measure risk, and there was a tremendous gap between testing activities and really understanding the business risk of the application. It seems like you're saying that with release automation and this need for quality, you need to be able to get to that perspective much, much faster, and automate it.
Theresa: Yes, you need to be able to get to that business answer much more quickly. Testers have long been dismissed as "just the people who identify bugs." They need to be the people responsible for exposing business risk. They need to be able to say that as a result of this risk, you're not going to be able to board people on your airplane faster, or you're going to lose healthcare subscribers. It's not "Is the testing done?" It's "Are we doing the right things to test? Are we testing the right things? Have we built in non-functional requirements—things like performance, security, and all the other –ities?"
Test Automation, Non-Functional Requirements, and Security Testing
Wayne: Do you see people doing much in terms of measuring non-functional requirements today?
Theresa: In our most recent Release Management research, we found that teams who are releasing faster are not doing much automated testing. Instead, they're still doing a lot of manual testing. Even with something as basic as functional testing—which has been around for a long, long time—only 53% of organizations report that they're automating functional testing. And only 22% claimed that they're performing automated security testing.
Wayne: Is this penetration testing from the outside in, or secure coding practices from the bottom up?
Theresa: All of the above. Only 22% are doing any type of automated security testing. As an industry, it seems we're ignoring performance and ignoring security because we want to release faster. These are things that you have to do upfront from an architectural perspective. You can't build in performance later—you no longer own that infrastructure, you can't just throw hardware at it. You can't build in security later because by the time you identify that security vulnerability, the architecture is already set.
The Tester's Role as "Protector of the Brand"
Wayne: One of the things I appreciate most about voke is that you are the champions of the business when it comes to the tester. From what I've read of your research, you really talk about protecting the brand and the tester being the advocate for that brand. I believe this 100%. But do you find that there is a massive gap between today's testers and voke's vision of the tester as the protector of the brand?
Theresa: I think there is. As an analyst firm, we're fierce advocates for testers and we believe that testers are very important in the SDLC. Testers bring a completely different viewpoint. They have different personalities than developers and operations professionals—and that's great. You need all of those perspectives to deliver high quality software that will benefit the business. But I think there's really a gap between where the tester is in most organizations and their ability to go and really help the business protect that brand. That's one of the things the testers need to take on themselves.
We believe that testers need to become one of three things. They can be technology experts who are extremely proficient with the newer technologies out there. They can be customer advocates, really understanding the business of the business that they're supporting. Or, they can be change agents: people who may not have a testing background, but who understand how to advocate for testing across the SDLC, up higher in the organization to the CIO level, the CTO level, even the CEO level.
Wayne: So understanding more from a process perspective as well?
Theresa: Right. It's interesting, I talk to some people who say "Our QA organization is relatively new, but I have the ear of the CEO because the CEO really understands the importance of software quality." I always point to July 8, 2015: the day of reckoning for the software tester. We had three major outages in the US: United Airlines, the New York Stock Exchange, and the Wall Street Journal. I think that is the day the pendulum started swinging back to the importance of software quality.
Extreme Automation, Service Virtualization, and Lifecycle Virtualization
Wayne: voke has a vast library of valuable research. One of the most interesting pieces is about the evolution of extreme automation and lifecycle virtualization as a key enabler for this extreme automation. Can you briefly talk about extreme automation and how lifecycle virtualization is related to that?
Theresa: Extreme automation is exactly what it sounds like: automating everything you possibly can, eliminating all those human interactions, in order to ensure that the defects you found in pre-production don't make it into production. It's not just about automating the testing process, it's about automating everything. One of the things we learned is that developers do not like to do automated unit testing. One of our respondents' comments on this was "Some things never change."
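As a brief aside on the automated unit testing Theresa mentions: these are small, fast checks that run on every build with no human interaction. A minimal sketch in Python's built-in unittest framework (the `apply_discount` function and its rules are hypothetical, purely for illustration):

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical business rule: apply a percentage discount to a price."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    def test_typical_discount(self):
        # 20% off 100.00 should be 80.00
        self.assertEqual(apply_discount(100.0, 20), 80.0)

    def test_zero_discount(self):
        # A 0% discount leaves the price unchanged
        self.assertEqual(apply_discount(50.0, 0), 50.0)

    def test_invalid_percent_rejected(self):
        # Out-of-range discounts should fail fast
        with self.assertRaises(ValueError):
            apply_discount(10.0, 150)

if __name__ == "__main__":
    unittest.main()
```

Because suites like this run in seconds and report pass/fail automatically, they are a natural first step toward the "extreme automation" voke describes.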
Shifting focus to lifecycle virtualization: this means bringing virtualization technology from the production area (the data center) to the pre-production portion of the SDLC. If testers and developers are participating in any type of movement that compels them to go faster and/or use anything positioned as "continuous," you have to use these lifecycle virtualization technologies—things like virtual and cloud-based test labs, service virtualization, test data virtualization, network virtualization, and defect virtualization. You have to be using these technologies, or you won't be able to release faster with quality. You're not going to be able to meet business demands, and you'll end up with a lot of problems.
Wayne: I always like the juxtaposition between extreme automation and the enabling technology being lifecycle virtualization. From a Parasoft perspective, we find there are two main value propositions associated with service virtualization. One is access: being able to access a dependency in a test scenario or pre-production scenario. The other (often overlooked) benefit is simulation: moving towards the idea that testing needs to aim for failure, understand where the break point is, then ask the business if this is okay. You don't just confirm that the user story seems to be working, check it off, and move on. Rather, you try to understand where the break points are (because that can directly impact revenue and brand) and then have the business decide whether or not that's acceptable at this point in time.
Theresa: Yes, access to dependencies that are unavailable or incomplete is important. But so is the simulation of dependencies you may not own, dependencies with fee-based access, or systems like mainframes that are rarely available for testing or only available at awful times. And then consider how applications have gone beyond the four walls of the organization: everything is global, everything is mobile. If you no longer own the infrastructure, you need to test the network and the application together. We now need to simulate the network and test for packet loss, jitter, and latency at all layers of the transaction.
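The access and simulation ideas Wayne and Theresa describe can be sketched in a few lines. This is a minimal illustration in plain Python, not a real service-virtualization product API: the `VirtualService` class, its parameters, and the responses are all hypothetical, but they show how a stand-in dependency can return canned responses while injecting latency and failures so tests can probe break points:

```python
import random
import time

class VirtualService:
    """Hypothetical stand-in for an unavailable, incomplete, or fee-based dependency."""

    def __init__(self, canned_response, latency_ms=0, failure_rate=0.0, seed=None):
        self.canned_response = canned_response   # what the "service" returns
        self.latency_ms = latency_ms             # simulated network latency
        self.failure_rate = failure_rate         # probability a call fails
        self.rng = random.Random(seed)           # seeded for repeatable tests

    def call(self, request):
        time.sleep(self.latency_ms / 1000)       # inject simulated latency
        if self.rng.random() < self.failure_rate:
            raise TimeoutError("simulated dependency failure")
        return {"request": request, **self.canned_response}

# Access: the dependency is always "available" to the test suite.
svc = VirtualService({"status": "ok"})
assert svc.call("get_quote")["status"] == "ok"

# Simulation: degrade the dependency (latency plus intermittent failures)
# so a test suite can find the break point before production does.
flaky = VirtualService({"status": "ok"}, latency_ms=100, failure_rate=0.5, seed=42)
```

The key design point is that the simulated conditions (latency, failure rate) are configurable per test, so the business can see exactly at what level of degradation the application stops meeting its obligations.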
Read—and Contribute to—voke's Research
Wayne: Great, thanks Theresa. Where can people read and participate in voke's research?
Theresa: You can access our research on lifecycle virtualization, release automation, and more at http://www.vokeinc.com/. We're currently running a survey on security operations automation, and would really like to encourage everyone in the industry to participate. If you complete the survey, we'll be sure to give you a complimentary copy of the resulting report.
Download voke Research Report on Service Virtualization and Lifecycle Virtualization
As Theresa and Wayne noted in their conversation, many leading organizations have already learned that service virtualization is essential for enabling the extreme automation required for initiatives such as Agile, DevOps, and anything positioned as “continuous.” voke's latest research on service virtualization and extreme automation confirms this.
voke's just-published "Market Mover Array Report: Lifecycle Virtualization" highlights the need to bring service virtualization and other lifecycle virtualization technologies to the pre-production portion of the software lifecycle, providing developers and testers with production-like environments on demand. The report highlights how such solutions:
- Remove the constraint of wait times for developers and testers in the pre-production portion of the software lifecycle
- Provide developers and testers with labs, services, data, and networks as close to production as possible
- Help software teams deliver higher quality software in less time
The voke Market Mover Array also identifies notable vendors in this market. voke analyzed 10 vendors across 14 categories, charted the results, and placed each vendor into one of four bands (shown below). According to voke, a "transformational" vendor is changing the tone and direction of the market, and is typically challenging the pivotal vendors to innovate either in terms of technology or marketing acumen.