Q&A with Max Saperstone from Coveros: Part One - Getting Started with Test Automation
In this set of three blog posts we get an inside look into how to build an effective test strategy and how to use test automation as part of that strategy. I interview Max Saperstone, Director of Test Automation at Coveros. Max is an experienced test engineer with a focus on test automation within the CI/CD process. He helps clients get their testing and automation efforts off the ground. Max is also an experienced and certified agile developer and is sought after for speaking engagements at testing and Agile development conferences. We're lucky to have Max's experience at this juncture to discuss topics close to our hearts here at Parasoft.
This is part one of a three-part series which discusses Max's thoughts on getting started with test automation. One thing is clear: Max likes to help his clients take a step back and consider the big picture before diving into test automation. He helps his clients answer important questions like "why am I automating testing?" so they can set clear goals for their efforts. We also chat briefly about API testing, and Max seems to be on the same page as us - API testing is critically important. Let's look at what Max has to say:
Getting Started with Test Automation
Mark Lambert: Hi Max, it’s great to be with you again. I know your role at Coveros is all about helping and assisting clients with effective test automation strategies. Can you tell us what you think is an effective test automation strategy? Where should a team start?
Max Saperstone: Hi Mark, that's a great question! It's interesting because, as you know, my specialty is automation, and despite that, when a team is starting out, my advice is usually, “Don't just dive into automation.”
Really the best place to start is to look at QA as a whole. So the first thing to understand is what “quality as a whole” means for your project, and how you’re going to verify it. Only after you know the answer to those questions, can you really dive into deciding what to automate, and what you don't want to automate.
For me, this is always one of the biggest challenges. I see a lot of teams diving in to start writing Selenium scripts or QTP scripts, and they have a whole bunch of “stuff.” But ultimately, how is that stuff being used? How do you deal with test results? How do you decide when to ship the product?
My recommendation is usually to take a step back and figure out what you need to be verifying – and how to do that. There's this really cool method out there called MoSCoW.
Mark Lambert: What's the MoSCoW method?
Max Saperstone: The MoSCoW method is really just an acronym: what Must you automate, what Should you automate, what Could you automate, and what Won't you automate. The idea is to actually put some time and thought into your automation strategy, to figure out where the biggest bang for your buck is.
For me, that always goes back to ROI. What are the tests that are running consistently? What are the really high-value areas of your application where you just can't allow anything to go wrong? What will impact the users the most? You start with the tests that will always fall under your “musts,” then move on to your “shoulds.”
Then, you get into the other areas: your “coulds” and “won'ts.” For example, testing usability is a really difficult thing to automate: how do you tell a machine what feels right versus what doesn't?
Another area that’s difficult to automate is third-party integrations. For example, say you have a FitBit integration: you might be able to automate it, eventually. But that's going to take weeks or months to do. Is it really worth that amount of time?
When I'm writing a test strategy, I spend time figuring out my test plan at this high level. What are the areas of the application that I really care about? What are the areas that are going to be easy to automate? I usually start with that.
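The MoSCoW buckets Max describes can be captured in a simple prioritization sketch. The test names and bucket assignments below are invented for illustration; only the four categories come from the method itself.

```python
# A toy MoSCoW-style automation backlog. Bucket contents are hypothetical
# examples, echoing the areas discussed above (usability, FitBit, etc.).
backlog = {
    "must":   ["login flow", "checkout payment"],        # high value, runs constantly
    "should": ["search results", "profile update"],
    "could":  ["email notification content"],
    "wont":   ["usability feel", "FitBit integration"],  # hard or low-ROI to automate
}

# Automate in priority order, skipping the "won't" bucket entirely.
plan = [test for bucket in ("must", "should", "could") for test in backlog[bucket]]

assert plan[0] == "login flow"          # "musts" come first
assert "usability feel" not in plan     # "won'ts" are consciously excluded
```

The point is not the code itself but the discipline: every candidate test gets an explicit bucket, so the team can defend why something is or isn't automated.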
Mark Lambert: So how do you organize this?
Max Saperstone: Of course, this is really still talking at a functional level. At some point you need to take a step back and consider your strategy in terms of the testing pyramid and the different roles involved in quality, because it’s not just testers. Developers should be writing unit and integration tests at the lower part of the testing pyramid; at a certain point, the testers take over, like the tip of an iceberg. Underneath the surface, you want to make sure that all the code is covered by unit tests - make sure that the code does what the developer says it should do. The next layer up is the integration tests, which make sure that the different parts of the application actually work the way the other parts think they're going to. Finally, you have the testers: they sit at the top of the iceberg, and they make sure that the application as a whole does whatever the end client actually wants.
If you don't have those two underlying layers done properly - the low levels where automation is fast, easy, and pays off - errors at the top level are hard to debug and fix. You have no idea: is it a functional issue? Is it a component issue? Or is it a code issue? But if you know that all the unit and integration tests have passed, it's much quicker to diagnose a failure as a probable functional issue. With a poor automation strategy, pinpointing the root cause is a struggle and means a lot of debugging; a good automation strategy makes problem triaging a lot simpler.
Mark Lambert: Here at Parasoft, we’ve been talking about API testing for close to two decades now, yet, the adoption of API testing is still new to many people. This might be due to the hidden nature of APIs, it's in the layer between the UI and the code and many folks don't see it. What do you think is the way forward? How do organizations really leverage API testing in the most effective way possible?
Max Saperstone: That's a great question. I love API testing because, as a tester who doesn't necessarily have access to all of the code, it's a great way to get a lot of testing done from a black-box perspective. Just because I don't know what the code does doesn't mean that I don't have good insight into the API.
Hopefully, I can see what inputs an API is expecting and what outputs it's generating, whether there's a WSDL or Swagger document associated with it. API testing allows me to test in a very rapid fashion because it is data driven at that point. I have an endpoint, I throw as many different combinations of inputs at it as I think are valid, and check all of the different outputs. I don't necessarily need to know that much about the code, and there are a whole bunch of really good frameworks out there that handle that for me.
So, from a tester perspective, that's absolutely why I love API testing. Plus, they're fast and usually not brittle. If you have an organization that sets up contracts and has well-defined endpoints, which are not going to be changed often, there's very little maintenance that needs to be done on API tests and they give you a lot of information about the system.
Mark Lambert: I think what you just touched upon, about API tests being fast and stable, is really why they've become very valuable. Also, they are a great communication mechanism between the testers and the developers within an organization.
Max Saperstone: Absolutely. Typically, when I talk about integration testing to organizations, API testing is a large portion of it. They're fast, they give you a lot of really valuable information and are a lot more stable than UI tests.
In the next post, we talk to Max about building a test strategy and his take on the testing pyramid.
VP of Products at Parasoft, Mark is responsible for ensuring that Parasoft solutions deliver real value to the organizations adopting them. Mark has been with Parasoft since 2004, working with a broad cross-section of Global 2000 customers, from specific technology implementations to broader SDLC process improvement initiatives.