
Q&A with Max Saperstone from Coveros: Part Two - Building a Test Strategy


In part two of my conversation (part one is here) with Max Saperstone, Director of Test Automation at Coveros, we get an inside look into how to build an effective test strategy, along with his thoughts on commercial test automation tools.

Max sees the same situation at his clients as we do here at Parasoft: many organizations face challenges with testing and with how they allocate resources to each phase of it. Max also shares his take on commercial tools and weighs in on codeless testing.

Test Strategy and the Testing Pyramid

Mark Lambert: Having an API testing strategy in place means you can find problems very early in the process and isolating them very early is obviously what the industry is moving towards. But when I talk to people, they describe their test strategy as more like an ice cream cone or an inverted pyramid. Is that what you see as well? And if so, what are the challenges that you see that software teams face with that ice cream cone type of situation?

Max Saperstone: I've seen all sorts of variations on the inverted pyramid. Some are almost an hourglass, where there's a lot of activity at the bottom and a lot on the top, and really not a lot in the middle.

Really, the problem comes down to cost and speed, and honestly, speed feeds into cost as well. Unit tests are cheap to run and should complete in a matter of seconds. These unit tests should make up the bottom of your pyramid. They should be relatively cheap to create and relatively cheap to maintain. If they aren't those things, fast and maintainable, then either they're not actually unit tests or you're just doing a bad job with them.

A lot of what I do is work with the developers, testers, and requirements analysts to understand what really needs to be tested in each area and to get everyone on the same page. For example, when you're testing against a database, it's not a unit test anymore; that's really component testing. I see teams with a bunch of unit tests but lots of integration tests mixed in as well. Poor isolation and complex unit tests slow down the unit testing process.

The reason you want unit testing to be fast is to get feedback rapidly, as soon as the code compiles. You want to make sure that the code at least does what the developer wants it to do, because if it doesn't, there's no reason to move any further.

Functional tests, usually in the form of UI tests, are inherently brittle and more difficult to maintain. So why would a team want more of them? I go into different organizations and they say, "Well, automation isn't working for us." And I say, "All right, let's look at your tests." What I find is a combination of three things: they have too many UI tests, the tests are too difficult to maintain, and they're poorly written. I tell them they can trim down the number of UI tests because of all the other layers of tests (unit and component) supporting them, and once they improve those, I can then help them write better functional tests than what they have today.

Mark Lambert: You mentioned something I want to go back to, because I've been seeing this problem since I joined Parasoft 15 years ago. Somebody says, "Yes, I'm doing unit testing," but when you get in there, they're actually not; they're actually higher up on the pyramid. As you said, just using a unit testing framework doesn't necessarily mean they're writing unit tests. Why do you think that is? What is the challenge in creating a correct or real unit test?

Max Saperstone: This is definitely not an isolated incident that I've seen in only one or two places. Honestly, it's hard to write a good unit test, as opposed to an integration or system test. If you want to write good unit tests, you have to mock everything the unit depends on.

Writing mocks is not easy. It requires more work and more libraries. Developers genuinely struggle with this; they may not know how to do it or may not have the time. For those reasons, they might take a shortcut and connect directly to a database, for example, as what is intended to be a short-term solution. A week later, everyone else is using that same code and it's too late to change it. When I point out these issues, the response is, "Well, we're writing tests. That's better than nothing." And it is, but these teams quickly get into a spot where their tests aren't as effective as they could be.
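To make the mocking point concrete, here is a minimal sketch in Python using the standard library's unittest.mock. The GreetingService class and its repository are hypothetical names invented for illustration; the idea is that the dependency is injected, so the test can replace the database-backed repository with a mock and stay a true unit test, fast and isolated.

```python
import unittest
from unittest.mock import Mock

# Hypothetical service under test. The "shortcut" version Max describes
# would connect to a real database inside greet(); here the repository is
# injected, so tests can swap in a mock and never touch a database.
class GreetingService:
    def __init__(self, user_repository):
        self.user_repository = user_repository

    def greet(self, user_id):
        user = self.user_repository.find_by_id(user_id)
        if user is None:
            return "Hello, guest!"
        return f"Hello, {user['name']}!"

class GreetingServiceTest(unittest.TestCase):
    def test_greets_known_user(self):
        # Mock the database-backed repository: no connection, runs in milliseconds.
        repo = Mock()
        repo.find_by_id.return_value = {"name": "Ada"}
        service = GreetingService(repo)
        self.assertEqual(service.greet(42), "Hello, Ada!")
        # Verify the unit interacted with its dependency as expected.
        repo.find_by_id.assert_called_once_with(42)

    def test_greets_unknown_user(self):
        repo = Mock()
        repo.find_by_id.return_value = None
        self.assertEqual(GreetingService(repo).greet(99), "Hello, guest!")
```

Because nothing here opens a real connection, the whole suite runs in milliseconds, which is exactly the speed property that keeps these at the bottom of the pyramid.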

I don't actually think it's ignorance for the most part. In the majority of cases, it's a time crunch. When it comes down to it, and I see this at every single organization, management cares much more about getting the product out the door than about doing things right initially. So these teams build up technical debt, and in the short term, there's no problem with that. The real problem is that when they don't go back and reduce the debt, then when they refactor code or change something, these complex, non-isolated "unit tests" become more and more brittle, and more things start to break. Even though they had good intentions initially, this build-up of poor tests and technical debt causes the problems I see.

Test Automation Tools

Mark Lambert: As you're helping clients deploy their test strategy and you've determined where automation is to be applied, they have to make some technology decisions. For example, they need to decide whether to use open source or commercial solutions, or to build their own framework. Where do you recommend they start the decision-making process?

Max Saperstone: That's a great question, Mark, and it's actually one of the questions I get asked the most: What tool should I use? What framework should I use? The client is usually about to start doing automation or getting into API testing. Unfortunately, the answer I always give is, "Oh, I don't know," because I don't like the tool-first approach.

I really like my clients to take a step back and think about what their requirements are. What are they looking for from a test results analysis and traceability perspective? What are they trying to accomplish? Which levels of the test pyramid do they want the tools to support? Is it just the UI, or APIs and unit tests as well? What's the overall testing strategy?

Once I get all of those questions answered, then I'll investigate some of the frameworks and tools out there in the market and make decisions based on the client's requirements. I always suggest going with an open source tool first. I'm a big fan of doing things as agile as possible, and a big part of that is experimentation. Sometimes, as you know, failure is a part of experimentation. You might find the right tool, and it may work fine initially, or it might not. If you have to pay for these tools, experimentation can get expensive and you might not get what you want.

After experimenting with open source tools, I recommend my clients look at commercial tools that offer free trial periods so they can check things out. At that point, I recommend they put more analysis into commercial solutions.

When considering open source tools, one of the big things to weigh is support. There are a lot of different open source frameworks and tools out there, and just because they're free doesn't mean they're garbage, but it doesn't mean they're good either. For example, there was a stigma surrounding Selenium, and people questioned whether a free tool could be any good. Now there's a huge community contributing to it, and it's the number one automation tool for UI testing, despite the fact that you can just go ahead and download it for free.

Being aware of what support is out there is important for open source tools. It's also one of the key things that differentiates good open source projects: community support. You ask questions, and people get you answers relatively quickly. Selenium has a host of people dedicated to answering questions. Community support like that, whether for paid or free tools, is really important.

A danger with open source tools is getting stuck with a bug or usage problem after the project has been abandoned, although this can happen with commercial tools as well. However, if I haven't made my tests portable, that's a problem I've created myself.
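One common way to keep tests portable is to put a thin adapter between the test suite and the automation tool, so that swapping tools means rewriting one adapter rather than every test. The sketch below illustrates the pattern under assumed names (BrowserDriver, FakeDriver, and the locators are all hypothetical); a real adapter might wrap Selenium or any other driver behind the same interface.

```python
from abc import ABC, abstractmethod

# Hypothetical adapter interface: tests talk to this, never to a specific
# automation tool. If the tool's project is abandoned, only a new adapter
# needs writing; the test suite itself is untouched.
class BrowserDriver(ABC):
    @abstractmethod
    def click(self, locator: str) -> None: ...

    @abstractmethod
    def read_text(self, locator: str) -> str: ...

# One concrete adapter per tool. This in-memory stand-in just demonstrates
# the pattern; a production adapter would delegate to the real tool's API.
class FakeDriver(BrowserDriver):
    def __init__(self, page: dict):
        self.page = page
        self.clicks = []

    def click(self, locator: str) -> None:
        self.clicks.append(locator)

    def read_text(self, locator: str) -> str:
        return self.page.get(locator, "")

# The test is written purely against the interface, so it is tool-agnostic.
def welcome_banner_shown(driver: BrowserDriver) -> bool:
    driver.click("#login")
    return driver.read_text("#banner") == "Welcome"
```

The trade-off is a small amount of up-front indirection in exchange for not being locked to any one tool, open source or commercial.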

All of these tools and approaches have trade-offs. With the appropriate level of support and backing, to me, that always makes the open source community a little more valuable.

Commercial Tools

Mark Lambert: If there are concerns about vendor lock-in, are you looking for technology that can easily port tests between different frameworks? In other words, is a critical aspect of evaluating commercial solutions the ability to move between tools and to make sure you're not locked into their platform?

Max Saperstone: Yes, it really depends. For a lot of vendors, interchangeability support is a good thing, and for some tools it works really well. However, when you're doing your initial experimentation, you don't necessarily want to start with commercial tools. It's painful if I have to rewrite six different test suites just to try out six different pieces of software.

Mark Lambert: So, open source first until you hit a wall, then look for something that can help you go beyond that wall maybe?

Max Saperstone: Probably. There are a lot of different solutions out there that have a free version or a "freemium" model, where you pay for additional features on top, and there are fully paid versions above that as well. I really like a lot of these tools, because once I look into them, I realize I can do all these cool things. For me, it's often worth it: if the tool has all of the right features and I can do a lot of my development beforehand, I only need one or two licenses to run the tools during builds. That's not too bad.

Again, it also depends on the team's technical level. The nice thing about paid products is that they typically have support ready for you. If the team is not as technically proficient with test tools and needs more support, it's available. With Parasoft's products, support is one of the things customers are paying for, which makes a lot of sense strategically.

Codeless Testing

Mark Lambert: Addressing your comment on non-technical users, for example, someone who's not comfortable coding for Selenium, would codeless test tools be a solution? What does codeless testing mean to you? What are your thoughts?

Max Saperstone: I've been seeing codeless test automation tools around for about five years at this point. When I was first made aware of them, I thought they were really slick. I like that they can make a lot of things relatively simple, and some of them go beyond just the codeless aspect, letting you dive in when needed and start writing things in Java or Python, for example. Having codeless testing with the ability to add code when necessary is important when you have less technical teams.

In my mind, every type of tool has a cost. You have to pay more for people with a coding background and less for less technical testers, but that might be offset by technology costs and paying a little more for vendor support. So I think codeless testing really offers a great solution there.

The big challenge I see is that there are a lot of different players in this space, and I haven't really seen one vendor pull ahead of another. Vendors still seem to have only a small foothold. While this means there is definitely a growth market for test automation tools, these tools might not solve the main problems we have in the test automation space.

I mentioned earlier that there are two things I typically see as problems. Teams are spending too much time maintaining their tests because the tests are brittle and they break, and people are writing bad tests. Those have been the major pain points of test automation in general, and I don't think codeless automation actually addresses them. It does make it a lot easier to write tests, but that isn't the most common problem. These tools potentially just make it easier to write the same bad test.

A lot of these issues and the lack of success with automation go back to the lack of a testing strategy. Tools and automation haven't really exploded yet; they're definitely solving problems, but they're not solving the biggest problem in the space.


In the next post, we talk to Max about his experiences with failures and successes in testing and test automation.
