At a recent deployment, I was working with a customer to build an API testing strategy when out of the blue I was asked, “What is API testing?” I realized then that API testing is surprisingly challenging to describe, and even when you do describe it, it tends to sound boring and complicated.
Well, I’m here to tell you that API testing is NOT boring or complicated. It is actually very fun and powerful, and understanding it in a meaningful way unlocks the power to create a truly effective testing strategy.
What is an API and why do you use them?
In service development, an Application Programming Interface (API) is a way for applications to communicate with each other using a common language, often defined by a contract. Examples of these would be a Swagger (OpenAPI) document for RESTful services or a WSDL for SOAP services. Even databases have an interface language, namely SQL.
Much like how a UI allows a human to interact with an application, APIs allow machines to communicate with each other efficiently.
APIs are great because they represent building blocks that developers can use to easily assemble all sorts of interactions without having to rewrite an interface every time they need machines to communicate. Additionally, since APIs have contracts, applications that want to communicate with each other can be built in completely different ways, as long as they communicate in accordance with the API contract. This allows different developers from different organizations in different parts of the world to create highly-distributed applications while re-using the same APIs.
When a user interacts with the front-end of an application (i.e. a mobile app), that front-end makes API calls to back-end systems, simplifying the development process in two main ways:
- The developer doesn’t have to worry about making a customized application for every mobile device or browser.
- Different backend systems can be updated or modified without having to redeploy the entire application every time.
As a result, developers can save time by focusing an individual service on accomplishing a discrete task, instead of spending time writing the logic into their application.
A good example of standard API use
Amazon shopping services' documented APIs enable developers to interface with Amazon shopping as they create their applications. The developer can use the Amazon APIs at the appropriate times in their user experience, to create a seamless customer journey.
For example, the customer journey, with each step backed by a corresponding API call, might look something like this:
1. Search for a good videogame
2. Amazon suggests Minecraft
3. Add Minecraft to my cart
The user interacts with the user interface while the application interacts with back-end Amazon APIs, as defined by the developer. Everything works very well as long as the underlying APIs behave as expected.
…but that is a very big if.
So we arrive at the importance of testing these APIs.
Why do you perform API testing?
Unlike the user, who interacts with the application only at the UI level, the developer/tester must ensure the reliability of any underlying APIs. Without testing the APIs themselves, developers and testers would be stuck, just like a user, testing the application at the UI level, waiting until the entire application stack is built before being able to start testing.
Fortunately, you can instead perform API testing by testing the application at the API level, designing test cases that interact directly with the underlying APIs, and gaining numerous advantages, including the ability to test the business logic at a layer that is easy to automate in a stable manner. Unlike UI testing, which is limited to validating a specific user experience, API testing gives you the power to bulletproof your application against the unknown.
How do you approach API testing?
The best way to approach API testing is to build a solid testing practice from the bottom up. To this end, a great way to design a test strategy is to follow Martin Fowler’s testing pyramid. The pyramid suggests building a broad foundation of unit tests, a wide array of API tests (e.g. contract, scenario, performance, etc.) above them, and a small set of UI tests at the top. The API tests allow you to test application logic at a level that unit tests cannot.
These testing strategies are complementary. Testing earlier, at the lower levels of the application, helps you “fail fast and fail early,” catching defects at their source rather than later in the SDLC. Unit testing is important, but for now we are focused on API testing: What kinds of tests can be done? Why are they important? And how do you do them?
In the next few sections, I’ll walk you through the different types of API tests, including where and why you might use them.
Contract tests
An API represents a contract between two or more applications. The contract describes how to interact with the interface, what services are available, and how to invoke them. This contract is important because it serves as the basis for the communication. If there’s something wrong with the contract, nothing else really matters.
The first and most basic type of API test is the contract test, which tests the service contract itself (Swagger, PACT, WSDL, or RAML). This type of test validates that the contract is written correctly and can be consumed by a client. It works as a series of tests that pull in the contract and validate that:
- the service contract is written according to specifications
- a message request and response are semantically correct (schema validation)
- the endpoint is valid (HTTP, MQ/JMS Topic/Queue, etc)
- the service contract hasn’t changed
I think of these as your first “smoke tests.” Should these tests fail, there’s really no reason to continue testing this particular service. Should these tests pass, you can move on to start testing the actual functionality of the API.
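As a rough illustration, a smoke-level contract test might pull in the service description and assert a few structural basics before anything else runs. The minimal OpenAPI-style fragment and the checks below are hypothetical; a real suite would consume the full Swagger/OpenAPI document and use a proper schema validator.

```python
# Hypothetical minimal contract fragment (OpenAPI-style, pared down).
contract = {
    "openapi": "3.0.0",
    "paths": {
        "/search": {"get": {"responses": {"200": {"description": "results"}}}},
        "/cart": {"post": {"responses": {"201": {"description": "created"}}}},
    },
}

def validate_contract(doc):
    """Check that the contract is structurally usable by a client."""
    errors = []
    if "openapi" not in doc:
        errors.append("missing spec version")
    if not doc.get("paths"):
        errors.append("no endpoints defined")
    for path, methods in doc.get("paths", {}).items():
        if not path.startswith("/"):
            errors.append("invalid endpoint: " + path)
        for method, operation in methods.items():
            if "responses" not in operation:
                errors.append(method.upper() + " " + path + ": no responses declared")
    return errors

# Smoke test: if this fails, there is no point testing the service further.
assert validate_contract(contract) == []
```

If this smoke check passes, the rest of the suite can safely build clients from the same contract.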
Component tests
Component tests are like unit tests for the API – you take the individual methods available in the API and test each of them in isolation. You create these tests by making a test step for each method or resource that is available in the service contract.
The easiest way to create component tests is to consume the service contract and let it create the clients. You can then data-drive each individual test case with positive and negative data to validate that the responses that come back have the following characteristics:
- The request payload is well-formed (schema validation)
- The response payload is well-formed (schema validation)
- The response status is as expected (200 OK, SQL result set returned, or even an error if that’s what you’re going for)
- The response error payloads contain the correct error messages
- The response matches the expected baseline. This can take two forms:
- Regression/diff - the response payload looks exactly the same from call to call (a top-down approach where you essentially take a snapshot of the response and verify it every time). This can also be a great catalyst to identify API change (more about that later).
- Assertion - the individual elements in the response match your expectations (this is a more surgical, bottom-up approach targeted at a specific value in the response).
- The service responds within an expected timeframe
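Put together, a data-driven component test for a single method might look like the sketch below. The `get_account` service method is a hypothetical local stub standing in for a real API endpoint; the structure (one test step, driven by positive and negative data rows, asserting status, payload shape, error message, and response time) is the point.

```python
import time

# Hypothetical service method under test; a real component test would call
# the actual endpoint generated from the service contract.
def get_account(account_id):
    if not isinstance(account_id, int) or account_id <= 0:
        return {"status": 400, "error": "invalid account id"}
    return {"status": 200, "body": {"id": account_id, "balance": 100.0}}

# Data-drive the same test step with positive and negative rows.
cases = [
    (42, 200, None),                   # positive: well-formed request
    (-1, 400, "invalid account id"),   # negative: error payload expected
    ("abc", 400, "invalid account id"),
]

for account_id, expected_status, expected_error in cases:
    start = time.perf_counter()
    response = get_account(account_id)
    elapsed = time.perf_counter() - start

    assert response["status"] == expected_status            # status as expected
    if expected_error:
        assert response["error"] == expected_error          # correct error message
    else:
        assert set(response["body"]) == {"id", "balance"}   # payload well-formed
    assert elapsed < 0.5  # responds within an expected timeframe (local stub)
```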
These individual API tests are the most important tests that you can build because they will be leveraged in all of the subsequent testing techniques. Why rebuild test cases when you can simply reference these individual API calls in all of the different types of tests going forward? This not only promotes consistency but also simplifies the process of approaching API testing.
Scenario tests
Scenario testing tends to be what most people picture when they think about API testing. In this technique, you assemble the individual component tests into a sequence, much like the Amazon example described above.
There are two great techniques for obtaining the sequence:
- Review the user story to identify the individual API calls that are being made.
- Exercise the UI and capture the traffic being made to the underlying APIs.
Scenario tests allow you to understand if defects might be introduced by combining different data points together.
I ran into a very interesting example of this while working with a customer. They had employed a series of services to retrieve a customer’s financial profile, available accounts, credit cards, and recent transactions. Each of these API calls worked individually, but when put together in a sequence they started failing. The culprit turned out to be a simple timestamp: the format returned by one API call differed from the format expected by a subsequent request. They hadn’t caught this during unit or smoke testing because those tests only asserted that a timestamp was returned, without specifying its format. It wasn’t until the overall scenario was tested that it became clear that transferring the timestamp from one call to another caused the breakdown.
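That timestamp defect is easy to reproduce in a scenario test that chains the calls and passes the data point along. The two services below are hypothetical stubs; what matters is that the assertion checks the format of the transferred value, not just its presence.

```python
from datetime import datetime, timezone

# Hypothetical stubs for two chained services.
def get_profile():
    # Returns a timestamp in ISO-8601 format.
    return {
        "customer": "c-101",
        "last_login": datetime(2024, 1, 15, 9, 30, tzinfo=timezone.utc).isoformat(),
    }

def get_transactions(customer, since):
    # Expects `since` in ISO-8601; a format mismatch surfaces here.
    datetime.fromisoformat(since)  # raises ValueError on the wrong format
    return {"customer": customer, "transactions": []}

# Scenario test: chain the calls, passing data from one response to the next.
profile = get_profile()
assert "last_login" in profile                 # presence alone is not enough...
datetime.fromisoformat(profile["last_login"])  # ...so also check the FORMAT

result = get_transactions(profile["customer"], since=profile["last_login"])
assert result["customer"] == "c-101"
```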
Another benefit of scenario testing is the ability to validate expected behavior when your APIs are being used in ways that you did not expect. When you release an API, you are providing a series of building blocks to the world. You may have prescribed techniques for combining these blocks together, but customers can have unpredictable desires, and unexpectedly combine APIs together to expose a defect in your application. To safeguard against this, you want to create many scenario tests with different combinations of APIs to bulletproof your application against a critical breakdown.
Since the component tests form the backbone of the scenario tests, an organization usually ends up with a far larger number of scenario tests. They are built when new functionality is introduced, to model the customer’s journey through the new feature. This really reduces the time spent on testing, because you only have to build tests for the new functionality, and you know you have a reliable library of underlying tests to catch anything unexpected.
Performance tests
Performance testing is usually relegated to the end of the testing process, in a performance-specific test environment, because performance testing solutions tend to be expensive, require specialized skill sets, and demand specific hardware and environments. This is a big problem: APIs have service level agreements (SLAs) that must be met before an application can be released, and if you wait until the very last moment to do your performance testing, failure to meet the SLAs can cause huge release delays.
Doing performance testing earlier in the process allows you to discover performance-related issues before you run your full regression cycle. If you followed the testing process up to this point, this is actually going to be pretty easy because you have all of the underlying test cases you need in order to do performance testing. You can simply take your scenario tests, load them up into your performance testing tool, and run them with a higher number of users. If these tests fail, you can trace the failure back to the individual user story and have a better level of understanding for what will be affected. Managers can then use this understanding to make a go or no go decision about releasing the application.
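Reusing a scenario test for an early performance check can be as simple as running it with many concurrent users and asserting the SLA. In the sketch below, the scenario step is a stub and the SLA value is hypothetical, with a thread pool standing in for a dedicated load-testing tool.

```python
import time
from concurrent.futures import ThreadPoolExecutor

SLA_SECONDS = 0.5  # hypothetical service level agreement per call

def scenario_test():
    """Stand-in for an existing scenario test (stubbed service call)."""
    start = time.perf_counter()
    time.sleep(0.01)  # simulated service work
    return time.perf_counter() - start

# Run the same scenario with many concurrent "users" and check the SLA.
with ThreadPoolExecutor(max_workers=20) as pool:
    timings = list(pool.map(lambda _: scenario_test(), range(100)))

violations = [t for t in timings if t > SLA_SECONDS]
assert not violations, "%d of %d calls broke the SLA" % (len(violations), len(timings))
```

Because each timing traces back to a known scenario (and so to a user story), an SLA failure here points directly at the feature that will be affected.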
Security tests
Security testing is important to all stakeholders in your organization. If a security vulnerability is exploited, it can lead to significant reputation loss and financial penalties. Much like a user can accidentally use your APIs in ways you wouldn’t expect, a user can also intentionally try to exploit them: a hacker can get hold of your API, discover vulnerabilities, and take advantage of them.
To safeguard against this type of behavior, you need to build test cases that attempt to perform these types of malicious attacks. You can leverage your existing test cases to do so, because a scenario test can provide the attack vector into the application. You can then re-use this attack vector to launch your penetration attacks. A good example of this is combining different types of parameter fuzzing or SQL injection attacks with your scenario tests. That way, any changes that propagate through the application will be picked up by your security tests. To learn more about API security testing, check out my colleague’s helpful blog post.
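A minimal sketch of this idea: reuse a scenario’s entry point as the attack vector and fuzz its parameter with injection payloads. The `search_products` endpoint and its in-memory database below are hypothetical; the test verifies that injection attempts neither dump nor destroy data.

```python
import sqlite3

# Hypothetical search endpoint backed by a database. Parameterized queries
# are the safeguard this security test is meant to verify.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (name TEXT)")
conn.execute("INSERT INTO products VALUES ('Minecraft')")

def search_products(term):
    # Parameter binding (not string concatenation), so injection payloads
    # are treated as plain data.
    rows = conn.execute("SELECT name FROM products WHERE name = ?", (term,))
    return [r[0] for r in rows]

# Reuse the scenario's entry point as the attack vector, fuzzing the parameter.
attack_payloads = ["' OR '1'='1", "'; DROP TABLE products; --", "Minecraft"]
for payload in attack_payloads:
    results = search_products(payload)
    # Injection attempts must not dump extra data.
    assert results == (["Minecraft"] if payload == "Minecraft" else [])

# The table still exists after the attack attempts.
assert conn.execute("SELECT COUNT(*) FROM products").fetchone()[0] == 1
```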
Omni-channel tests
Because applications interact through multiple interfaces (mobile, web, APIs, databases, and so on), testing any one of these in isolation leaves gaps in test coverage, missing the subtleties of the complex interactions between them.
Omni-channel tests cover the application’s many interfaces by interweaving API and database tests into the validation of mobile and web UI interactions. This means coordinating a test that exercises one interface with tests of another: executing your UI tests, such as web (Selenium) or mobile (Appium), and interlacing them with your API or database tests, exchanging data points from the system through the test execution. With effective omni-channel testing, you can create stable, reusable test cases that are easy to automate.
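Interlacing a UI step with an API check might look like the sketch below. Every name here is hypothetical: the UI function is a stub standing in for a Selenium or Appium driver, and the API function is a stub standing in for a real back-end call; the data point (a confirmation id) is exchanged from one channel to the other.

```python
# Hypothetical stubs: in a real omni-channel test the first step would drive
# Selenium/Appium and the second would hit the real back-end API.
def ui_place_order(product):
    """Stub UI flow: 'clicks' through checkout, returns the confirmation id."""
    return {"confirmation_id": "ord-789", "product": product}

def api_get_order(order_id):
    """Stub API lookup for the order the UI just created."""
    return {"id": order_id, "product": "Minecraft", "status": "confirmed"}

# Omni-channel test: pass the data point from the UI layer into the API check.
confirmation = ui_place_order("Minecraft")
order = api_get_order(confirmation["confirmation_id"])

assert order["id"] == confirmation["confirmation_id"]  # same order on both channels
assert order["product"] == confirmation["product"]     # UI and back end agree
assert order["status"] == "confirmed"
```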
Testing for change
Change is one of the most important indicators of risk to your application. Change can occur in many forms, including:
- Protocol message format change for a service
- Elements added or removed from an API
- Underlying code change affecting the data format returned
- Re-architecture of a service to break it down into multiple parts (extremely prevalent as organizations move to microservices)
As change occurs, you need test cases built to identify the change and provide remediation plans. A solution that analyzes the impact of these changes will let you understand what has changed and target the specific tests that are affected.
Change can then be captured in the form of a template, to update any of the underlying component or scenario tests with new functionality. Since the rest of your tests reference these tests, the impact of change will be reduced.
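The regression/diff idea mentioned earlier can be sketched as a snapshot comparison: store a baseline response, then flag anything added, removed, or altered on the next run. The baseline and current payloads below are hypothetical.

```python
# Hypothetical baseline snapshot captured on a previous run, and the
# response returned by the same call today.
baseline = {"id": 7, "name": "Minecraft", "price": 26.95}
current = {"id": 7, "name": "Minecraft", "price": 26.95, "currency": "USD"}

def diff_response(old, new):
    """Report elements added to, removed from, or changed in an API response."""
    return {
        "added": sorted(set(new) - set(old)),
        "removed": sorted(set(old) - set(new)),
        "changed": sorted(k for k in set(old) & set(new) if old[k] != new[k]),
    }

change = diff_response(baseline, current)
assert change == {"added": ["currency"], "removed": [], "changed": []}
# A non-empty diff flags the affected tests for review and a baseline update.
```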
Building a solid API testing strategy is the best way to ensure that your applications “work the same today as they did yesterday” (it’s more than just a catchy phrase). API testing allows you to build a solid framework for identifying defects at multiple layers of your application. These tests can all be automated and run continuously, so you can ensure that your application stays aligned with business expectations while remaining functionally precise. And because API tests work at a much lower level than UI tests, they are consistent, and the tests you build will last for a long time to come.