Rigorously test complex chains of APIs with a model-based approach
APIs provide businesses with the flexibility to innovate rapidly and extend their core offerings to new users; however, this flexibility also brings massive complexity for testing. A model-based approach can be used to match the speed and variability of modern software delivery.
Rigorous API testing must overcome massive complexity, reckoning with a vast number of possible test cases. The message data needed to reach endpoints must “cover” every distinct combination of data values. That includes values entered by users and the unique actions they perform against a system, as well as machine data generated by user activity, such as content types and session IDs.
API tests must also account for the journeys data can take through the APIs. They must cover the combinations of API actions and methods that can transform data on its way to each endpoint.
But APIs don't exist in isolation. By definition, they connect multiple systems or components, so every test is in some sense an end-to-end test. A rigorous set of API tests must therefore account for the vast number of combined actions and methods that can transform data as it flows through interconnected APIs.
An unrealistically simplified example might include 1,000 combinations of user-entered data, 1,000 combinations of machine-generated data, and 1,000 distinct journeys through the combined actions:
That’s already 1,000 × 1,000 × 1,000 = 1 billion combinations, each of which is a candidate for an API test. Rigorous API testing must therefore select a set of test cases that can be executed in-sprint, while still retaining sufficient API test coverage.
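One common way to shrink an exhaustive product of parameter values while retaining coverage is combinatorial (pairwise) selection: keep only enough combinations that every pair of values across every pair of parameters appears at least once. The sketch below is a minimal greedy implementation, with illustrative parameter names that are not taken from any particular API:

```python
from itertools import combinations, product

def pairwise_suite(params):
    """Greedy all-pairs selection: walk the full Cartesian product and keep
    a combination only if it covers at least one value pair not yet seen.
    Stops as soon as every pair of values across every pair of parameters
    is covered at least once."""
    keys = list(params)
    # every ((param, value), (param, value)) target still to be covered
    uncovered = {
        ((a, va), (b, vb))
        for a, b in combinations(keys, 2)
        for va in params[a]
        for vb in params[b]
    }
    suite = []
    for combo in product(*params.values()):
        row = dict(zip(keys, combo))
        newly = {
            ((a, row[a]), (b, row[b])) for a, b in combinations(keys, 2)
        } & uncovered
        if newly:                # keeps only combinations that add coverage
            suite.append(row)
            uncovered -= newly
        if not uncovered:
            break
    return suite

# Hypothetical API parameters: 4 x 4 x 4 = 64 exhaustive combinations,
# but far fewer are needed to cover every value pair.
params = {
    "payment": ["card", "paypal", "bank", "voucher"],
    "currency": ["EUR", "USD", "GBP", "JPY"],
    "channel": ["web", "mobile", "api", "pos"],
}
suite = pairwise_suite(params)
```

Greedy selection is not optimal (dedicated covering-array tools do better), but it shows the principle: coverage-driven selection scales far more gently than the raw product of parameter values.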
Too many tests, not enough time!
Unfortunately, the testing techniques used in API testing are often too manual and unsystematic for rigorous API testing. Business-critical APIs risk being under-tested at each point of the testing lifecycle:
- Creating API tests one-by-one in test tools or through scripting is too slow and ad hoc to hit even a fraction of the possible combinations.
- Expected results are hard to define from service definitions and requirements. Second-guessing whether a Response is "correct" undermines the reliability of API testing.
- Test data then lacks the majority of combinations needed for rigorous API testing. Low-variety copies of production data focus on expected scenarios that have occurred in the past. They lack outliers and negative combinations, as well as data for testing unreleased functionality.
- When it comes to API test execution, testers often lack access to in-house and third-party systems. Components might be unfinished or in use by another team, or a third party might not provide sandboxes for testing. Environmental constraints therefore further undermine API testing agility.
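When a dependency is unfinished or has no sandbox, one common remedy is to stand in a lightweight virtual service that returns canned Responses for known Requests. A minimal sketch, with invented endpoints and payloads for illustration only:

```python
# Canned Request -> Response pairs standing in for a third-party API
# that offers no test sandbox. Endpoints and payloads are hypothetical.
CANNED = {
    ("GET", "/rates/EUR"): (200, {"base": "EUR", "USD": 1.09}),
    ("POST", "/payments"): (201, {"status": "accepted"}),
}

def virtual_service(method, path):
    """Return the recorded Response for a known Request, and a 501 for
    any Request the virtual service has not modelled."""
    return CANNED.get((method, path), (501, {"error": "not virtualised"}))
```

Tests run against the stub exactly as they would against the real component, so execution no longer waits on environment availability.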
Testing complex chains of APIs instead requires an integrated and automated approach. API testers must be able to identify the smallest set of API tests needed for API testing rigour, systematically creating the test data and environments needed to execute them.
Model-Based API Testing
To overcome the complexity of API call chains, teams can benefit from a model-based approach to API testing, in which testers can generate everything needed for rigorous API testing from easy-to-use models.
Here's how it works:
- Model-based test generation creates API tests that “cover” every distinct combination of data and method involved across chains of APIs. Coverage algorithms are applied to mathematically precise models, which are built quickly from imported service definitions and message recordings. Dragging and dropping the reusable flowcharts assembles end-to-end tests for complex chains of APIs, enabling rigorous testing within short iterations.
- Accurate test data and expected results are generated simultaneously for every test. Expected results are simply the end blocks in the flowcharts, and Test Modeller finds or generates data “just in time” for every test it generates. API testers can select from a comprehensive range of data generation functions and repeatable Test Data Management (TDM) processes at the model level. These resolve “just in time” during test generation, compiling coherent data sets tailor-made for each end-to-end test.
- Virtual data generation produces the Request-Response pairs needed to simulate missing or unavailable components. Virtual data generation creates accurate Responses for every possible Request. This repeatable TDM process is also called during test generation or execution, ensuring that each test generated from the central models is equipped with accurate test data and environments.
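The generation step described above can be pictured as walking a flowchart model: every path from the start block to an end block becomes one end-to-end test, and the end block carries the expected result. A toy sketch, assuming an invented checkout flow (the node names and expected results are illustrative, not from a real model):

```python
# Hypothetical flowchart: each node is an API action; end nodes (no
# successors) are the end blocks that carry expected results.
MODEL = {
    "start": ["create_user", "login_existing"],
    "create_user": ["add_to_cart"],
    "login_existing": ["add_to_cart"],
    "add_to_cart": ["pay_card", "pay_declined"],
    "pay_card": [],       # end block
    "pay_declined": [],   # end block
}
EXPECTED = {
    "pay_card": (200, "order confirmed"),
    "pay_declined": (402, "payment declined"),
}

def generate_tests(model, expected, node="start", path=None):
    """Depth-first walk of the flowchart: every start-to-end path becomes
    one end-to-end API test, with the end block's expected result attached."""
    path = (path or []) + [node]
    if not model[node]:  # reached an end block
        yield {"steps": path[1:], "expected": expected[node]}
        return
    for successor in model[node]:
        yield from generate_tests(model, expected, successor, path)

tests = list(generate_tests(MODEL, EXPECTED))
```

Because tests, data, and expected results all derive from one model, updating the flowchart regenerates everything consistently, which is what keeps maintenance inside short iterations.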
With this integrated approach, QA teams can themselves generate everything needed for rigorous API testing. Maintaining central flowcharts keeps the tests, data and virtual services aligned, testing complex chains of APIs within short iterations.
Download the latest Curiosity-Parasoft eBook to discover how this approach can maximise the speed and rigour of your API testing.
Tom Pryce is a technologically hands-on Communication Manager with Curiosity Software Ireland. His interests include Model-Based Testing, test data management, and Robotic Process Automation. He tweets under @Curiositysoft.