API Testing: Top 5 Myths [Infographic]
We're fast approaching a time when all communications between humans and machines, as well as machines and machines, will ultimately be driven via APIs. To truly protect your brand in this "API Economy," it's essential to have solid processes for the identification, integration, and testing of APIs in conjunction with your business priorities.
After all, if your application fails to deliver the expected business results, your customers and partners won't care if that failure stems from code you developed internally or from an external API that you've integrated. If you consume it, you own it.
With so much at stake, there's no better time than the present to examine the reality behind some of the most common API testing myths emerging across the industry…
API Testing Myth #1: We don't need to test the APIs we consume—especially if they have SLAs
If an API that your application relies upon fails, the user naturally assumes that your application is "buggy," regardless of whether the fault lies within the components you developed or the APIs you are consuming. Your application is your responsibility—including whatever open source software and APIs your organization decided to adopt. When you gain the functionality provided by third parties, you also expose yourself to whatever risks they introduce (e.g., Heartbleed, or outages stemming from Facebook Connect glitches).
Whenever you integrate an API into your own transactions, you're assuming the risks associated with that API's integrity (or lack thereof). Finger pointing does little to foster customer satisfaction and brand loyalty. Sure, the APIs that other organizations expose should be reliable and secure—but if they're not, do you want to find out now or later…after they've opened the door to problems such as brand erosion, customer abandonment, and lost revenue?
How rigorously you need to test each API that you consume really boils down to how critical it is for your business process. If you're responsible for an airline's online check-in process, an API that adds a weather forecast to the "print-at-home" boarding pass probably doesn't warrant extensive testing. However, if you need to interact with a partner API to deliver the appropriate flight data for codeshare flights, this is a prime candidate for more aggressive testing.
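For a business-critical consumed API like the codeshare example, one lightweight safeguard is a contract check that runs against the partner's responses on every build. The sketch below is a minimal illustration, not Parasoft's method: the field names and types are hypothetical stand-ins for whatever the real partner API returns.

```python
# Hedged sketch: a contract check for a consumed partner API.
# The field names and types below are hypothetical -- adapt them
# to the response schema of the API you actually integrate.

REQUIRED_FIELDS = {
    "flight_number": str,
    "operating_carrier": str,
    "departure_time": str,
    "status": str,
}

def check_flight_contract(payload: dict) -> list:
    """Return a list of contract violations; an empty list means the payload passes."""
    violations = []
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in payload:
            violations.append("missing field: %s" % field)
        elif not isinstance(payload[field], expected_type):
            violations.append(
                "wrong type for %s: %s" % (field, type(payload[field]).__name__)
            )
    return violations
```

Running a check like this against live (or recorded) partner responses surfaces breaking changes in the consumed API before your customers do.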
API Testing Myth #2: It's already covered by our GUI testing
When we ask prospective clients how they approach API testing, we often hear, "It's covered by our GUI testing." If a GUI is indeed available (a big if with APIs), you will of course want to exercise the API via the GUI as part of your test plan; GUI testing is undeniably an indispensable part of any end-to-end testing process. However, GUI testing alone doesn't effectively exercise APIs. Consider the following…
If the API is publicly exposed (or somehow discoverable), it could be accessed directly. Are you confident that it can handle the broad scope of inputs it could face from both innocent misuses and malicious attacks?
Obtaining thorough API coverage by manipulating the application at the UI level is challenging. It might be feasible to test some "happy paths" and a selection of corner cases and negative test scenarios. However, discoverable APIs need to be exercised against a broad range of behavior, data, performance, and security scenarios—and such conditions are difficult, if not impossible, to configure from the UI. Without such extensive testing, you can't rest assured that your test results are predictive of real-world behavior, and you can't be confident that the API is robust enough to withstand API exploits.
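The kinds of inputs that are awkward to produce through a GUI are trivial to fire at the API layer directly. The sketch below illustrates the idea with a hypothetical `validate_username` handler standing in for whatever input processing your API performs; the negative inputs are typical innocent-misuse and attack payloads.

```python
# Hedged sketch: negative testing at the API layer, bypassing any GUI.
# `validate_username` is a hypothetical stand-in for your API's input
# handling; the policy (3-20 word characters) is an example only.

import re

def validate_username(raw: str) -> bool:
    """Accept only 3-20 alphanumeric/underscore characters (example policy)."""
    return bool(re.fullmatch(r"[A-Za-z0-9_]{3,20}", raw))

NEGATIVE_INPUTS = [
    "",                           # empty input
    "ab",                         # below minimum length
    "x" * 10_000,                 # oversized payload
    "alice'; DROP TABLE users;",  # SQL-injection attempt
    "<script>alert(1)</script>",  # XSS attempt
    "ali\x00ce",                  # embedded null byte
]

def run_negative_suite() -> list:
    """Return the hostile inputs that were wrongly accepted (empty list = pass)."""
    return [s for s in NEGATIVE_INPUTS if validate_username(s)]
```

A GUI would typically block many of these inputs before they ever reach the API, which is exactly why they must also be tested at the API level, where an attacker would send them.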
Want to learn more? Get the complete Top 5 API Testing Myths paper here.
Parasoft’s industry-leading automated software testing tools support the entire software development process—from the first line of code, through unit and functional testing, to performance and security testing—leveraging simulated test environments along the way.