API Integration Test Strategy
Enterprise Integration Points Testing
Author: Sai Krishna
Creation Date:
Last Updated:
Version: Initial Draft
Introduction
Integration is a topic that cannot be ignored in enterprise applications, not only because integration with external systems is error prone, but also because integration points are hard to test. This article introduces a broadly applicable testing strategy for integration points that improves the coverage, speed, reliability, and reproducibility of testing, and can therefore serve as a reference for implementing and testing integration-heavy applications.
1. Contract Tests
An API represents a contract between two or more applications. The contract describes how to interact with the interface, what services are available, and how to invoke them. This contract is important because it serves as the basis for communication between the parties.
The first and most basic type of API test is the contract test, which tests the service contract itself (Swagger, PACT, WSDL, or RAML). This type of test validates that the contract is written correctly and can be consumed by a client. It works by creating a series of tests that pull in the contract and validate that it is well-formed and consumable, as in the sketch below.
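As a minimal sketch, assuming a JSON Swagger/OpenAPI document at a hypothetical URL, a Groovy contract test could pull in the document and assert that it is structurally usable by a client:

    import groovy.json.JsonSlurper

    // Hypothetical location of the service contract; substitute your own.
    def contractUrl = 'https://api.example.com/v1/swagger.json'
    def contract = new JsonSlurper().parseText(new URL(contractUrl).text)

    // A consumable contract declares its spec version and at least one resource.
    assert (contract.swagger ?: contract.openapi) : 'contract must declare a spec version'
    assert contract.paths : 'contract must expose at least one path'

    // Every path should define at least one HTTP operation a client can invoke.
    contract.paths.each { path, ops ->
        assert ops : "path ${path} defines no operations"
    }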
2. Component Tests
Component tests are like unit tests for the API: you take the individual methods available in the API and test each one of them in isolation. You create these tests by making a test step for each method or resource that is available in the service contract.
The easiest way to create component tests is to consume the service contract and let your tooling generate the clients from it. You can then data-drive each individual test case with positive and negative data to validate that the responses that come back have the following characteristics (a sketch follows the list):
The request payload is well-formed (schema validation)
The response payload is well-formed (schema validation)
The response status is as expected (200 OK, SQL result set returned, or even an error if that's what you're going for)
The response error payloads contain the correct error messages
The response matches the expected baseline. This can take two forms:
o Regression/diff - the response payload looks exactly the same from call to call (a top-down approach where you essentially take a snapshot of the response and verify it every time). This can also be a great catalyst to identify API change (more about that later).
o Assertion - the individual elements in the response match your expectations (this is a more surgical, bottom-up approach targeted at a specific value in the response).
The service responds within an expected timeframe
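A minimal data-driven sketch of such a component test in Groovy, assuming a hypothetical users resource, illustrative positive/negative inputs, and an assumed 2-second response-time budget:

    import groovy.json.JsonSlurper

    // Hypothetical endpoint and data; replace with your own resource and cases.
    def endpoint = 'https://api.example.com/v1/users'
    def cases = [
        [id: '42',  expectedStatus: 200],   // positive input
        [id: 'abc', expectedStatus: 400],   // negative input: malformed id
    ]

    cases.each { c ->
        long start = System.currentTimeMillis()
        def conn = new URL("${endpoint}/${c.id}").openConnection()
        int status = conn.responseCode
        long elapsed = System.currentTimeMillis() - start

        assert status == c.expectedStatus : "id=${c.id}: got ${status}"
        if (status == 200) {
            // Lightweight schema check: the fields we rely on must be present.
            def body = new JsonSlurper().parseText(conn.inputStream.text)
            assert body.id && body.name : 'response is missing required fields'
        }
        // The service must respond within the expected timeframe (assumed 2 s).
        assert elapsed < 2000 : "call took ${elapsed} ms"
    }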
3. Scenario Tests
Scenario testing tends to be what most people think of when they think about API testing. In this technique, you assemble the individual component tests into a sequence. There are two common ways to derive these sequences:
1. Review the user story to identify the individual API calls that are being made.
2. Exercise the UI and capture the traffic being made to the underlying APIs.
Scenario tests allow you to understand whether defects might be introduced by combining different data points, as in the sketch below.
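Assuming hypothetical order endpoints, a two-step Groovy scenario might chain component calls so that the id captured from the first response drives the second request:

    import groovy.json.JsonSlurper

    def slurper = new JsonSlurper()
    def base = 'https://api.example.com/v1'   // placeholder base URL

    // Step 1: create an order and capture its id from the response.
    def create = new URL("${base}/orders").openConnection()
    create.requestMethod = 'POST'
    create.doOutput = true
    create.setRequestProperty('Content-Type', 'application/json')
    create.outputStream << '{"item":"book","qty":1}'
    assert create.responseCode == 201
    def orderId = slurper.parseText(create.inputStream.text).id

    // Step 2: feed the captured id into the next call of the sequence.
    def fetch = new URL("${base}/orders/${orderId}").openConnection()
    assert fetch.responseCode == 200
    assert slurper.parseText(fetch.inputStream.text).item == 'book'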
4. Performance Tests
Performance testing is usually relegated to the end of the testing process, in a performance-specific test
environment. This is because performance testing solutions tend to be expensive, require specialized skill
sets, and require specific hardware and environments. This is a big problem because APIs have service
level agreements (SLAs) that must be met in order to release an application. If you wait until the very last
moment to do your performance testing, failures to meet the SLAs can cause huge release delays.
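One way to shift a basic SLA check earlier is to reuse a functional call and time it. The following Groovy sketch assumes a hypothetical endpoint, 20 samples, and an illustrative 500 ms SLA:

    // The endpoint and the 500 ms SLA are assumptions for illustration.
    def endpoint = new URL('https://api.example.com/v1/users/42')
    def samples = (1..20).collect {
        long start = System.nanoTime()
        assert endpoint.openConnection().responseCode == 200
        (System.nanoTime() - start) / 1_000_000   // elapsed milliseconds
    }

    // Nearest-rank 95th percentile: the 19th of 20 sorted samples.
    def p95 = samples.sort()[18]
    assert p95 < 500 : "p95 latency ${p95} ms exceeds the 500 ms SLA"

This is not a substitute for a full load test, but it surfaces gross SLA violations long before the dedicated performance phase.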
5. Security Tests
Authentication: identifying the end user with the key.
Authorization: providing the identified user access to the correct resources/data through an access token, using private and public keys.
Encryption: hiding information from unauthorized access.
Signatures: ensuring information integrity, checking that API requests or responses have not been tampered with in transit. Signatures are short-lived and expire automatically a few seconds after being issued or accessed. A sketch of the first two checks follows.
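As a Groovy sketch of the authentication and authorization checks, against a hypothetical protected resource and with a placeholder token:

    def resource = 'https://api.example.com/v1/accounts/7'   // hypothetical

    // Authentication: without credentials the API must refuse access.
    def anonymous = new URL(resource).openConnection()
    assert anonymous.responseCode in [401, 403] : 'unauthenticated call was not rejected'

    // Authorization: with a valid access token the same call should succeed.
    def authorized = new URL(resource).openConnection()
    authorized.setRequestProperty('Authorization', 'Bearer <access-token>')
    assert authorized.responseCode == 200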
6. Omnichannel Tests
Because of the multiple interfaces that applications interact with (mobile, web, APIs, databases…), you will run into gaps in test coverage if you test any one of these in isolation, missing the subtleties of the complex interactions between these interfaces.
7. Managing Change
Change is one of the most important indicators of risk to your application, and it can occur in many forms.
Point-to-Point Messaging Model: also known as the P2P model, in which each message is delivered to exactly one consumer via a queue. [Diagram: typical point-to-point messaging model in a messaging system]
Point-to-Point Testing (a JMS sketch follows the two models below):
Durable Messaging Model: the durable model is also known as the persistent messaging model. In this model, messages are stored in some kind of store in the JMS server until they are properly delivered to the destination.
Non-Durable Messaging Model: the non-durable model is also known as the non-persistent messaging model. In this model, messages are not stored in the JMS server.
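A point-to-point sketch using the standard javax.jms API in Groovy, assuming an ActiveMQ broker at a placeholder address and an illustrative queue name:

    import javax.jms.*
    import org.apache.activemq.ActiveMQConnectionFactory   // assumed broker

    def conn = new ActiveMQConnectionFactory('tcp://localhost:61616').createConnection()
    conn.start()
    def session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE)
    def queue = session.createQueue('TEST.QUEUE')

    // Durable delivery: the message is persisted until it reaches a consumer.
    def producer = session.createProducer(queue)
    producer.deliveryMode = DeliveryMode.PERSISTENT
    producer.send(session.createTextMessage('order-created'))

    // In the P2P model exactly one consumer receives each message.
    def received = (TextMessage) session.createConsumer(queue).receive(5000)
    assert received?.text == 'order-created'
    conn.close()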
Publish/Subscribe Testing:
Each message can have multiple consumers.
Publishers and subscribers have a timing dependency: a client that subscribes to a topic can consume only messages published after the client has created its subscription, and the subscriber must remain active in order to consume messages.
Note: in total, the publisher (PUB) side needs more testing time (around 70%), while the subscriber (SUB) side is tested according to the target systems; the sketch below illustrates the model.
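A publish/subscribe sketch under the same assumptions (javax.jms, ActiveMQ at a placeholder address), illustrating both characteristics above:

    import javax.jms.*
    import org.apache.activemq.ActiveMQConnectionFactory   // assumed broker

    def conn = new ActiveMQConnectionFactory('tcp://localhost:61616').createConnection()
    conn.start()
    def session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE)
    def topic = session.createTopic('TEST.TOPIC')

    // Subscriptions must exist before publishing (the timing dependency above).
    def sub1 = session.createConsumer(topic)
    def sub2 = session.createConsumer(topic)

    session.createProducer(topic).send(session.createTextMessage('price-update'))

    // Unlike P2P, every active subscriber receives its own copy of the message.
    [sub1, sub2].each { sub ->
        assert ((TextMessage) sub.receive(5000))?.text == 'price-update'
    }
    conn.close()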
Typical defects to look for in messaging tests include:
Unused flags
Multi-threading issues
Improper errors
The SoapUI Project is the user layer from which the test request is invoked; it internally utilizes the data source for the different request inputs.
The Framework Layer is the logical layer where all the business logic, reusable libraries, and the validation mechanism are built; these apply to all of the requests fired from the presentation layer.
Result Reporting is the physical layer where test outcomes are stored.
The Ready API Automation Framework provides:
Pre-defined properties: the Automation Framework has pre-defined properties that help Groovy scripts set configurations with the values test engineers create in the Test Suite, as in the sketch below.
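As a sketch, inside a Ready API Groovy script test step, a script might read a Test Suite property (the property name baseUrl is an assumption):

    // 'testRunner', 'context', and 'log' are injected by Ready API into
    // Groovy script test steps; 'baseUrl' is an assumed Test Suite property.
    def baseUrl = testRunner.testCase.testSuite.getPropertyValue('baseUrl')
    log.info "Running against ${baseUrl}"

    // Property expansion is an equivalent way to read the same value.
    assert baseUrl == context.expand('${#TestSuite#baseUrl}')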
JDBC: using JDBC, the data in the response is also checked and validated against the database (see the sketch below).
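A Groovy sketch of such a database cross-check, with assumed connection details, query, and response value:

    import groovy.sql.Sql

    // Connection details, driver, and query are assumptions for illustration.
    def sql = Sql.newInstance('jdbc:mysql://localhost:3306/shop', 'user', 'pass',
                              'com.mysql.cj.jdbc.Driver')
    def row = sql.firstRow('SELECT status FROM orders WHERE id = ?', [42])

    // 'responseStatus' stands in for a value parsed from the API response.
    def responseStatus = 'SHIPPED'
    assert row.status == responseStatus : 'API response disagrees with the database'
    sql.close()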
Test runner: the test runner allows you to run Ready API tests and export results from the command line as well; an illustrative invocation follows.
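For example (suite name and paths are illustrative), a suite can be run headlessly, where -s selects the suite, -r prints a summary report, -a exports all results, -j produces JUnit-style reports, and -f sets the output folder:

    testrunner.bat -s"Smoke Suite" -r -a -j -f C:\results MyProject.xml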
Reports/Logs: for every test run, a log is generated and stored in the location specified in the configuration. The log contains the request and response for each run, stored with a timestamp. At the end, an Excel report confirms the coverage of the total APIs executed and their overall pass/fail/not-run status.
Techniques:
Groovy script for data transfer.
Running the test suite using testrunner.bat.
Storing the results dynamically using a timestamp (see the sketch after this list).
Data checking through JDBC.
Generating the execution log and test summary report through custom code (Groovy script).
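A sketch of the timestamp technique in Groovy, with an assumed results path, so that each run's output lands in its own folder and is never overwritten:

    // Name each run's log folder with the execution time; path is assumed.
    def stamp = new Date().format('yyyyMMdd_HHmmss')
    def runDir = new File("C:/results/run_${stamp}")
    runDir.mkdirs()
    new File(runDir, 'execution.log') << "Run started at ${stamp}\n"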