Test cases can be executed in several ways:
directly from the Int4 APITester Cockpit
by using the available APIs: REST, SOAP, RFC, or SAP GUI transaction; read here for more details: API to integrate Int4 IFTT Suite with other software (see the sketch after this list)
by running the dedicated SAP GUI transaction /INT4/IFTT_RUN - this also allows scheduling test runs in the background; read more about background scheduling here: Background processing
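For illustration, here is a minimal sketch of triggering a test run over the REST API from Python. The host, endpoint path, payload shape, and authentication below are assumptions made for the example; the actual contract is described in the API integration documentation linked above.

```python
# Minimal sketch of triggering a test run over the REST API.
# NOTE: the endpoint path, payload fields, and auth scheme are illustrative
# assumptions -- consult "API to integrate Int4 IFTT Suite with other
# software" for the actual contract.
import requests

BASE_URL = "https://sap-host:44300/int4/apitester"  # hypothetical host/path

def run_test_cases(test_case_ids, user, password):
    """Ask the backend to execute the given test cases and return run info."""
    response = requests.post(
        f"{BASE_URL}/runs",                  # hypothetical resource
        json={"testCases": test_case_ids},   # hypothetical payload shape
        auth=(user, password),
        timeout=300,
    )
    response.raise_for_status()
    return response.json()                   # e.g. run ID and overall status

if __name__ == "__main__":
    print(run_test_cases(["0000000042"], "TESTUSER", "secret"))
```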
...
Note
All of the above methods use the same execution mechanics, with the exception of certain eCATT Test Scripts: if an eCATT script contains SAP GUI steps, the test must be run in SAP GUI mode, which is available through additional test parameters.
Warning
Customers are not allowed to execute test cases on productive systems.
Triggering test execution from Int4 APITester Cockpit
...
For certain scenarios, you might want to override the parameters defined in the Automation Object for a particular test run. Choose the parameter name and provide the new value; multiple parameters can be altered, as sketched below.
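As a sketch, a per-run parameter override might be expressed like this in an API request body; the field names here are hypothetical, not the product's actual schema.

```python
# Hypothetical request body with per-run parameter overrides.
# Multiple parameters can be overridden in a single run request.
run_request = {
    "testCases": ["0000000042", "0000000043"],
    "parameters": [                                  # per-run overrides
        {"name": "TARGET_SYSTEM", "value": "QAS"},   # hypothetical parameter
        {"name": "TIMEOUT_SEC",   "value": "120"},
    ],
}
```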
Load / Performance Testing
...
Int4 Suite can be used to test the behaviour of an integration platform under load. This is enabled by the following features:
...
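Assuming the hypothetical REST helper from the earlier sketch, a simple way to generate load is to submit many runs concurrently and record their durations:

```python
# Sketch of generating load: submit the same test cases concurrently and
# measure wall-clock time per run. Reuses the hypothetical run_test_cases()
# helper from the REST example above.
import time
from concurrent.futures import ThreadPoolExecutor

def timed_run(test_case_ids):
    start = time.monotonic()
    result = run_test_cases(test_case_ids, "TESTUSER", "secret")
    return result, time.monotonic() - start

with ThreadPoolExecutor(max_workers=10) as pool:     # 10 parallel clients
    futures = [pool.submit(timed_run, ["0000000042"]) for _ in range(100)]
    durations = [f.result()[1] for f in futures]

print(f"avg {sum(durations) / len(durations):.2f}s over {len(durations)} runs")
```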
The test run summary screen details the specifics of the test run, showing the status of each test case in the list at the bottom and the overall run status at the top.
In the summary section, the user can navigate to:
Performance Analysis (if performance data is available)
Error Summary
Legacy Report
Info
A test run is successful if all of the test cases in the run are successful. A single test case failure fails the run (but does not stop it).
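The rule above can be expressed compactly; the status value names here are illustrative assumptions, not the product's internal codes:

```python
# A run passes only if every test case passes; warnings still pass, and a
# single error fails the run without stopping the remaining cases.
def run_status(case_statuses):
    """case_statuses: list of 'SUCCESS', 'WARNING', or 'ERROR'."""
    return "ERROR" if "ERROR" in case_statuses else "SUCCESS"

assert run_status(["SUCCESS", "WARNING", "SUCCESS"]) == "SUCCESS"
assert run_status(["SUCCESS", "ERROR", "SUCCESS"]) == "ERROR"
```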
Test case information follows the details from the APITester Cockpit. Clicking the arrow at the right of a row opens the test details for that particular test case.
Error Summary
...
In this section, errors from all test cases are summed up, aggregated by message text and location, with a message count shown. The user can see the list of test cases that contain a specific message by clicking the button at the end of the row. Clicking one of the test cases navigates to its run details.
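A sketch of this aggregation, with illustrative sample data:

```python
# Group error messages by (text, location), count occurrences, and remember
# which test cases produced each message. Field names are illustrative.
from collections import defaultdict

errors = [  # (test_case_id, message_text, location) -- sample data
    ("TC_001", "Field MATNR missing", "/order/item[1]"),
    ("TC_002", "Field MATNR missing", "/order/item[1]"),
    ("TC_002", "Unexpected value", "/order/header"),
]

summary = defaultdict(list)
for case_id, text, location in errors:
    summary[(text, location)].append(case_id)

for (text, location), cases in summary.items():
    print(f"{len(cases):3d}x  {text}  @ {location}  -> {sorted(set(cases))}")
```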
Execution Report - Test Case Run Details
...
Payload Comparison shows the reference message (on the left) and the current execution (on the right) in a structurally aligned way, highlighting the differences.
Yellow - the difference is acceptable and triggers a warning; the test case will pass
Red - the difference is unexpected and triggers an error; the test case will fail
The arrows on the left allow you to jump quickly between differences:
Black arrows jump to the next difference
...
Red arrows jump to the next error
...
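To make the pass/fail semantics concrete, here is a hedged sketch of such a comparison: payloads flattened to path-value pairs, with a hypothetical whitelist standing in for the acceptable-difference rules.

```python
# Classify each difference between reference and current payloads as a
# WARNING (acceptable, path whitelisted) or an ERROR. The ruleset shape is
# an assumption for illustration, not the product's actual format.
ACCEPTABLE_PATHS = {"/message/header/timestamp", "/message/header/guid"}

def compare_payloads(reference, current):
    """reference/current: dicts mapping XPath-like paths to values."""
    findings = []
    for path in sorted(set(reference) | set(current)):
        ref, cur = reference.get(path), current.get(path)
        if ref == cur:
            continue
        severity = "WARNING" if path in ACCEPTABLE_PATHS else "ERROR"
        findings.append((severity, path, ref, cur))
    return findings

ref = {"/message/header/guid": "A1", "/message/body/amount": "100.00"}
cur = {"/message/header/guid": "B2", "/message/body/amount": "99.00"}
for severity, path, ref_val, cur_val in compare_payloads(ref, cur):
    print(severity, path, ref_val, "->", cur_val)
```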
The backend validation feature connects to the backend databases, extracts the relevant documents, and compares them according to the DB Validation Ruleset assigned to the Automation Object.
Depending on the Automation Object settings, the backend document can be fetched during test case creation or during test case execution.
The comparison is executed at field level, for each of the defined tables and fields. As with Payload Validation, the expectation is that the documents will be identical. Known and expected differences can be defined in the DB Validation Ruleset for fine control of the results.
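As an illustration of field-level comparison against a ruleset of known differences (the document and ruleset shapes here are assumptions, not the product's format):

```python
# Compare the reference document and the freshly fetched document table by
# table, field by field; a DB Validation Ruleset marks known differences as
# acceptable. Table/field names below are hypothetical examples.
ruleset = {("VBAK", "ERDAT")}   # (table, field) pairs allowed to differ

def validate_backend(reference_doc, current_doc):
    """Docs: {table: {field: value}}. Returns (severity, table, field, ...)."""
    findings = []
    for table, ref_fields in reference_doc.items():
        cur_fields = current_doc.get(table, {})
        for field, ref_val in ref_fields.items():
            cur_val = cur_fields.get(field)
            if ref_val == cur_val:
                continue
            severity = "WARNING" if (table, field) in ruleset else "ERROR"
            findings.append((severity, table, field, ref_val, cur_val))
    return findings
```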
...
Show reference document - shows the reference document data in gray
Show technical field names - shows the technical names of tables and fields
Color coding is used to highlight the report results:
Green - data is the same or matching, based on the DB Validation rules
Yellow - the difference is acceptable and triggers a warning; the test case will pass
Red - the difference is unexpected and triggers an error; the test case will fail
...