Test cases can be executed in several ways:
directly from the Int4 APITester Cockpit
by using the available APIs: REST, SOAP, RFC, SAP GUI transaction (see the sketch after this list); read here for more details: API to integrate Int4 IFTT Suite with other software
by running the dedicated SAP GUI transaction /INT4/IFTT_RUN - this also allows scheduling test runs in the background; read more about background scheduling here: Background processing
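As an illustration of the API option, the sketch below shows how an external tool might trigger a test run over REST. The host, resource path, field names and credentials are hypothetical placeholders, not the product's actual interface; the real endpoints and parameters are described in the linked API documentation.

```python
import requests

# Minimal sketch of triggering a test run from an external tool over REST.
# The host, resource path, payload fields and credentials below are
# hypothetical placeholders - the actual interface is documented in
# "API to integrate Int4 IFTT Suite with other software".
BASE_URL = "https://int4-host.example.com/int4/api"   # placeholder host

response = requests.post(
    f"{BASE_URL}/testruns",                            # hypothetical resource
    json={
        "testRunName": "Regression run",               # free-form run name
        "environment": "QA",                           # target environment
        "testRunType": "REGRESSION",                   # reporting classification
    },
    auth=("API_USER", "API_PASSWORD"),                 # placeholder credentials
    timeout=30,
)
response.raise_for_status()
print("Test run accepted:", response.json())
```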
...
Test Run Name - a free-form text description used to identify the test run; defaults to the current date and time
Environment - the environment where the test cases should be executed; defaults to the one specified on the test case or at folder level
Test Run Type - select the applicable type of test run; used for internal reporting to differentiate between various test runs (e.g. unit testing, regression, test design). Some Test Run Types may have special features (e.g. Change Request Test enables Change Request Payload Validation rules, if they are present)
Execute New TCs Only - executes only those test cases which have not yet been tested
Debug Mode - provides a detailed log, useful for test design and debugging
Execute via SAP GUI - downloads and runs an SAP GUI link, which (on a configured system) starts SAP GUI and opens the testing transaction. This is needed for certain scenarios which utilize SAP GUI-based eCATT scripts.
Number of Attempts to Read TC Results - the number of times that Int4 Suite will poll for data after test execution (see the polling sketch after this list)
Delay Between Attempts to Read TC Result - the delay between consecutive attempts to read test results
Delay Between Execution/Validation
For Each Test Case - the delay between the start of test case execution and the first attempt to read TC results, applied to each test case individually
Once Per Test Run - as above, but applied once to all test cases in parallel.
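The polling parameters above interact roughly as follows; this is an illustrative sketch only (the callback and parameter names are made up, not Int4 Suite code): the run waits the configured delay, then polls for results up to the configured number of attempts, pausing between attempts.

```python
import time

def wait_for_tc_results(read_tc_results, attempts, delay_between_attempts,
                        delay_before_validation):
    """Illustrative polling loop mirroring the run parameters above.

    attempts                -> Number of Attempts to Read TC Results
    delay_between_attempts  -> Delay Between Attempts to Read TC Result
    delay_before_validation -> Delay Between Execution/Validation
    """
    # Initial wait between execution and the first validation attempt
    # (per test case, or once for the whole run, depending on the option).
    time.sleep(delay_before_validation)
    for attempt in range(attempts):
        results = read_tc_results()           # hypothetical callback
        if results is not None:               # data found -> proceed to validation
            return results
        if attempt < attempts - 1:            # not the last attempt -> wait and retry
            time.sleep(delay_between_attempts)
    return None                               # no results after all attempts
```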
...
Payload Comparison shows the reference message (on the left) and the current execution (on the right) in a structurally aligned way, highlighting the differences:
Yellow - the difference is acceptable and triggers a warning; the test case will pass
Red - the difference is unexpected and triggers an error; the test case will fail
Green - the difference is expected; the test case will pass (only for Change Request Test)
The arrows on the left allow you to jump quickly to the next difference.
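In other words, a single red difference is enough to fail the test case, while yellow and green differences let it pass. A small sketch of that verdict logic, with purely illustrative names:

```python
# Purely illustrative mapping of highlight colors to the test verdict
# described above; these names do not come from the product itself.
DIFFERENCE_OUTCOME = {
    "yellow": True,   # acceptable difference -> warning, test case passes
    "red":    False,  # unexpected difference -> error, test case fails
    "green":  True,   # expected difference (Change Request Test) -> passes
}

def test_case_passes(differences):
    """The test case fails as soon as any difference is classified as red."""
    return all(DIFFERENCE_OUTCOME[color] for color in differences)

print(test_case_passes(["yellow", "green"]))  # True  -> pass (with warning)
print(test_case_passes(["yellow", "red"]))    # False -> fail
```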
...
Show reference document - shows the reference document data in gray
Show technical field names - shows the technical names of tables and fields
Color coding is used to highlight report results:
Light Green - the data is the same or matches, based on DB Validation rules
Green - the difference is expected, based on Change Request Payload Validation rules
Yellow - the difference is acceptable and triggers a warning; the test case will pass
Red - the difference is unexpected and triggers an error; the test case will fail
...