Results Comparison

There are many options within TestBench that enable the detailed results of any test to be thoroughly analysed and validated. These options have been discussed earlier in this chapter and include file effects, Data Rules, job log messages, screens, parameters, data areas, data queue messages and reports. For any given Test Run it is also possible to run a comparison against any other Test Run within the same Test Case. Therefore if a Run has already been checked and the results are considered correct, this Run can be used as a baseline by which the success of other Runs can be measured. It can also be used for regression testing to ensure that a development has not had a wider impact than was expected.

Results comparisons can either be run on an ad hoc basis or they can be executed automatically at the end of a Test Run. The results of either method are then stored in the TestBench results database with all other test results until another comparison is generated or the Run is cleared or deleted.

Work With Baselines
Baselines are used to identify key Test Runs against which other Runs will be compared. A baseline might be a Test Run where all results have been validated and are correct, or it might be the pre-change version of a program against which the new version must be checked. Multiple baselines can exist for one Test Case: one for each script and one where no script was re-played.

Results comparison can be used without setting any baselines. However, if the Test Case Option has been selected to automatically run the results comparison at the end of the test, a baseline must exist. If it does not, no automatic comparison will take place.

Baselines are defined using F10 from the Work With Test Runs display.

Baseline Status A Test Run is marked as a baseline using option ‘2’ on the above screen. The baseline can be either active or inactive, but there can be only one active baseline for a script. When a new baseline is activated, all previous baselines for the script are made inactive, enabling the baseline history to be clearly viewed.

Options

2 – Edit Change the baseline settings; see below for more information. When a Test Run becomes a baseline, the Baseline Status on the above screen will be ‘Active’.

4 – Remove Use this option if you no longer wish this Test Run to be a baseline run. A confirmation window is displayed. The Baseline Status of ‘Active’ is cleared.

8 – Audit Every addition, update and deletion is tracked and displayed.

Function Keys

F8 – Expand/Drop Expand the single line display to two lines showing Test Run description and Script name if applicable.

F10 – All Baselines Toggle between viewing all Test Runs, all Baselines (active and inactive) or all active Baselines.

Baseline Settings
Baseline settings can be edited by keying an option ‘2’ next to a Test Run on the Work With Baselines display.

Baseline The Test Run number that is being edited.

Script The name of the script that was re-played during this Test Run. This field will be blank if no script was used. This applies to Native Record and Play scripts.

Status A Baseline can be either active or inactive. If a Baseline already exists for this script, creating another Baseline for the same script will cause the previous one to become inactive. The default value is ‘1’ for all newly created Baselines.

Response The percentage margin within which screen response times must fall during the comparison in order to match, as it is very unlikely that response times will be exactly the same on two different runs. This is defaulted from a system value but can be overridden here.
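
As an illustration only, the sketch below shows how such a percentage margin might be applied when deciding whether two response times match; the function name and values are hypothetical and this is not TestBench's actual implementation.

def response_times_match(baseline_ms: float, this_run_ms: float, margin_pct: float) -> bool:
    """Return True if this run's response time is within margin_pct of the baseline."""
    # A zero baseline response time only matches another zero response time.
    if baseline_ms == 0:
        return this_run_ms == 0
    deviation_pct = abs(this_run_ms - baseline_ms) / baseline_ms * 100
    return deviation_pct <= margin_pct

# Example: a 10% margin tolerates 1.05s against a 1.00s baseline, but not 1.20s.
print(response_times_match(1000, 1050, 10))  # True
print(response_times_match(1000, 1200, 10))  # False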

Description Optional field that enables text describing the Baseline to be keyed.

Elements These are the individual results that will be compared; by default all types are selected. Options ‘1’ and ‘2’ can be used to toggle comparison for each of these elements on and off. If an element is not active, it will not be used in any comparison to this Baseline and will not appear on the Test Run Summary screen accessed with option ‘15’ from the Work With Test Runs display. Option ‘3’ is only valid for Database Images and enables more detail to be specified for field and library matching. See the following section for more information.

Comparing Test Runs
Any Test Run can be compared to any other within the same Test Case using option ‘14’ from
Work With Test Runs. Comparisons can also be executed automatically at the end of a Test Run by selecting the Test Case option for Automatic Result Compare. If this is switched on, the Test Run will be compared to the Baseline for the same script name or the Baseline with no script if a script was not used during this run. If no suitable Baseline can be found the comparison will not take place. Warnings highlighting differences in the comparison will automatically be generated for the Test Run regardless of which comparison method is used.
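
The baseline selection described above can be pictured roughly as in the sketch below; the data structures and names are hypothetical and serve only to illustrate the matching rule.

def find_matching_baseline(baselines: list[dict], script_name: str | None) -> dict | None:
    """Pick the active baseline whose script matches the run's script
    (or the no-script baseline if the run used no script)."""
    for baseline in baselines:
        if baseline["status"] == "active" and baseline["script"] == script_name:
            return baseline
    return None  # no suitable baseline: the automatic comparison is skipped

baselines = [
    {"run": 12, "script": "ORDERS01", "status": "active"},
    {"run": 9,  "script": None,       "status": "active"},
]
print(find_matching_baseline(baselines, "ORDERS01"))  # run 12
print(find_matching_baseline(baselines, None))        # run 9 (no-script baseline)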

Option ‘14’ from Work With Test Runs produces the following screen:

Test Run The Test Run number that is being compared.

Compare Run This defaults to the current comparison Run or, if one exists, the matching Baseline Run; otherwise it will be zero. In all cases it can be overridden with the Test Run against which the selected Run should be compared.

Function Keys

F3 – Exit Exit the screen and return to the previous display.

F4 – Prompt Display a list of all Test Runs for this Test Case and their associated scripts if present, from which a comparison Test Run can be selected.

F8 – Compare Settings View the Compare Settings on the following screen.

F12 – Cancel Cancel and return to the previous screen.

Elements Pressing F8 Compare Settings will yield the Result Compare Module Selection window.
The elements listed are the individual results that will be compared; the default options are copied from the Baseline. Options ‘1’ and ‘2’ can be used to toggle comparison for each of these elements on and off. If an element is not active, it will not be used in the comparison and will not appear on the Test Run Summary screen accessed with option ‘15’ from the Work With Test Runs display. Option ‘3’ is only valid for Database Images and enables more detail to be specified for field and library matching. See the following section for more information.

Detail Field Comparison
The following screen is displayed when option ‘3’ is keyed next to the Database Images element on the above display.

File matching to include library
By default this option is switched off, which means that when two Test Runs are being compared, database images are matched on Sub Run, File and RRN only; the library in which the file resides is ignored. This is important, for example, if your Baseline was created by running tests over a different test library, or if you are using data protection, which creates a new temporary run-time library every time the test is executed. If the option is switched on, both the file and library name must be the same for the images to be considered a match and subsequently compared.
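
As a rough illustration of the matching rule (not TestBench's internal code), the sketch below shows how the matching key changes when the library is included; all names and values are hypothetical.

def image_match_key(image: dict, include_library: bool) -> tuple:
    """Build the key used to pair up database images between two Test Runs.
    By default the library is left out, so images still match when the runs
    used different test libraries."""
    key = (image["sub_run"], image["file"], image["rrn"])
    if include_library:
        key += (image["library"],)
    return key

baseline_image = {"sub_run": 1, "file": "CUSTOMER", "rrn": 42, "library": "TESTLIBA"}
this_run_image = {"sub_run": 1, "file": "CUSTOMER", "rrn": 42, "library": "TESTLIBB"}

# With the option off the images pair up despite the different libraries.
print(image_match_key(baseline_image, False) == image_match_key(this_run_image, False))  # True
print(image_match_key(baseline_image, True) == image_match_key(this_run_image, True))    # False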

Produce file compare report
Generate a summary report showing the high level results for each file comparison and a detailed report of all data differences encountered.

Method to compare database results
Either compare all files by their RRN or by their physical file keys. For files which have no keys on the physical file, the key sequence can be specified with option 1.

1 – Select Display a list of fields on the file which can then be excluded from the image comparison. For example, you may choose to ignore dates and times which will always present differences. Also specify a key sequence for the comparison if required.
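
The sketch below illustrates, in general terms, pairing records by RRN or by a key sequence and ignoring excluded fields such as timestamps; the field names are hypothetical and this is not a description of TestBench's internal logic.

def record_compare_key(record: dict, key_fields: list[str] | None) -> tuple:
    """Pair records either by their relative record number (RRN) or by the
    physical file's key fields / a user-specified key sequence."""
    if key_fields:
        return tuple(record[field] for field in key_fields)
    return (record["RRN"],)

def records_differ(baseline: dict, this_run: dict, excluded_fields: set[str]) -> bool:
    """Compare paired records field by field, ignoring excluded fields
    such as date or time stamps that always differ between runs."""
    fields = set(baseline) | set(this_run)
    return any(
        baseline.get(f) != this_run.get(f)
        for f in fields
        if f not in excluded_fields and f != "RRN"
    )

baseline = {"RRN": 1, "ORDNO": 1001, "STATUS": "A", "CHGTS": "2024-01-01-10.00.00"}
this_run = {"RRN": 1, "ORDNO": 1001, "STATUS": "A", "CHGTS": "2024-02-01-09.30.00"}

# Pair by the ORDNO key rather than RRN, then compare fields ignoring the timestamp.
print(record_compare_key(baseline, ["ORDNO"]))                            # (1001,)
print(records_differ(baseline, this_run, excluded_fields={"CHGTS"}))      # False: timestamp ignored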

Comparison Results – Summary
Results of all comparisons, whether executed automatically at the end of a Test Run or run manually with option ‘14’ from Work With Test Runs, can be viewed by using option ’15=Summary’ from the Work With Test Runs display.

If no comparisons have been executed for this Test Run, statistics for this Run only will be shown.

Test Element These are the areas of TestBench results that have been compared; the full list is Joblog, Database Counts, Database Images, Screens, Data Areas, Data Queues, MQ Messages, Program Calls and Reports. If the Test Run against which option ‘15’ was keyed does not have results stored for all of these elements, only those for which results exist will be shown. Further details about each of these elements are given below.

Description The specific details of the test elements that are being compared.

Baseline The summary information relating to each item for the baseline run (the run against which this run was compared). This column is not displayed if a comparison to another run has not previously been executed.

This Run/Test Run The summary information relating to each item for the run being compared.

Diff An asterisk in this column indicates that a difference has been found between the baseline run and the run being compared. This column is not displayed if this run has not previously been compared to another run.

Options

5 – Display View further details about the results element for each Sub Run. See below for more information.

Function Keys

F8 – Compare Settings Display the Test Run Result Comparison screen to view the settings that the comparison was executed with and optionally change them and execute the comparison again.

F21 – Run Options Display some basic details about both Test Runs involved in the comparison which may have an impact on the comparison results.

Comparison Results – Details
Each of the following screens is accessed with an option ‘5’ from the Test Run Summary screen.
Submitted jobs are indicated by a ‘/’ prior to the ID. They all have the following options if two runs have been compared; if not, only details pertaining to the current Run can be accessed.

5 – Display Baseline View the details stored in the results database for this element and Baseline Sub Run, as is accessible from the Sub Run Detail screen.

6 – Display Test Run View the details stored in the results database for this element and Sub Run being compared, as is accessible from the Sub Run Detail screen.

Joblog

A difference is indicated on the Test Run Summary screen if:

• The number of messages in any of the severity categories is different for any of the Sub Runs.

Therefore it is possible that the total number of messages in each Run is the same but a difference is still highlighted.
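
A minimal sketch of this rule, assuming hypothetical severity categories, is shown below; it simply compares the per-category message counts for each Sub Run.

from collections import Counter

def joblog_differs(baseline_msgs: list[tuple], this_run_msgs: list[tuple]) -> bool:
    """Flag a difference when the count of messages in any severity category
    differs for any Sub Run, even if the overall totals are equal."""
    # messages are (sub_run, severity_category) pairs
    return Counter(baseline_msgs) != Counter(this_run_msgs)

# Both runs contain three messages in total, yet the severity mix differs,
# so a difference is still highlighted. The category labels are hypothetical.
baseline = [(1, "00-29"), (1, "00-29"), (1, "30-99")]
this_run = [(1, "00-29"), (1, "30-99"), (1, "30-99")]
print(joblog_differs(baseline, this_run))  # True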

Database Counts

A difference is indicated on the Test Run Summary screen if:

• The number of records written, updated or deleted for any of the files impacted by any of the Sub Runs is different.
• The number of records written, updated or deleted and then rolled back for any of the files impacted by any of the Sub Runs is different.

Therefore it is possible that the summary figures are the same but a difference is still highlighted.

Database Images

The figures on the Test Run Summary screen refer to the number of records that were updated during the test and not the total number of updates. Therefore if a single record was updated more than once, the value for the Database Counts could be higher than the value for the Database Images.
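
The distinction can be illustrated with a small, hypothetical example: if one record is updated twice, the update count is two but only one distinct record is captured as an image.

# Illustrative only: two updates applied to the same record.
updates = [
    {"file": "CUSTOMER", "rrn": 7},  # first update to record 7
    {"file": "CUSTOMER", "rrn": 7},  # second update to the same record
]

database_count = len(updates)                                    # 2 updates counted
database_images = len({(u["file"], u["rrn"]) for u in updates})  # 1 record updated
print(database_count, database_images)  # 2 1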

A difference is indicated on the Test Run Summary screen if:

• Any of the database images for any of the Sub Runs is different.

Therefore it is possible that the summary figures are the same but a difference is still highlighted.

Key an option ‘5’ to view detailed comparison results as shown below. Refer to the results section of the Compare Cases chapter for more information on this display.

Screens

A difference is indicated on the Test Run Summary screen if:

• The number of screens replayed for any of the Sub Runs is different.
• The screen titles replayed for any of the Sub Runs are different (shown as an asterisk next to the ‘Number of screens’ item).
• The number of differences on any of the screens is different.
• The difference in response times on any of the screens falls outside of the response margin percentage.

Therefore it is possible that some of the summary figures are the same but a difference is still highlighted.
Screens relate to Native Record & Playback.

Data Areas

A difference is indicated on the Test Run Summary screen if:

• The number of data areas that were updated by any of the Sub Runs is different.

Only the first 25 characters of each data area are displayed on the above screen, but the complete data area is compared; therefore the difference may not be evident until options ‘5’ and ‘6’ are used.

Data Queues

A difference is indicated on the Test Run Summary screen if:

• The number of data queues involved in any of the Sub Runs is different.
• The number of messages sent or received by any of the Sub Runs is different.

Therefore it is possible that the summary figures are the same but a difference is still highlighted.

MQ Messages

A difference is indicated on the Test Run Summary screen if:

• The number of message queues involved in any of the Sub Runs is different.
• The number of messages sent or received by any of the Sub Runs is different.

Therefore it is possible that the summary figures are the same but a difference is still highlighted.

Program Calls

A difference is indicated on the Test Run Summary screen if:

• The number of calls for any given program and Sub Run combination is different.

Therefore it is possible that some of the summary figures are the same but a difference is still highlighted.

Reports

A difference is indicated on the Summary screen if:

• The reports (identified by name and user data) generated by any of the Sub Runs are different.
• The reports which do exist in both Sub Runs contain data differences.

2 – Edit Rules Modify the rules that control how the report comparison should take place. You can change the rules here and then re-run the result comparison to see them take effect. See the System chapter for a full explanation of the rule definition.

5 – Display View the report comparison results.

When option 5 is selected, a screen similar to the following is displayed. If key breaks are specified, the reports are compared for any information that matches; data that is unique to either the baseline report or the compare report is shown as missing or extra information.

Essentially, using identifiers allows a more accurate comparison to be executed and minimises the effect of one report having more or fewer entries than the other.

Each line which contains a difference is displayed with the text ‘Miss’ (line exists in the Baseline run only), ‘Extra’ (line exists in this run only) or ‘Diff’ (line exists in both runs but contains some different data). Missing and extra lines will only be encountered if key breaks have been set up within the comparison rules.
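
As a general illustration of this classification (not TestBench's implementation), the sketch below pairs report lines on a hypothetical break identifier and labels them Miss, Extra or Diff.

def classify_report_lines(baseline: dict[str, str], this_run: dict[str, str]) -> dict[str, str]:
    """Classify report lines keyed by a break identifier:
    'Miss'  - line exists in the Baseline run only,
    'Extra' - line exists in this run only,
    'Diff'  - line exists in both runs but contains different data."""
    results = {}
    for key in sorted(baseline.keys() | this_run.keys()):
        if key not in this_run:
            results[key] = "Miss"
        elif key not in baseline:
            results[key] = "Extra"
        elif baseline[key] != this_run[key]:
            results[key] = "Diff"
    return results

baseline = {"ORD1001": "Order 1001  Total 150.00", "ORD1002": "Order 1002  Total  75.00"}
this_run = {"ORD1001": "Order 1001  Total 155.00", "ORD1003": "Order 1003  Total  20.00"}
print(classify_report_lines(baseline, this_run))
# {'ORD1001': 'Diff', 'ORD1002': 'Miss', 'ORD1003': 'Extra'}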

For report lines showing a difference or a missing line, click on the highlighted area to display an additional window which contains the Selected Run value, the Baseline Run value and the page number.

F6 – Rules View the rules that controlled how the comparison occurred for this run; they cannot be changed here.

F16 – Find Error Position to the next error on the report.

As of version 8.3.0, binary value support has been improved for the data types SMALLINT and INTEGER (or INT) by extending the range of supported values. The BIGINT data type is now also supported.
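
For reference, the sketch below lists the standard ranges of these signed binary integer types; it is a general illustration rather than a description of TestBench's internal handling.

# Standard value ranges for the binary integer types (16-, 32- and 64-bit signed).
BINARY_TYPE_RANGES = {
    "SMALLINT": (-2**15, 2**15 - 1),  #               -32768 ..                32767
    "INTEGER":  (-2**31, 2**31 - 1),  #          -2147483648 ..           2147483647
    "BIGINT":   (-2**63, 2**63 - 1),  # -9223372036854775808 ..  9223372036854775807
}

for data_type, (low, high) in BINARY_TYPE_RANGES.items():
    print(f"{data_type:8} {low} .. {high}")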