Data Case Execution

Having defined the relationships between the files on which the extraction is to be based, together with the selection logic and values, the next stage is to run the extraction to produce the copies of the files. This facility enables you to initiate the execution of a Data Case either interactively or in batch. You will reach this screen by selecting option 1 from the Work With Data Cases Display or by issuing the Extract_IT command.

Entries

Note that these entries affect only the current execution of the Data Case; they are not saved back into the Data Case description.

Run Mode Controls how the Data Case will be executed: key ‘1’ for interactive execution or ‘2’ for batch execution. The default run mode is stored against the Data Case, and overriding this value can be prevented in System Values.

Target Library Key the name of the library in which the results of the extraction process are to be placed. This must be a valid OS/400 object name; if the library does not already exist, it will be created when the Data Case is executed. This option is not shown when executing an Alter type Data Case. The default target library name is stored against the Data Case, and overriding this value can be prevented in System Values.
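
For readers who want to see the equivalent logic expressed in CL, the fragment below is a minimal sketch of the check-and-create behaviour described above. It is illustrative only and is not the product's own processing; the library name MYTGTLIB is an example.

    PGM
      CHKOBJ   OBJ(QSYS/MYTGTLIB) OBJTYPE(*LIB)
      MONMSG   MSGID(CPF9801) EXEC(DO)   /* Library does not exist yet */
        CRTLIB LIB(MYTGTLIB) TEXT('Extract_IT target library')
      ENDDO
    ENDPGM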

Add/Replace Data *REPLACE clears the members of the files in the target library, if they exist, before populating them with the results of the new selection. *ADD leaves the existing data in the target files and then adds any additional records that meet the selection rules. If the Physical File is keyed, duplicate key records will not be added. This option is not shown when executing an Alter type Data Case. The default value for this option is stored against the Data Case, and overriding this value can be prevented in System Values.
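
As an informal analogy, the MBROPT parameter of the IBM-supplied CPYF command makes the same clear-versus-append distinction and may help clarify the two settings. The file and library names below are examples only, and a plain CPYF does not apply the Data Case selection rules or the duplicate-key handling described above.

    /* *REPLACE behaviour: clear the target member, then copy the data   */
    CPYF  FROMFILE(SRCLIB/CUSTOMER) TOFILE(MYTGTLIB/CUSTOMER) +
          MBROPT(*REPLACE) CRTFILE(*YES)

    /* *ADD behaviour: keep the existing data and append the new records */
    CPYF  FROMFILE(SRCLIB/CUSTOMER) TOFILE(MYTGTLIB/CUSTOMER) +
          MBROPT(*ADD)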

Print Report Key ‘1’ to print an optional report showing details of the Data Case execution. For cases involving data extraction, this includes the number of records originally in each file and the number now in the target library, together with an estimate of the disk space occupied by the new versions of the files. For Data Cases where files have been updated, the report shows the number of records that were altered.
There is also a section at the end of the report that summarises the above information for files that are present in the Data Case more than once.

Function Keys

F7 – Edit Data Case Change Data Case details prior to execution.

Data Case Object Check

Any object, field or AFD errors are reported on the pre-check screen, together with any potential conflicts relating to the way in which the Data Case has been set up. Some of the possible errors are as follows:

• Object does not exist.
• Member does not exist.
• Cannot allocate the object.
• Authority error, for example the user does not have object management rights to duplicate the object. For remote extraction the remote user profile must also have *CHANGE authority on the original library.
• Field error exists.
• Key fields cannot be updated as this will potentially cause duplicate records.
• Sampling is not valid for Archive or Alter type Data Cases.
• Format Level IDs of the original and target files do not match.
• File with more than 2000 fields may not be processed; this is due to an iSeries field size limitation. If F9 is pressed to continue with the execution and the size limit is exceeded, the Data Case will halt and a message will be placed in the job log. This check is based on the assumption that field names are six characters in length; if they are longer than this, the maximum number of fields in a file that can be successfully extracted will be fewer (see the worked example after this list).
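
As a rough illustration of the effect of longer field names, suppose the limitation is treated as a character budget of approximately 2000 x 6 = 12,000 characters of field names (an assumption based on the six-character figure quoted above). A file whose field names average ten characters could then be expected to reach the limit at around 12,000 / 10 = 1,200 fields. The exact boundary should be confirmed by testing.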

All errors reported here are warnings only and the Data Case execution can proceed by pressing F9.

Options

2 – Change Access the Object Details display where, for example, field errors can be investigated and corrected.

5 – Errors Review details of any reported errors if extra information is available.

8 – Object Locks Use this option to help resolve any file allocation issues.

Function Keys

F5 – Retry Re-verify all objects in the Data Case.

F9 – Proceed Run the Data Case.

Data Case Execution Background
When a Data Case is executed into a target library, any objects necessary for the extraction that are not already there are automatically created before the data is extracted. These objects include the data areas and physical files as well as, optionally, all logical files that are built uniquely over each physical file. Depending on the Add/Replace Data option, existing data in the target files will either be completely replaced or added to.
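
The commands below are a minimal CL sketch of the kind of object creation that takes place. They are illustrative only; the object and library names are examples, and the product performs this work internally rather than through user-visible commands.

    /* Create an empty duplicate of a physical file in the target library */
    CRTDUPOBJ  OBJ(CUSTOMER) FROMLIB(SRCLIB) OBJTYPE(*FILE) +
               TOLIB(MYTGTLIB) DATA(*NO)

    /* Duplicate a supporting data area into the same target library      */
    CRTDUPOBJ  OBJ(CTLVALUES) FROMLIB(SRCLIB) OBJTYPE(*DTAARA) TOLIB(MYTGTLIB)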

If you have opted to create logical files, Multi-format and Join Logicals will be created in the target library even if the physical files needed to create these Logicals are not in the Data Case. The based-on physical files will be created as empty files in the target library; these will be indicated in Data Case Results and on the report. The only criterion is that all dependent Physicals and the Logical must exist in the same library; otherwise the Logical will not be created and an exception message will be written to the Data Case report. Other exception messages will be written to the report if objects or data cannot be copied.

Performance
Extract_IT uses the IBM i Query/SQL engine to execute the extractions, so performance will be the best that your iSeries can deliver. If you would like to improve the performance of a particular extraction, you should examine its Job Log to determine which access path, if any, the Query Manager decided to use. It may be appropriate to create an additional access path if all those that are available have been rejected. For further information, see the relevant IBM manuals and Redbooks.
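
If you wish to investigate this, the fragment below is a minimal sketch, assuming a recent IBM i release where the RUNSQL command is available; the job name, library, file and column names are examples only.

    /* Review the job log for the query optimizer's access-path messages  */
    DSPJOBLOG  JOB(123456/QUSER/EXTRACT01) OUTPUT(*PRINT)

    /* If no suitable access path was found, consider creating an index   */
    /* over the columns used in the selection rules.                      */
    RUNSQL     SQL('CREATE INDEX SRCLIB/CUSTIX1 ON SRCLIB/CUSTOMER (REGION)') +
               COMMIT(*NONE)
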
If the Sampling options are being used, the extraction time will be increased because the database file has to be read on a record-by-record basis rather than with blocked reads.

Run Time Errors
If any part of an Extract_IT definition is invalid during execution, it will be reported in the Job Log and ignored during processing. A typical error is a definition that specifies a field which no longer exists.