Database Procedures for Fun and Profit


DATABASE PROCEDURES FOR FUN AND PROFIT

Gary Cherlet
Original 1991 Version, Updated January 2015

Database Procedures for Fun and Profit: 3 Case Studies - Security, Extended Locking and Creating Sequential File Output

Contents

1. Database Procedures - Background
   1.1 CODASYL Implementation
   1.2 Schema Specification
   1.3 Most Common Uses
   1.4 WHY aren't Database Procedures More Heavily Used?
2. Case Studies
   2.1 A database procedure for row level, data content based security
       2.1.1 Requirement and Functionality
       2.1.2 Alternative Solutions - Pros and Cons
       2.1.3 The Implemented Procedure - Schema Specification and Results
   2.2 A facility for extended locking of database identities without DB Key deadlocking
       2.2.1 Requirement and Functionality
       2.2.2 Alternative Solutions - Pros and Cons
       2.2.3 The Implemented Procedure - Schema Specification and Results
   2.3 Creating "sequential" output from IDMS with minimal overhead
       2.3.1 Requirement and Functionality
       2.3.2 Alternative Solutions - Pros and Cons
       2.3.3 The Implemented Procedure - Schema Specification and Results
3. Design and Coding Considerations
   3.1 Which Language to Write the Database Procedure With?
   3.2 Special Considerations for zOS Assembler
   3.3 Known Bug in Database Procedure Processing
Appendix A - Data Access Rule Representation for ETSA's Security Procedure
References
   Ref: 1 Database Systems: A Practical Reference
   Ref: 2 CA-IDMS Database Administration Guide - Vol 1
   Ref: 3 CA-IDMS System Operations Guide

1. Database Procedures - Background

1.1 CODASYL Implementation

The original CODASYL database language extensions to COBOL consisted of two sublanguages: Data Definition Language (DDL) and Data Manipulation Language (DML). In addition to the sublanguages, users were empowered with the ability to code their own routines, called database procedures. Database procedures can replace or supplement specific functions of the DBMS. For example, specific entry points enable procedures to handle the following tasks:

- Validity checks on privacy keys
- Calculation of database keys for records
- Computation of derived data items
- Validation of items being STOREd or MODIFYed into the database
- Handling of error conditions
- Recording of usage of specific data unit types
- Transformation of data from subschema to schema format

Database procedures communicate with the DBMS and with the run unit that caused them to be invoked (Ref: 1). Database procedures have some of the appearance of the most simplistic forms of Object Oriented Programming (OOP), in that activity on an entity can trigger processing logic which is specific to the entity and the type of activity. Any similarity ends there, however, as database procedures cannot initiate activity against other entities. Also, the range of activities that can trigger database procedures is largely limited to DML - physical activity against the database. There is no ability to define logical activities such as "create an order", which might require the triggering of processes against ORDER, ORDER-ITEM and CUSTOMER-ACCOUNT, for example.

1.2 Schema Specification

Database procedures are specified by the Database Administrator (DBA) through the Schema data definition language (DDL). The DDL associates the database procedure with the desired data unit, such as RECORD or AREA, with a CALL statement. The procedure can be specified as happening BEFORE, AFTER or ON ERROR for the database entity. The general format of the CALL statement is:

    ADD/MODIFY database-entity entity-name
        CALL procedure-name when-to-invoke DML-verb.

A well-known example of a database procedure CALL is:

    ADD/MODIFY RECORD COMMENT-TEXT
        CALL IDMSCOMP BEFORE STORE
        CALL IDMSCOMP BEFORE MODIFY
        CALL IDMSDCOM AFTER GET.

Most DBAs will recognise this as the Schema DDL specification for implementing compressed records in IDMS databases (before PRESSPAK). The example illustrates several significant concepts:

- More than one procedure can be associated with a database entity,
- Triggers can be set at different times for the same DML verb, and
- Procedures can be associated with many DML verbs.

1.3 Most Common Uses

Implementing compressed record types is undoubtedly the most common use for database procedures. The Database Operations manual suggests that other "common" uses might be for security and data validation processing. My own experience is that, until becoming involved with the three case studies discussed in this paper, such processing was quite uncommon. The case studies which we'll be discussing are the first uses of database procedures that I have seen anywhere. Feedback on other users' experiences with database procedures would certainly be welcome.

1.4 WHY aren't Database Procedures More Heavily Used?

How can we explain such minimal use of this potentially very powerful feature of the database language? I think there are three reasons. The first is that many IDMS users believe database procedures must be written in zOS Assembler, whereas COBOL may be used - and the manual actually provides a COBOL example (Ref: 2). The second reason has to do with the perception by application developers that database procedures would only ever be used by the DBA to implement some technical "whiz-bang" trick that will make IDMS work smarter/faster/leaner in some way. Finally, I think that the interface between database procedures and procedural DML programs is somewhat restrictive for effective application use. Take, for example, the use of a database procedure for data validation. There may be many different reasons for the procedure to reject the data values in a record. How is the procedural program to determine which reason caused the record to be rejected, so that the user can be given a precise error message? The most common way of communicating to the procedural program is through ERROR-STATUS, a 4 byte field, which on a STORE verb against a record containing data validation errors should contain a 12nn status. Once all of the possible minor error codes which may be returned by IDMS are eliminated, there are only very few codes left which can be used to tell the program/user why the record was rejected. Those who are familiar with database procedures will be aware of the BIND DATABASE PROCEDURE data manipulation command, which allows for extending the communication interface between database procedures and procedural programs. This verb is not available in ADS, and its use would prove awkward for many of today's developers, who have only a passing familiarity with COBOL.
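To make the extended interface concrete, here is a minimal sketch of how it might be used from a batch COBOL program. The procedure name, communication block layout and reason codes are illustrative assumptions, not taken from the manuals or from the case studies; the verb form shown (BIND PROCEDURE FOR ... TO ...) follows my recollection of the COBOL DML reference, so verify it against your release:

       WORKING-STORAGE SECTION.
      * Hypothetical block shared with the database procedure DBPROC01.
       01  PROC-COMM-AREA.
           05  PCA-REASON-CODE     PIC 9(4).
           05  PCA-REASON-TEXT     PIC X(60).
       PROCEDURE DIVISION.
           BIND RUN-UNIT.
      * Make PROC-COMM-AREA addressable to the procedure before any
      * DML verb that triggers it.
           BIND PROCEDURE FOR DBPROC01 TO PROC-COMM-AREA.
           READY USAGE-MODE IS UPDATE.
           ...
           STORE EMPLOYEE.
           IF ERROR-STATUS NOT = '0000'
      * ERROR-STATUS carries only a single 12nn value, but the bound
      * block carries the precise validation failure for the user.
               DISPLAY 'STORE REJECTED - REASON ' PCA-REASON-CODE.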

2 Case Studies

2.1 A database procedure for row level, data content based security

2.1.1 Requirement and Functionality

The Electricity Trust of South Australia (ETSA) had undertaken the in-house development of a major Human Resources application. Effective management of ETSA's human resources is a distributed function, where each Division and Department head is responsible for many aspects of managing their own staff (Employees). Clearly the unit heads require access to information about Employees under their scope of control, but there should be some sensitivity to information about Employees outside of their scope.

Database records which were to be "secured" carried a "security key". In most instances this was the "foreign key" of the hierarchical "owner" of the secured record. In the classic Department-Employee relationship, for example, the Department Id (DEPT-ID) would be carried in the Employee record (if not already there for relational purposes). We were then able to create "access rules" which say that John Smith may look at the records of Employees in Departments 201, 202, 301 and 302. Other access rules might pertain to salary ranges, and some rules might even include reference to both DEPT-ID and Salary. The security procedure was to work the same for all procedural languages, including the ad hoc reporting tools, and was to work in all environments (IDMS-DC, IDMS-CV and IDMS local).

2.1.2 Alternative Solutions - Pros and Cons

2.1.2.1 Central Version Exit

The CV system exits are more suitable for broader security issues, such as "is the user allowed to sign on to CV?", or "can this program READY this AREA, in this access mode?" Clearly there is also the problem of handling local mode run units.

2.1.2.2 Security Checking in LR Paths

ETSA uses the Logical Record Facility (LRF) to meet nearly all of its data access requirements. LRF paths could be enhanced to do the security checking. They could either "iterate" past record occurrences which the user is not meant to see, or return a DBA defined LR status keyword.

Pros: Outside the hands of end users. Cannot be bypassed if subschemas are "LR ONLY", and if REGISTRATION, and OLQ and CULPRIT security, are turned on. Because the path could obtain "owner" records which contain the security key, data redundancy could be reduced (at the cost of I/O?).

Cons: Requires an IDD work record in each LR which passes a security key value for the current user (i.e. the access rule). This means there is a dependency on the program developer to establish this key correctly. Depending on where the access rule is stored, this might not work for batch (e.g. the ADS signon procedure makes the User's access rule available in the Application Global Record so that the programmer can populate the security key in the IDD work record). The mechanism certainly wouldn't work for OLQ and CULPRIT, because there is no effective way of populating the security key in the IDD work record. Furthermore, the security key, or access rule, would of necessity have to be extremely simplistic - such as matching DEPT-ID, or use of a mask (2** to access all department 200 employees), or use of a keyword such as 'ALL' for the General Manager of ETSA.

2.1.2.3 CALL/LINK Security Program

Rejected almost out of hand, because such a mechanism is too easily bypassed - by errors of omission or commission. This technique does allow for sophisticated access rules, also allowing for references to be made to other database records for the security key - or for the access rules themselves! Because of the reliance on policing, this security technique was not considered to be even a possibility for a final solution. It also has the problem that it doesn't work for OLQ, CULPRIT, and other, similar tools.

2.1.3 The Implemented Procedure - Schema Specification and Results

Obviously, a database procedure was seen as best meeting the original requirements. From a pure implementation point of view it has many of the desirable characteristics of the rejected alternatives:

- Sophistication of the access rules implemented by a program
- Does not rely on developers to populate security keys
- Allows reference to be made to external sources for access rules
- Works for all operational environments (CV, DC, local)
- Cannot be bypassed - even by CA reporting tools

Some of the characteristics of the implemented procedure are:

- System level access rules identify the secured entities
- User specific rules identify the values, or ranges of values, which are either allowed or disallowed for the user to access
- A User's rules may reference rules for a "Group"
- Access rules may identify multiple data items on which to base the evaluation of the user's ability to access the record
- A secured entity may have all Users' individual access rules bypassed for particular programs
- A User's individual rules may be bypassed for particular programs
- Works in all environments
- Prevents Users from MODIFYing or STOREing data values which they cannot access (if specified in the Schema)

2.1.3.1 Schema Specification

    ADD AREA TEST-AREA
        PAGE RANGE IS nnnnn THRU nnnnnn
        CALL DBSCPX01 BEFORE FINISH
        CALL DBSCPX01 BEFORE COMMIT
        CALL DBSCPX01 BEFORE ROLLBACK.
    ADD RECORD TEST-RECORD
        LOCATION MODE ....
        CALL DBSCPX01 BEFORE GET
        CALL DBSCPX01 BEFORE STORE
        CALL DBSCPX01 BEFORE MODIFY.

The AREA level specification causes the local storage acquired by the database procedure to be released at end of transaction. Without it, if there were multiple run units in the same IDMS-DC task, the acquired storage would not be released and the task RLE chain would become longer - this could cause storage problems and/or RLE shortages in the IDMS-DC system.

2.1.3.2 Results

The procedure is performing well in the production application. When iteration is used in LRF paths to bypass records which users are not allowed to access, the OLQ and CULPRIT tools continue to work unhindered for the users. Only LR path coders have to be instructed in the special status values which may be returned by the database procedure - since native DML is not used. Should native DML ever be used by a developer, then the developer must allow for the ERROR-STATUS values from the database procedure (a sketch follows at the end of this section). ETSA has developed an online application which allows the end users to develop and maintain their own access rules. Appendix A describes the access rules, and some other aspects of the technical implementation of the database procedure.
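For illustration, native DML retrieval code would have to treat the procedure's rejection status as an expected outcome and simply skip past suppressed occurrences. The condition name below is hypothetical - the actual status values assigned at ETSA are not reproduced in this paper:

       GET-NEXT-EMPLOYEE.
           OBTAIN NEXT EMPLOYEE WITHIN DEPT-EMPLOYEE.
           IF DB-END-OF-SET
               GO TO EMPLOYEE-LIST-DONE.
      * SECURITY-DENIED is assumed to be an 88-level defined on
      * ERROR-STATUS for whatever value the DBA assigned to the
      * security procedure's "record suppressed" condition.
           IF SECURITY-DENIED
               GO TO GET-NEXT-EMPLOYEE.
           PERFORM IDMS-STATUS.
      *    ... process the EMPLOYEE the user is allowed to see ...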

2.2 A facility for extended locking of database identities without DB Key deadlocking

2.2.1 Requirement and Functionality

The Telecom Australia (now Telstra) TRAC project had some online functions which proved to be too big to actually execute online (the dialogs were too big and performed too many I/Os). These functions were changed to run as ADS/Batch dialogs. The job streams to initiate the dialogs were submitted to the internal reader from the online system, and came to be known as "immediate batch" jobs, or quasi-online jobs. The technical problem that had to be overcome was to manage concurrent access to the database records involved. Because the database design uses foreign keys quite extensively, it was necessary to "lock up" database identities, rather than just specific database record occurrences. To give an example, at the risk of over simplifying the problem, consider a database consisting of "Service Orders" and "Equipment". It is possible that some "Service Orders" may have the same primary key as the partial primary key of some pieces of "Equipment". Because of the nature of the batch and online processing it is necessary to "lock up" all of the "Equipment" entity occurrences with the same partial key as the primary key of the owner "Service Order".

2.2.2 Alternative Solutions - Pros and Cons

2.2.2.1 KEEP LONGTERM

KEEP LONGTERM is inappropriate due to KEEP LONGTERM RELEASE ALL being used elsewhere in the application. KEEP LONGTERM will only LOCK one entity type at a time, and only if the entity occurrence has been made current (e.g. of run unit). We can see that, because of the need to actually make a record current in order to lock it, some processes may have to access a lot of additional records. Also, if all occurrences with the same partial primary key are not processed, then much of this accessing and locking was for naught. A sketch of the currency problem follows.
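The fragment below shows why the currency requirement hurts: every occurrence must be OBTAINed (a potential physical I/O) before it can be kept. The lock-ids, record and set names are illustrative, and the verb syntax is paraphrased from my recollection of the DML reference, so verify it against your release:

      * Each Equipment occurrence must be made current before it can
      * be locked - there is no "lock all Equipment with this key".
           OBTAIN CALC EQUIPMENT.
           KEEP LONGTERM EQUIP-LOCK-1 EXCLUSIVE CURRENT EQUIPMENT.
           OBTAIN NEXT EQUIPMENT WITHIN ORDER-EQUIPMENT.
           KEEP LONGTERM EQUIP-LOCK-2 EXCLUSIVE CURRENT EQUIPMENT.
      * ... and the application's global release, issued elsewhere,
      * frees these locks too - defeating the extended-lock intent:
           KEEP LONGTERM ALL RELEASE.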

2.2.2.2 "Locking" Records in the Actual Database

The database design was modified to include the following data structure, which allowed the application to lock an "identifier" rather than the actual records themselves. An "identifier" can be considered to be the primary key of one or more entity types, and/or a partial key of other entity types.

    LCKUSER
       |
    LCKDATA

The LCKDATA/LCKUSER mechanism allows applications to prevent multiple users from concurrently accessing database entities with the same data identifier. The online application transactions would actually display a message to a user who was trying to access a record already held by another user, to the effect of: ENTITY HELD BY USER yyyy. This mechanism was actually implemented because KEEP LONGTERM was unsuitable, but it caused severe performance problems due to excessive DB key waits and deadlocks.

2.2.3 The Implemented Procedure - Schema Specification and Results

It was decided that the same mechanism could be implemented in a "virtual database", thus eliminating real I/O and database locking. The virtual record contains a data entity identifier (usually a real database record CALC key - due to redundancy, often a CALC key for multiple record types), the USER ID of the user currently holding the lock, and the USER ID of the most recent user to request the lock (but whose access was denied). The use of a database procedure was decided on because it had a couple of significant advantages over the use of a LINKed or CALLed routine, because a database procedure:

- Could run in SYSTEM mode, reducing the impact of storage protect
- Would handle all Central Version applications, including ADS/O, DC COBOL, ADS/Batch and batch COBOL
- Would encapsulate the requirement in a single source and run-time entity, for ease of source management and implementation

It was anticipated that the procedure would be invoked BEFORE STORE (equivalent to a LOCK), and BEFORE ERASE (equivalent to an UNLOCK). The user application program would be responsible for initialising the data areas to the required values prior to issuing the STORE/ERASE DML verbs. In the end, OBTAIN functionality also had to be added to the procedure to allow for locating all of the locks held by a particular user, so that they could be released all at once (for example, whenever the user returned to the main menu). When the procedure determines that another user has locked a requested data identifier, it tests to see if the other user is still signed on to IDMS-DC before returning the access denied status. If the user is no longer signed on, then the lock will be reassigned to the requesting user. A sketch of the application side appears below.
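To show how the "virtual database" is driven, here is a minimal sketch of the application side in COBOL DML. The LCKDATA field names and the status handling are hypothetical (the real record layout is not reproduced in this paper); only the STORE/ERASE pattern is from the case study:

      * Request a lock on a data identifier.
           MOVE SERVICE-ORDER-KEY TO LCK-IDENTIFIER.
           MOVE SIGNON-USER-ID    TO LCK-REQUESTING-USER.
           STORE LCKDATA.
      * DBLOCKAS intercepts the STORE: 0000 means the lock is now
      * held; a DBA-assigned nonzero status means another user holds
      * it, and LCK-HOLDING-USER identifies who.
           IF ERROR-STATUS NOT = '0000'
               DISPLAY 'ENTITY HELD BY USER ' LCK-HOLDING-USER.
           ...
      * Release the lock - the ERASE is likewise intercepted, so no
      * real database I/O or page locks occur.
           MOVE SERVICE-ORDER-KEY TO LCK-IDENTIFIER.
           ERASE LCKDATA.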

2.2.3.1 Schema Specification

    ADD AREA TEST-AREA
        PAGE RANGE IS nnnnn THRU nnnnn
        CALL DBLOCKAS BEFORE FINISH
        CALL DBLOCKAS BEFORE COMMIT
        CALL DBLOCKAS BEFORE ROLLBACK.
    ADD RECORD TEST-RECORD
        LOCATION MODE .....
        CALL DBLOCKAS BEFORE GET
        CALL DBLOCKAS BEFORE STORE
        CALL DBLOCKAS BEFORE ERASE.

2.2.3.2 Results

It was anticipated that the implementation of New South Wales into MAC would cause the system to perform well outside of the performance agreement (NSW represents about 78% of the load, which is 3 times what was running at the time). The database procedure was implemented just before NSW was added in to the system - with the result that response times remained steady, if they did not actually improve. DBKEY deadlocks were a significant problem before NSW's implementation, and this was a serious concern. Dialogs were ABORTing at the rate of 250 per week, with most of the failures being caused by deadlocking (nn29 status). After the procedure was implemented, and after NSW was installed, dialog ABORTs were reduced to 50-60 per week - with only 25% being caused by deadlocking (nn29).

2.3 Creating "sequential" output from IDMS with minimal overhead

2.3.1 Requirement and Functionality

In the Justice environment there was a need to create a log of all enquiries into the system - the nature of the requirement being to allow authorized users to "watch the watchers". The logged data includes not only who was performing the enquiry, but who was being enquired about. Detailed identifying information about the enquiring officer and the subject of the search are logged, along with the terminal identifier, date and time, and the function used to perform the enquiry. Based on the background information for this requirement, there appeared to be four major areas of concern with respect to logging of enquiries:

- Minimise overheads: particularly with respect to the number of run units and physical I/Os. Enquiries for the Warrants system account for 50% of the production machine utilisation. An increase in I/Os for journaling and logging to the database had the potential for increasing capacity requirements by as much as 20% (which had not been budgeted for).
- The solution should be generic: that is, the solution should be applicable to other, similar requirements, such as for the "Criminal Incident" system. Ideally, any code that's additional beyond the ADS code should be able to handle the logging requirement for any application (even outside of Police requirements).
- Minimise down time: specifically in recovery situations, the need to recover logging data should not impact recovery time.
- System availability: the capability should exist to continue logging enquiries while all database areas are in retrieval for backup.

2.3.2 Alternative Solutions - Pros and Cons

One of the attractive features of this assignment was the investigation into the alternative solutions, and the pros and cons for each. Sometimes, in the heat of battle, we forget the implications of the solutions to particular problems. Just as I was rudely awakened by the implications of some of the following alternatives, I hope that some readers may find some of the "cons" in this section to be of interest.

2.3.2.1 STORE the Records in a Database

This was the original solution, which had been rejected due to the overall increase in resource requirements. That's not to say that the particular implementation did not have some merit. Specifically, the logging was isolated into an INCLUDE module, which had two immediate benefits: 1) there was no possibility of initiating a new run unit, which might have occurred with a LINKed dialog, and 2) the code in the INCLUDE module simply needed to be "uncommented", and the affected dialogs regenerated, in order to begin logging. The existence of the INCLUDE module is an advantage for all of the alternatives evaluated, since any significant changes required to implement the solution would be isolated to one process module. The main benefit of this alternative is that the work is done completely in the ADS process language, so there would be no need for external support (from CA or from a consultant). Working against this alternative was the fact that the existing code needed some work to optimise the database STORE in one of two ways:

1. Spread the logging across the entire area to prevent page level contention between concurrent enquiries, and
2. Develop a queuing technique which will prevent deadlocks, and which might also continually update the "current page" that is being logged to (to prevent IDMS always referencing space management pages when a DBKEY of -1 is always offered by the program).

In any event, this alternative had been rejected by JIS, owing to the overheads of STOREing a record for every enquiry:

- Journaling I/O where none existed for a retrieval only run unit,
- Physical database I/O (a minimum of 2 additional to each enquiry),
- Potential for DB key waits and deadlocking (STORE causes a page level lock; attempting to log sequentially causes contention for the same page on every enquiry). Also, as new records put space utilisation over 70% full in a database page, the space management page (SMP) also gets updated - to reflect the current actual status of the page being updated.

Beyond the overhead incurred by this approach, there is the problem of supporting a 24 hour operation. It is virtually impossible to keep an area in update mode for 24 hours a day, 365 days of the year. This means either that some enquiries would not be logged, if the logic supported "turning off" logging, or that the enquiry system has to be shut down during maintenance of the logging database area.

2.3.2.2 Logging to the IDMS-CV/DC/UCF Log

The same as the first technique, but with the STORE of the log record being changed to a WRITE LOG process command. This approach had no special benefits to recommend it. Against it was the fact that the IDMS-CV/DC/UCF log is a well-known bottleneck, as its physical I/O is single threaded. Rather than increasing the amount of information that is written to the log, most installations try to minimise the activity against this file.

2.3.2.3 Logging to the IDMS-DB Journal

Instead of a STORE of the log record in the INCLUDE module, we would LINK to a COBOL program which would contain a WRITE JOURNAL DML statement (using the NOWAIT option). Beyond the existence of the INCLUDE module, this approach has additional advantages due to the use of the journal:

- 24 hour availability, as the journaling mechanism is always active (certainly not impacted by database maintenance or by the need to put areas in retrieval for backup purposes),
- Virtually unlimited amount of data: IDMS automatically switches between journal files, and archives the latest full file to tape,
- The journaling mechanism is a familiar tool for the DBA group at JIS, who would already be using the journals for more than IDMS recovery purposes (archived journals are used for performance reporting, for example).

Unfortunately, premature flushing of the IDMS journal buffer occurs in spite of the NOWAIT option, which stops IDMS from flushing the current journal buffer for update dialogs. The IDMS system generation specifies NOJOURNAL RETRIEVAL, which means that normally "retrieval only" dialogs do not incur any journal I/O. Placing a WRITE JOURNAL command into the dialog will force the system to flush the journal buffer on the FINISH (implicit in the DISPLAY command). The early flushing of the journal buffer under these circumstances has been confirmed with CA Technical Support. The extra I/O on the journal file is undesirable, and the fact that the journal buffer page will only have a very low utilisation both recommend against this option (low utilisation wastes an I/O, and on the journal it means that the frequency of archiving may increase - a very I/O intensive process, although not normally affecting the CV region itself). Finally, WRITE JOURNAL is not available in ADS - the ADS dialog must LINK to a COBOL program which would issue the WRITE JOURNAL (a sketch follows). There is the potential that a developer will specify the LINK improperly, thus inadvertently causing an additional run unit (or potentially two additional run units). This should not be a problem if logging is only done through copied (or INCLUDEd) code.
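For concreteness, the LINKed COBOL subroutine in this rejected alternative would have looked roughly like the sketch below. The record name, layout and length are hypothetical assumptions; the WRITE JOURNAL verb and its NOWAIT option are as described above:

       LINKAGE SECTION.
      * Enquiry log record passed from the ADS dialog via LINK.
       01  LOG-RECORD              PIC X(200).
       PROCEDURE DIVISION USING LOG-RECORD.
      * NOWAIT returns control without waiting for the journal I/O -
      * although, as noted above, the buffer is still flushed at
      * FINISH even for an otherwise retrieval-only run unit.
           WRITE JOURNAL FROM LOG-RECORD LENGTH 200 NOWAIT.
           PERFORM IDMS-STATUS.
           GOBACK.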

2.3.2.4 WRITE PRINTER to a UCF Printer Destination, or through a SYSOUT Line to a Sequential File

The proposed INCLUDE module would use a WRITE PRINTER process command. The data would be extracted from the IDMS report print queues by running the UCF batch print utility (the UCF printers are in effect "dummy" printers to which reports cannot be printed, so reports must be held until a special utility is run). For either technique the main advantage is that there are fewer problems in making the queue area available for 24 hour update than there are in making an application area available on a 24 hour basis. On the down side, IDMS-DC/UCF report lines are first written to the IDMS queue area, where they are stored until the current task has completed. On task completion IDMS tries to print the report to the specified destination, at which time the report lines are deleted from the queue area. The basic reporting mechanism therefore has two lots of additional I/O, and also has additional journaling for the STOREs and ERASEs of the report lines in the IDMS queue records (although queue journaling is usually restricted to BEFORE images only). Also, storing the expected volume of data in the queue area would result in this area being made extremely large, to cater for holding the data until the batch UCF print utility was run (to effect an extract of the data to an external sequential file). In the SYSOUT case, the IDMS-DC line containing the printer would have to be CLOSEd from time to time to allow for archiving, with IEBGENER for example. Space in the queue area is less of a problem, since the reports would actually be logically printed as soon as each task completes. Space allocation for the sequential file would be critical, as running out of space would cause an operating system x37 abnormal termination (lack of space), which would cause the entire IDMS-CV/DC/UCF system to "fall over". If the line is closed so that the report file can be archived, then the reports will build up in the queue area - as with the UCF printers.

2.3.2.5 Writing the Logging Data Record to SMF

The "trigger" mechanisms for initiating the SMF write SVC that were investigated are:

- LINK PROGRAM to an assembler subroutine, which would issue a supervisor call (SVC) to the CA installed SVC, passing a function code which asks the SVC to issue the SMF write command.
- Code an assembler program as an ADS Built-In Function (BIF), which would extend the ADS process language. The resultant ADS process command might look like:

      MOVE SMFWRITE(log-record-v) TO smf-status-v.

- The system task termination exit (CV exit 05) could be used to check whether the following conditions are met:
  o the terminating task is an ADS application,
  o the dialog is one of a list of dialogs in an IDD table,
  o the RBB (primary or secondary) contains an occurrence of the enquiry logging record.
  If all of these conditions are met, then the enquiry logging record would be automatically output to SMF.

Assigning the application data records a unique SMF record type would simplify the isolation and extraction of the IDMS application data from the SMF files. Using the performance monitor SMF header as a model allows all application data in SMF to use the same SMF record type.

By including two identifying data items in a "prefix" portion of the schema records, the SMF techniques become generic solutions, suitable for use by any application regardless of the data content of the log record used by the specific application. The data items would be: the length of the log record, and the log record type (i.e. sub-type within the IDMS application data SMF type). The advantages owing to the use of SMF are:

- 24 hour availability, as the SMF collection mechanism is always active (certainly not impacted by database maintenance or by the need to put areas in retrieval for backup purposes),
- SMF system buffering means that no "real I/O" occurs immediately, thereby reducing WAIT time for the write activity,
- Virtually unlimited amount of data: SMF automatically switches between online files, and archives the latest full file to tape,
- The SMF data collection mechanism is a familiar tool for the system programmers at JIS, who are already collecting IDMS transaction statistics with this facility,
- The "SMF write" supervisor call is available as a special function within the CA/IDMS SVC (it's a default when installing the SVC for Central Version and UCF operations) - this means that there is no need to run IDMS as an "authorised" program in order to use SMF.

Other advantages include flexibility: the BIF processor, like the LINKed routine, is loaded dynamically, so a simple change to the assembler program would allow JIS to turn logging on/off dynamically; similarly with the CV exit, logging can easily be turned on and off at a system level by activating or deactivating the exit, or by updating the IDD table referred to above to include/exclude additional dialogs. Some of the problems with these solutions are: if the wrong form of the LINK command is coded, an application developer may inadvertently cause additional run units to be created; the built-in function solution is language specific - the BIF can only be invoked from ADS dialogs; the early steps of the exit 05 processing duplicate work already being done by the performance monitor - also, the processing would be extra overhead for every task termination in the system, even for tasks which do not require enquiry logging.

2.3.3 The Implemented Procedure - Schema Specification and Results

The database procedure is an assembly language routine, activated by the STORE verb in ADS process code. The database procedure cancels the actual execution of the STORE, and triggers the SMF write in its place. The run unit remains a retrieval only run unit, as no actual updating of the database has occurred (thus minimising journal overheads). The considerations for the SMF record are the same as described in the earlier techniques. That the run unit remains a "retrieval only" run unit (even though a STORE command has been issued by the program/dialog), and would therefore not cause journaling activity, has been confirmed with CA Technical Support. The first of the three main advantages of this technique is that this generic solution is usable by any application - and that the developer cannot inadvertently create additional, unwanted run units. A second advantage is that the database procedure can be invoked regardless of the procedural programming language being used, because it is implemented at the schema level - and is invoked independently of the program code that issues the STORE verb.

A third advantage is that the facility can be turned on and off by a simple schema change, and by regenerating the subschemas which contain the log record. This allows JIS to switch from logging to SMF, to logging to the database, at a moment's notice. A simple change to the database procedure itself would allow JIS to turn logging on/off at will - without even updating the subschemas. The main advantage of this technique is flexibility. The advantages of this technique over the other eight techniques which were investigated are:

- Least impact on 24 hour availability,
- Minimises physical I/O - including wait time,
- No restrictions on the amount of data logged,
- SMF is a familiar data collection mechanism for JIS staff,
- Has the greatest flexibility - with opportunities to change the specific technique used without changing the application.

2.3.3.1 Schema Specification

    ADD AREA TEST-AREA
        PAGE RANGE IS nnnnn THRU nnnnnn
        CALL DBUTOO6P BEFORE FINISH
        CALL DBUTOO6P BEFORE COMMIT
        CALL DBUTOO6P BEFORE ROLLBACK.
    ADD RECORD TEST-RECORD
        LOCATION MODE ....
        CALL DBUTOO6P BEFORE STORE.

2.3.3.2 Results

Bottom line - all of the anticipated advantages (or benefits) were achieved. This technique was so successful that it was subsequently implemented at a number of government and other user installations where there was a similar requirement to create a "log" file of one sort or another.
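From the application's point of view the whole technique is just a STORE. A minimal sketch of the calling side in COBOL DML follows; the record name, field names and prefix values are hypothetical illustrations of the generic "prefix" described above, not the actual JIS record layout:

      * Build the log record: generic prefix first, then the
      * application data.
           MOVE 200                TO ELOG-PREFIX-LENGTH.
           MOVE 1                  TO ELOG-PREFIX-TYPE.
           MOVE SIGNON-USER-ID     TO ELOG-OFFICER-ID.
           MOVE SEARCH-SUBJECT-KEY TO ELOG-SUBJECT-ID.
      * The STORE never reaches the database: DBUTOO6P cancels it and
      * writes the record to SMF instead, so the run unit remains
      * retrieval only and incurs no journal I/O.
           STORE ENQUIRY-LOG.
           PERFORM IDMS-STATUS.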

3. Design and Coding Considerations

The most important step in preparing a database procedure is to confirm in your own mind that what you're about to do is the "right thing". Having examined the technical and application requirements, and having evaluated several alternatives - is there a clear argument for using a database procedure to solve the problem at hand? The second step is to write a specification for the procedure. This detailed description of the functionality must pay special attention to the programming interface: which verbs will "trigger" the procedure, how success or failure is communicated to the program, and what the status of the physical database will be on zero and non-zero status conditions (has the record been physically STOREd or MODIFYed?, etc.).

3.1 Which Language to Write the Database Procedure With?

Once the details of what the procedure will do on behalf of the application have been confirmed, the technical design can begin. A target programming language must be chosen - depending on the requirements it may be possible to use COBOL, or there may be no recourse but to use zOS Assembler.

Assembler was used for all of the database procedures described in this paper, some of the reasons being:

- A need to identify the Local mode versus Central Version environment,
- Run time efficiency (especially for Telecom and JIS),
- A requirement to access IDMS and/or Operating System control blocks (particularly for the security procedure, to identify the user).

Arguments in favour of COBOL would include the availability of an example in the manual, maintainability, and the general availability of competent COBOL programmers.

3.2 Special Considerations for zOS Assembler

When using zOS Assembler, database procedures should be coded using IDMS-DC linkage conventions. These are documented in the System Operations manual (Ref: 3), in the section on "User Exits". In summary the rules are:

- Specify #MOPT ENV=SYS
- Use #START and #RTN for entry and exit
- Use RGSV=(2-8) when coding macros or IDMS system services
- Don't use registers 9 through 13 (R9 - R13), which are reserved as follows:

      R9  = TCE
      R10 = GSA
      R11 = Program variable storage
      R12 = Program base
      R13 = Stack save area processing

  If you must use these registers for other reasons, restore them to these conventions before issuing any IDMS service requests.
- Don't attempt any database processing
- #GETSTG, #FREESTG, #LOAD and #DELETE will work in local mode - don't use any other (DC) system services when in local mode!

3.3 Known Bug in Database Procedure Processing

There is a known problem with BEFORE GET database procedures. It appears that when executing a GET DML verb, the DBMS passes the wrong schema record address in the parameter list set up for the database procedure (i.e. the wrong record occurrence is passed). There is a bypass for this problem, which is to have the programmers use OBTAIN CURRENT record-name instead of GET, as shown below.
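The bypass is a one-line substitution in the program (EMPLOYEE is an illustrative record name):

      * Instead of:
           GET EMPLOYEE.
      * code:
           OBTAIN CURRENT EMPLOYEE.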

Appendix A

Data Access Rule Representation for ETSA's Security Procedure

An individual's authority to access occurrences of a given ENTITY type is dependent upon data values within each entity occurrence, and a set of value based rules for the individual, for that ENTITY type. No consideration is given to the function that the user is currently executing, except that when accessing data through a particular (named) program, some users will not have their usual access rules evaluated. These programs are called "unrestricted programs", and they apply only to specific users. The logical data model looks like this:

    [Logical data model diagram: USER, with Unrestricted Program,
     Restricted Record, and Authorised or Unauthorised Data Values]

A special "GROUP" type of USER can be defined, and Users within a GROUP may either have their own, unique set of data access rules - or they may simply inherit the GROUP's rules. This can provide considerable time savings in terms of specifying unique rules for all USERs. The data access rules are stored in the form of load modules, for efficiency and for ease of access and processing at run time. The load modules can be hand coded using zOS Assembler language constants, or can be "generated" by a program based on rules stored in a security database. The latter approach has been used at ETSA, where a complete security administration application has been built. The separation of "source" and "run time" rules has several benefits. The first is that for all initial testing of the security procedure, the rules can be hand coded. Thus, creating a system to establish and maintain the access rules is not a prerequisite for testing the run time system. Another benefit is that the access rule source can be updated at any time, without impacting the USER. Reports can then be run to verify that the rules have been correctly specified - then the security administrator can choose when to implement the new rules, by running the rule generator.

References

Ref: 1 Database Systems: A Practical Reference. Ian Palmer. QED Information Sciences.
Ref: 2 CA-IDMS Database Administration Guide - Vol 1. CA.
Ref: 3 CA-IDMS System Operations Guide. CA.
