Previously, using programs could request the services of a shared hardware facility only through the OS, which provided a common interface for all requests from the using programs. The OS interface was required to protect the integrity of data in the computer system by isolating the independent uses from each other. That OS interface is eliminated by this invention, which instead allows the OS to set up a tailored "special environment" for each using program wanting to use a shared hardware facility. The "special environment" is set up by the OS in response to a using program requesting use of a shared facility. The OS sets up each "special environment" with specific restrictions on the using program's direct interface to the services, and passes these restrictions to the hardware facility, which stores them as a check on each individual request received from a using program. It is these specific restrictions that enable the using program to have direct use of the requested facility and still keep it isolated from the other independent users of it, maintaining system data integrity despite the shared use of the facility. These restrictions are not changeable by the using program. Under the umbrella of this "special environment", the using program can issue any number of direct operational requests to the facility without making the requests through the OS as an intermediary, which has the advantage of avoiding the OS service call and the resulting interruptions to the using program for such OS services. Accordingly, each "special environment" is tailored differently for the different using programs. The "special environment" continues for each using program until ended by the using program, or by the occurrence of special conditions. More than one special environment may exist concurrently for shared concurrent use of a hardware facility.
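The mechanism described above can be sketched in software terms. This is a minimal, hypothetical model, not the patent's actual interface: the class names, the address-range form of the restrictions, and the string result are all illustrative assumptions.

```python
class SpecialEnvironment:
    """Restrictions the OS installs in the facility for one using program."""
    def __init__(self, env_id, allowed_ranges):
        self.env_id = env_id
        # Example restriction: address ranges this program may touch.
        # Fixed by the OS; not changeable by the using program.
        self.allowed_ranges = allowed_ranges

class SharedFacility:
    def __init__(self):
        self._envs = {}  # env_id -> SpecialEnvironment, stored in the facility

    def install_environment(self, env):
        """Called by the OS once, when a using program requests access."""
        self._envs[env.env_id] = env

    def request(self, env_id, addr, length):
        """Direct operational request from a using program (no OS call)."""
        env = self._envs.get(env_id)
        if env is None:
            raise PermissionError("no special environment installed")
        # Check each individual request against the stored restrictions.
        if not any(lo <= addr and addr + length <= hi
                   for lo, hi in env.allowed_ranges):
            raise PermissionError("request outside permitted ranges")
        return f"operation on {length} bytes at {addr} accepted"

facility = SharedFacility()
# The OS sets up tailored environments for two independent using programs.
facility.install_environment(SpecialEnvironment("progA", [(0, 4096)]))
facility.install_environment(SpecialEnvironment("progB", [(8192, 12288)]))

print(facility.request("progA", 0, 100))  # within progA's range: accepted
```

Once its environment is installed, "progA" issues requests directly to the facility; a request outside its permitted ranges is rejected by the facility itself, preserving isolation without per-request OS involvement.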
In the past, asynchronous facilities that operate in conjunction with a central processing unit (CPU) have required an application program, or programming subsystem, to create the entire command to be processed prior to invocation of the facility. For example, the Asynchronous Co-processor Data-Mover, described in U.S. Pat. No. 5,442,802 referenced herein, requires that the data to be moved and the addresses involved in the move be known, and be structured for use, prior to the operating system call for each request for facility services by an application program. In fact, the complete set of address and data specifications is passed to an operating system interface service as part of the call parameters for each request. These parameters are then checked by the operating system interface service, which constructs a command block, invokes the operation, and eventually accepts an interruption signal from the facility when the operation completes.
Because of the large number of programming steps required by the prior OS interface service (e.g. for the checking and construction of the command block, in issuing the I/O instruction for each request, in later taking the interruption for each request, in processing the completion of each request, and finally in providing notification back to the using application program or subsystem for each request), it was not economically viable for the prior Asynchronous Co-processor Data-Mover Facility to handle requests to move small amounts of data. The economic break-even point for this prior asynchronous facility is model-dependent and will change over time; as an example, on a currently available high-end IBM System/390 Central Electronic Complex (CEC), an application request must move at least 64 Kilobytes of data per request to obtain an economic advantage. Below this data size, it was more efficient to move the data via CPU synchronous means, such as the S/390 MovePage instruction described in U.S. Pat. No. 5,237,668.
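The break-even behavior described above can be illustrated with simple arithmetic. The cycle counts below are invented placeholders for illustration only; the real figures are model-dependent, as the text notes.

```python
# Illustrative cost model: a fixed per-request overhead for the OS-mediated
# asynchronous path versus a purely per-byte cost for the synchronous path.
OS_CALL_OVERHEAD = 50_000   # fixed cost per request via the OS interface (assumed)
ASYNC_PER_BYTE   = 0.5      # asynchronous facility cost per byte moved (assumed)
SYNC_PER_BYTE    = 1.5      # synchronous (MovePage-style) cost per byte (assumed)

def async_cost(nbytes):
    return OS_CALL_OVERHEAD + ASYNC_PER_BYTE * nbytes

def sync_cost(nbytes):
    return SYNC_PER_BYTE * nbytes

# Break-even: overhead + a*n == s*n  ->  n = overhead / (s - a)
break_even = OS_CALL_OVERHEAD / (SYNC_PER_BYTE - ASYNC_PER_BYTE)
print(f"break-even at {break_even:.0f} bytes")  # 50000 bytes with these numbers
```

Lowering the fixed per-request overhead, which is what the invention does by removing the OS from each individual request, moves the break-even point proportionally lower, so much smaller moves become economical.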
The economic impact of the subject invention is to greatly lower this economic break-even point and allow much smaller amounts of data to be economically moved in a single operation without changing the asynchronous hardware facility.
In that prior shared asynchronous hardware facility, a major aspect of the overhead of each operation was the use of the operating system (OS) as an intermediary in each using program request for that facility. This was necessary where the use of one or more asynchronous hardware facilities was to be shared among various independent using programs. The OS maintained system integrity by checking the validity of the parameters supplied to the facility in each operational request for each using program (e.g. application program), and buffered the shared facilities from the programs by maintaining necessary queues, etc. An example of one type of asynchronous shared hardware facility is described in U.S. Pat. No. 5,377,337 incorporated herein. Thus, each use of the facility required a discrete package of specific operations to be performed. The OS interface must validity-check the requested operation addresses, package them into the proper format for the facility invocation, find a free facility for use or queue the request for later use, and issue the request. Later, the facility notifies the OS of request completion, which in turn is communicated by the OS to the requesting using program.
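The prior-art per-request flow just described (validity-check, package into a command block, find a free facility or queue, issue, and later field the completion) can be sketched as follows. The class name, the free/busy count, and the log format are illustrative assumptions, not the patented design.

```python
from collections import deque

class PriorArtOS:
    """Sketch of the OS acting as intermediary on every facility request."""
    def __init__(self, n_facilities=1):
        self.free = n_facilities
        self.pending = deque()   # requests queued while all facilities are busy
        self.log = []

    def request(self, program, addrs):
        # 1. Validity-check the operation addresses supplied by the program.
        if any(a < 0 for a in addrs):
            raise ValueError("invalid address")
        # 2. Package them into the facility's command-block format.
        command = {"program": program, "addrs": tuple(addrs)}
        # 3. Find a free facility, or queue the request for later use.
        if self.free:
            self.free -= 1
            self.log.append(("issued", program))
        else:
            self.pending.append(command)
            self.log.append(("queued", program))

    def completion_interrupt(self):
        # 4. Facility notifies the OS of completion; the OS notifies the
        #    requesting program and issues any queued request.
        if self.pending:
            nxt = self.pending.popleft()
            self.log.append(("issued", nxt["program"]))
        else:
            self.free += 1

os_service = PriorArtOS()
os_service.request("A", [0, 100])    # issued immediately
os_service.request("B", [200, 300])  # facility busy: queued
os_service.completion_interrupt()    # A done; B now issued
```

Every one of these steps is executed on every request in the prior art; the invention's "special environment" eliminates steps 1 through 3 from the per-request path by performing the checking once, at setup time.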
However, there is another class of applications which, either by design or by data-generation characteristics, do not have the ability to block data in large chunks without incurring significant processing penalties. These applications (jobs) are composed of multiple program segments (job steps) which execute sequentially, each of which processes a very large file of data. The data to be processed are input to the first job step, which completely processes the data before passing the resulting data to the second job step for additional processing; the resulting data may then be processed by a third job step, and so forth, until the final results are produced by the last job step. Each job step runs to completion before the subsequent job step is initiated. Historically, complex business problems were solved by this style of batch programming. The data is retrieved from a file on an I/O device and the modified results are returned to another I/O device, from which the next job step in the job accesses the data. Thus, between each pair of job steps there exists an intermediate file on some I/O device.
Because of the multiple processors and large electronic storage of modern computers, it is possible to restructure this type of large business problem so that the program segments (job steps) execute concurrently instead of running consecutively. The data passing between the program segments can be moved from storage to storage in blocks of records, such as were used in the prior I/O operations. The data need not be written to any temporary I/O storage device, but can be transferred directly in electronic storage. For example, to obtain parallel execution of the separate programs of such a complex programmed business process, the data should pass from the output buffer of one job step to the input buffer of the next job step, with possible intermediate in-storage buffering by a system service to obtain proper logical synchronization of execution of the two programs.
The transfer out of the output buffer of one program into the input buffer of the other program occurs at the same logical point in each program's execution as I/O would have occurred in each, in the original mode of operation. The OS interface service can perform transparently, managing storage-to-storage data movement as a replacement for I/O. The application programs need not change to obtain this improved operation efficiency, which reduces the overall time of execution of the entire process dramatically in this example, by avoiding the time of input-output operations.
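The storage-to-storage hand-off between concurrently executing job steps can be sketched as follows. Python threads and bounded queues stand in for the job steps and the in-storage buffers; these mechanics, the sentinel, and the sample transforms are illustrative assumptions, not the patented service.

```python
import queue
import threading

END = object()  # sentinel marking the end of the record stream

def job_step(transform, inbuf, outbuf):
    """One program segment: consume records, process, pass downstream."""
    while True:
        record = inbuf.get()  # blocks here: the logical synchronization point
        if record is END:
            outbuf.put(END)
            return
        outbuf.put(transform(record))

source = queue.Queue()
stage1_out = queue.Queue(maxsize=4)  # intermediate in-storage buffer
stage2_out = queue.Queue(maxsize=4)

# Two job steps run concurrently instead of consecutively.
t1 = threading.Thread(target=job_step, args=(lambda r: r * 2, source, stage1_out))
t2 = threading.Thread(target=job_step, args=(lambda r: r + 1, stage1_out, stage2_out))
t1.start()
t2.start()

for r in [1, 2, 3]:
    source.put(r)
source.put(END)

results = []
while (r := stage2_out.get()) is not END:
    results.append(r)
t1.join()
t2.join()
print(results)  # [3, 5, 7]
```

The hand-off occurs at the same logical point where each step would previously have performed I/O, and no intermediate file is created; the bounded buffers provide the logical synchronization between producer and consumer.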
There are other functions that can be performed (to improve overall system performance) by an asynchronous hardware facility which can be instructed to perform its operations on a small amount of data per request. In another example, in an interactive query processing environment with data records stored in compressed form where a compress/expand asynchronous facility is provided in the CEC, it would be desirable to expand only the data records that are relevant to a particular query, or the subset of the records for the query that are currently available in electronic storage. Also, where a complex query is performed by a repetitive process of input of records and computation on the records read, because of the large amount of data that must be examined, the asynchronous facility can be directly requested to expand the records in the I/O buffer after each read operation. It is impractical in many cases to have all or most of the data records in electronic storage at any one time.
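The selective-expansion idea above can be sketched briefly. Here `zlib` stands in for the CEC's compress/expand facility, and the uncompressed key index used to decide relevance without expansion is an illustrative assumption.

```python
import zlib

records = {"alice": "alice,42", "bob": "bob,17", "carol": "carol,99"}
# Records stored in compressed form, plus an uncompressed key index so that
# relevance to a query can be decided without expanding every record.
store = {key: zlib.compress(value.encode()) for key, value in records.items()}

def query(keys):
    """Expand only the records relevant to this query."""
    return [zlib.decompress(store[k]).decode() for k in keys if k in store]

print(query(["alice", "carol"]))  # ['alice,42', 'carol,99']
```

Only the two requested records are expanded; "bob" stays compressed, mirroring the goal of expanding only the subset of records relevant to the query or currently present in electronic storage.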