1. Field of the Invention
The invention relates generally to device drivers and more particularly to means for allowing a device driver to determine the system processing load of a computer on which it is executing and to select an execution context which meets latency constraints and at the same time reduces use of the interrupt context for execution.
2. Description of the Relevant Art
As a result of the increased performance of microprocessors and memory systems, there are an increasing number of opportunities to develop software applications. The software applications that have been developed to take advantage of these opportunities have included not only what might be considered more traditional software applications, but also applications which emulate or take over functions traditionally implemented in hardware. These applications utilize the microprocessor of a host computer system to perform the algorithm associated with the hardware. By taking advantage of the extra processing power available in current computer systems, software implementations can reduce the cost of the corresponding hardware components in these systems.
A modem is one example of a computer system component which can be implemented in software. Software modems may provide various advantages over hardware modems. Software designs may be more easily changed than hardware designs. Likewise, it may be easier to update the software designs and to provide the updates to users. Applications such as software modems, however, may present complications which do not arise from typical software applications. For instance, modem applications must operate in real-time and must operate under particular latency constraints with respect to the analog converter which is used to interface with the telephone line. Further, modem applications generally require a large portion of the processing time available in the system. The same is true for many other types of software solutions to traditional hardware problems ("soft solutions"), including multimedia, telephony and data communication applications which need real-time servicing by the operating system.
As additional applications are executed, dynamic loading of the system increases. This loading may include increased use of device drivers (e.g., disk drivers). The additional applications thus compete with the soft solutions for use of these drivers and the resources with which they are associated. At the start of execution, applications may require disk accesses to load the executable code. During subsequent execution, data may be needed by the application, resulting in further disk accesses. System loading may increase as a result of computation-intensive applications because of their high processor and data storage utilization.
Likewise, if a user adds new devices to a computing system, new driver software is also added. The additional utilization of existing and added drivers reduces the amount of processing time which is available to individual applications and drivers. In a heavily loaded system, the system's ability to provide real-time service to each of the drivers and applications may be impaired. Therefore, applications which require a great deal of processing time, such as those which implement traditional hardware functions, may seriously degrade system performance.
Producers of the individual drivers and the soft solution applications must nevertheless rely on the operating system to provide service to this software in real time. The operating system must therefore perform with a high degree of determinism. That is, the operating system must be very predictable in terms of providing adequate processing time to support real-time performance in the many applications which may be sharing the host processor's available processing bandwidth. This requires a mechanism by which the applications can be processed in the background while still providing sufficient service to the applications to allow real-time performance.
An operating system typically uses a process level context (also referred to as passive context or background context) and an interrupt context. The interrupt context may include several levels of priority, so that interrupts having the highest level of priority are executed first, then interrupts of the next lower level, and so on. Routines using the interrupt context are executed before routines in the process level context. Real-time routines which use these operating systems execute in the interrupt context in order to ensure that they are serviced by the operating system in a timely manner. The routines must execute in the interrupt context even if they could be adequately serviced at the process level. Because of the potentially great number of routines which may be competing for service at a given time, it would be desirable to provide a means to reduce the number of interrupt routines demanding processing time. Also, because a real-time routine executing at a particular interrupt level may be preempted by an interrupt routine executing at a higher priority level, the first routine still may not receive real-time service.
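The priority ordering described above can be illustrated with a minimal sketch. The constant names and routine names below are hypothetical; a real operating system implements this ordering in its kernel interrupt dispatch, not in application code.

```python
# Toy illustration of the two-context hierarchy: interrupt-context
# routines (at several priority levels) are always serviced before
# process-level (background) routines, and a higher interrupt level
# preempts a lower one. All names here are illustrative assumptions.

PROCESS_LEVEL = 0          # passive / background context
INTERRUPT_BASE = 100       # interrupt levels sit above all process-level work

def dispatch_order(routines):
    """Return routines in the order they would be serviced:
    highest interrupt level first, process level last."""
    return sorted(routines, key=lambda r: r["priority"], reverse=True)

pending = [
    {"name": "disk_isr",  "priority": INTERRUPT_BASE + 2},
    {"name": "modem_isr", "priority": INTERRUPT_BASE + 5},
    {"name": "app_task",  "priority": PROCESS_LEVEL},
]

# The modem interrupt routine runs first; the process-level task runs
# only after every interrupt-context routine has completed.
order = [r["name"] for r in dispatch_order(pending)]
```

This is why real-time routines gravitate to the interrupt context: anything left at process level waits behind every pending interrupt routine.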
One example of a mechanism which has been employed to reduce the number of interrupt-level routines is the use of an additional context level. One operating system includes a deferred procedure call (DPC). This is also referred to as a deferred processing context or DPC context. The DPC context has a higher level of priority than the process context, but a lower priority than the interrupt context. Execution contexts between process-level and interrupt contexts will be referred to herein as DPC contexts.
The DPC context is intended to provide a means for time-critical scheduling to take precedence over process-level tasks, yet still reduce interrupt context processing. The DPC context, however, cannot guarantee timely execution of routines. For example, the DPC mechanism of Windows NT (a trademark of Microsoft Corp.) executes routines in first-in-first-out (FIFO) order. A poorly written routine can spend an excessive amount of time executing in the DPC context, causing other DPC-scheduled routines to be delayed. As a result, the real-time performance of an application associated with one of the delayed routines can be destroyed. Therefore, because the DPC mechanism may be unreliable, it may still be necessary to execute the routine in the interrupt context.
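The FIFO hazard described above can be sketched as follows. The routine names, execution costs, and latency budget are assumptions for illustration; the point is only that a long-running deferred routine delays every routine queued behind it.

```python
# Hypothetical illustration of the FIFO DPC problem: a poorly written
# deferred routine monopolizes the DPC context, so a real-time routine
# queued behind it misses its latency budget.
from collections import deque

def run_dpc_queue(queue):
    """Drain a FIFO DPC queue; return each routine's start time in ms."""
    clock, starts = 0, {}
    q = deque(queue)
    while q:
        name, cost_ms = q.popleft()
        starts[name] = clock     # routine cannot start until all earlier
        clock += cost_ms         # queued routines have finished
    return starts

# A 50 ms "bad" DPC is queued ahead of a modem routine that (say)
# must begin executing within 10 ms to maintain real-time operation.
starts = run_dpc_queue([("bad_dpc", 50), ("modem_dpc", 2)])
missed_deadline = starts["modem_dpc"] > 10
```

Because the modem routine cannot start until the 50 ms routine finishes, its deadline is missed, which is why the DPC context alone cannot guarantee real-time service.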
The present invention provides a means by which drivers can select the context in which they execute. The drivers can dynamically select the context based on the current system processing load. A higher-priority context can be selected to overcome background scheduling latencies when the driver requires immediate servicing. A lower-priority context can be selected when the driver does not require immediate servicing, thereby freeing processing time for other drivers or applications which otherwise would defer to the driver.
Information relating to the system processing load is read from the computing system. This information may be latency data for routines executing in a non-interrupt context or it may be other system loading data. It is determined from this information whether or not a driver executing in a non-interrupt context will be serviced in a timely manner. If the non-interrupt context will provide timely service, the driver is scheduled to execute in this context. The non-interrupt context may be one of several such contexts. If the non-interrupt context will not provide timely service, the driver is executed in the interrupt context.
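The selection decision described above can be sketched as a simple comparison. The measurement name and threshold are assumptions for illustration; the text specifies only that latency or loading data drives the choice between a non-interrupt context and the interrupt context.

```python
# Sketch of the dynamic context-selection decision: stay in a
# lower-priority (non-interrupt) context while observed latency fits
# the driver's budget; escalate to the interrupt context otherwise.
# Parameter names here are hypothetical, not taken from the text.

def select_context(observed_latency_ms, latency_budget_ms):
    """Choose an execution context for the next driver invocation."""
    if observed_latency_ms <= latency_budget_ms:
        return "dpc"        # non-interrupt context will be timely
    return "interrupt"      # heavy load: guarantee service via interrupt

# Lightly loaded system: 3 ms observed latency fits a 10 ms budget,
# so the driver frees interrupt-context time for other software.
ctx_light = select_context(3, 10)

# Heavily loaded system: 40 ms observed latency exceeds the budget,
# so the driver falls back to the interrupt context.
ctx_heavy = select_context(40, 10)
```

Re-evaluating this decision as load information is re-read lets the driver move between contexts dynamically rather than occupying the interrupt context permanently.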