An interrupt is a transitory event or signal that interrupts a process or program currently running on a processor, and which can be asserted by the hardware and/or software of a computer system. When an interrupt is asserted, the processor temporarily suspends the execution of all processes that have a lower priority and immediately begins executing an interrupt service routine to carry out the actions associated with the interrupt. The lower priority processes are suspended in such a way that they may be resumed after the actions associated with the interrupt are completed. For example, an interrupt is asserted by a pointing device, such as a mouse, when a user employs the device to select an icon in a graphical display of a computer system. Once the interrupt is asserted, the system's processor immediately suspends every currently running process that has a lower priority than the interrupt and begins executing the routines that enable selection of the icon with the pointing device. After the routines associated with the interrupt are completed, the processor immediately resumes execution of the temporarily suspended processes.
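The priority-based preemption described above can be sketched in C. This is a minimal simulation under stated assumptions — `dispatch_interrupt`, `current_priority`, and the sample service routine are illustrative names, not drawn from any real kernel API:

```c
/* Minimal sketch of priority-based interrupt dispatch. A pending
 * interrupt preempts the running task only if its priority is higher;
 * the suspended task's context is restored when the ISR completes. */

typedef void (*isr_t)(void);

static int current_priority;  /* priority of the task now running */
static int isr_ran;           /* set by the sample ISR below      */

/* Returns 1 if the interrupt was serviced, 0 if the running task
 * outranks it and keeps the processor. */
int dispatch_interrupt(int irq_priority, isr_t isr)
{
    if (irq_priority <= current_priority)
        return 0;                  /* lower-priority interrupt must wait */

    int saved = current_priority;  /* suspend the current task...        */
    current_priority = irq_priority;
    isr();                         /* ...run the service routine...      */
    current_priority = saved;      /* ...and resume where it left off    */
    return 1;
}

/* A sample service routine, standing in for, e.g., mouse handling. */
void sample_isr(void) { isr_ran = 1; }
```

In a real processor the save and restore of `current_priority` would be a full context save performed by hardware and the interrupt controller; the sketch keeps only the priority comparison that decides whether a process is suspended.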
When developing control programs that operate in real time, a software programmer must take into consideration the time period (latency) required for processing interrupts. Specifically, interrupt latency is the delay between the time an interrupt is asserted and the time that execution of an interrupt service routine for the interrupt begins. Also of interest may be the time required to complete the interrupt service routine. The worst-case latency is usually determined by counting the number of instruction cycles that would be employed by an ideal system to process all actions associated with known interrupts. So long as the interrupt latency does not exceed this worst-case time period, the real time program should be able to operate a machine or assembly line as intended by the programmer. However, if the latency period is longer than the worst-case time contemplated by the programmer, the machines or processes being controlled by the program may fail, because the processor in the controlling computer will not be available when required to properly maintain control of the machines or processes.
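The bookkeeping described above can be sketched as follows, assuming a free-running hardware tick counter that is sampled once when the interrupt is asserted and again at ISR entry (all function names are illustrative assumptions, not part of any real API):

```c
/* Latency is the tick count elapsed between assertion and ISR entry;
 * unsigned subtraction handles counter wrap-around correctly. */
unsigned long compute_latency(unsigned long assert_tick, unsigned long entry_tick)
{
    return entry_tick - assert_tick;
}

/* Worst-case latency budget: the total instruction cycles an ideal
 * system would spend processing all known interrupts. */
unsigned long worst_case_cycles(const unsigned long *isr_cycles, int n)
{
    unsigned long total = 0;
    for (int i = 0; i < n; i++)
        total += isr_cycles[i];
    return total;
}

/* The real time program is safe so long as the measured latency stays
 * within the worst-case budget. */
int latency_ok(unsigned long latency, unsigned long worst_case)
{
    return latency <= worst_case;
}
```

For example, if an interrupt is asserted at tick 100 and its service routine begins at tick 130, the latency is 30 ticks; the program remains safe only while that figure stays within the budget returned by `worst_case_cycles`.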
Certain processes have a higher priority than an asserted interrupt and are thus not affected by interrupts with a lower priority. However, the design of a real time control program may be adversely affected by an extended interrupt latency resulting from the addition of a new program or hardware that increases the number of relatively higher priority interrupts that must be executed by the processor. The additional time required for processing interrupts due to such an addition will not be readily apparent. To avoid causing problems with a real time control process, it would be highly desirable to provide a monitor that determines the actual amount of time required by a real time computer system to execute each interrupt and provides a warning if the maximum time permissible for proper execution of the control process is exceeded. Such a monitor should carry out this function with minimal effect on the processes that are being executed in response to any interrupt.
A real time system can develop software "bugs" that degrade its functionality when the amount of time for processing a particular interrupt exceeds a predetermined time interval. For example, a communication link can suffer timing problems that cause a loss of transmitted data, a patient monitoring system coupled to a patient in intensive care may fail due to excessive interrupt latency, or a monitor for a manufacturing process can lag in providing the actual values associated with the current state of the process. It is well known that determining the cause of a software bug induced by a logical error in a program is relatively simple when compared to finding a bug caused by a time lag in the processing of an interrupt. Also, finding a software bug caused by an intermittent or variable interrupt latency is even more difficult when the excessive latency occurs only infrequently. Since determining whether software bugs are caused by logical errors or by latencies in the processing of interrupts has proven difficult, there is clearly a need for an inexpensive continuous interrupt monitor that indicates the actual latency in processing each interrupt and provides an indication when the latency has exceeded a predetermined value.
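A continuous monitor of the kind called for above could keep a small table indexed by interrupt number. The sketch below is a hypothetical layout, assuming measured latencies are fed in from elsewhere; `NUM_IRQS` and the table names are illustrative:

```c
/* Per-interrupt latency monitor: for each interrupt number it stores
 * the worst latency seen so far and raises a flag when a predetermined
 * limit is exceeded. */

#define NUM_IRQS 16

static unsigned long max_latency[NUM_IRQS];  /* worst latency observed     */
static unsigned long limit[NUM_IRQS];        /* predetermined limit, 0 = off */
static int           exceeded[NUM_IRQS];     /* warning flag per interrupt */

void record_latency(int irq, unsigned long latency)
{
    if (latency > max_latency[irq])
        max_latency[irq] = latency;          /* track the actual worst case */
    if (limit[irq] != 0 && latency > limit[irq])
        exceeded[irq] = 1;                   /* predetermined value exceeded */
}
```

Because the table retains the actual worst-case latency per interrupt, a user can compare readings before and after adding new hardware or a new process to see whether latency has changed.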
One prior art solution to this problem has been to couple an external hardware device, such as a logic analyzer, to a real time system to determine the actual latency of each interrupt. However, a logic analyzer can cost as much as $40,000. Also, a logic analyzer must be continuously coupled to the computer system through an external link, which adds to the inconvenience of employing it. Thus, the high cost and inconvenience of using a logic analyzer to determine the actual latency of interrupts in a real time system have limited its widespread use for this purpose.
Another prior art solution to this problem has been to employ an internal watchdog timer to determine when the latency of an interrupt has exceeded a maximum value. Typically, the watchdog timer is set to the maximum value when an interrupt is asserted and immediately begins counting down toward zero. If the watchdog timer reaches zero before all of the actions associated with the interrupt have been processed, an alarm signal is produced. However, a watchdog timer of this type indicates only that the maximum latency has been exceeded; it does not measure the actual latency of each interrupt. Furthermore, servicing some types of watchdog timers requires processor cycles, which distorts the monitoring of the latency time of an interrupt. Since a typical watchdog timer does not provide for tracking and storing the latency times for each interrupt, the user of a real time system employing a watchdog timer cannot determine whether the latency has changed when new hardware or a new process is added to the computer system. Thus, there is a need for an inexpensive apparatus that can monitor, store, and indicate the actual latency for each different interrupt on the system, without significantly increasing the computational overhead on the processor.
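The count-down behavior of such a watchdog timer can be sketched as follows (function and variable names are illustrative, not a real driver API):

```c
/* Count-down watchdog sketch: loaded with the maximum permitted value
 * when an interrupt is asserted; if it reaches zero before the
 * interrupt's actions complete, an alarm is raised. Note that when the
 * deadline is met, nothing is recorded -- the limitation noted above. */

static unsigned long watchdog_count;
static int alarm_raised;

void watchdog_start(unsigned long max_ticks)
{
    watchdog_count = max_ticks;   /* set to the maximum value */
    alarm_raised = 0;
}

/* Called once per timer tick while the interrupt is being processed. */
void watchdog_tick(void)
{
    if (watchdog_count > 0 && --watchdog_count == 0)
        alarm_raised = 1;         /* interrupt not serviced in time */
}

/* Called when all actions associated with the interrupt complete. */
void watchdog_stop(void)
{
    watchdog_count = 0;           /* deadline met; actual latency is lost */
}
```

The sketch makes the shortcoming concrete: `watchdog_stop` discards the remaining count, so a run that finishes just inside the deadline is indistinguishable from one that finishes instantly.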