Currently, handling events requires significant software processing that could otherwise be used to execute the application. This software processing also leads to wide variation in the latency of event handling.
FIG. 1 illustrates, generally at 100, a current version of an operating system handling an event. At 110 is a timeline for interconnect hardware, at 120 a timeline for the operating system (OS), and at 140 a timeline for the application. These timelines (110, 120, and 140) run concurrently (horizontally), with time progressing vertically from the top to the bottom of the page. For example, the application is executing when, at a point in time 142, it finds it must wait on an event in order to continue, so the application contacts via 142 the operating system (OS) 120, which puts the application into a wait queue 122 where it sits. All of these operations by the OS are performed by a processor that is not working on the application but rather managing the wait queue. At some later point in time the interconnect hardware 110 signals an event via 112 to the OS. An event could be a "message", a distinct event, or an error condition. At that point in time 124 the OS is interrupted and passes the event to the handler process. At 126 the handler associates the event with the correct application and then moves the application to the "Ready" queue for eventual processing. Some time then passes 128 until the scheduler runs. At 130 the scheduler chooses the application from the Ready queue, and via 132 the application resumes processing at 144.
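The blocking path of FIG. 1 (application waits, handler wakes it, scheduler eventually resumes it) can be sketched in user space as a rough analogue using a condition variable. The class and function names here (`EventChannel`, `application`) are illustrative assumptions, not terms from the figure:

```python
import threading

class EventChannel:
    """Rough user-space analogue of one wait-queue entry in FIG. 1."""

    def __init__(self):
        self._cond = threading.Condition()
        self._payload = None

    def wait(self):
        # Analogue of 142/122: the application blocks until the event arrives.
        with self._cond:
            while self._payload is None:
                self._cond.wait()
            return self._payload

    def signal(self, payload):
        # Analogue of 112/124/126: the event arrives and the handler
        # wakes the waiting application, making it runnable again.
        with self._cond:
            self._payload = payload
            self._cond.notify()

chan = EventChannel()

def application():
    # Analogue of 144: execution resumes only after the wakeup,
    # and only once the runtime scheduler actually runs this thread.
    return chan.wait()

signaler = threading.Thread(target=lambda: chan.signal("message"))
signaler.start()
result = application()
signaler.join()
print(result)  # -> message
```

Note that every hop in this sketch (block, wake, reschedule) is mediated by software, which is exactly the processing cost and latency variation the passage above describes.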
FIG. 2 illustrates, generally at 200, a current version of an operating system showing three context switches 212, 234, 244, and an interrupt 227. At 202 is a timeline progressing from the left to the right. At 210 application A is waiting on an event, signaled via 212 to the kernel, where at 214 the kernel adds the process to a queue and then schedules application B, which is signaled via 216 to begin application B processing 220. 212 indicates a context switch, that is, switching from application A to application B. Application B 220 is at some point interrupted by an interrupt 222, which signals the kernel, and at 224 the kernel adds the event to the processing queue and makes the event process "ready". That is, application B was processing normally and was not waiting on any event but was instead interrupted, so it can proceed to execute without any dependencies and is marked ready. At 226, when the interrupt processing is completed (return from interrupt), application B 220 continues execution. At 227 is indicated the interrupt sequence (start, processing, and return). Some time later application B 220 either completes or becomes dependent on an event and so signals the kernel via 228; at 230 the kernel adds the process to a queue and schedules the events process. At 232 the kernel signals the events process, which at 236 makes application A ready to run, which is indicated via 238 to schedule application A 240. At 242 the scheduler activates, and at 246 application A processes the event (on which it was waiting at 212). At 234 is indicated the second context switch, and at 244 the third context switch.
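The FIG. 2 sequence can be replayed as a simple trace to confirm the count of three context switches (the interrupt at 222/226 returns to the same application, so it is not a context switch). The trace entries and entity names below are a descriptive sketch of the figure, not an operating-system model:

```python
# Each tuple is (entity currently running, why control transfers next).
# Reference numerals in the comments refer to FIG. 2.
trace = [
    ("app_A", "waits on event (212)"),           # switch 1: A -> B
    ("app_B", "interrupted, then resumes (227)"),# interrupt, no switch
    ("app_B", "signals kernel (228)"),           # switch 2: B -> events process
    ("events_process", "readies app_A (236)"),   # switch 3: events -> A
    ("app_A", "processes the event (246)"),
]

# A context switch occurs whenever the running entity changes
# between consecutive steps of the trace.
context_switches = sum(
    1 for (a, _), (b, _) in zip(trace, trace[1:]) if a != b
)
print(context_switches)  # -> 3
```

Each of those three switches carries the usual save/restore and scheduling overhead, which is what makes the per-event cost of this path high.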
It is not uncommon for a processor to handle thousands of events or interrupts per second, and thus operating-system handling of events incurs a high processing cost, high latency, and unpredictable latency.
This presents a technical problem for which a technical solution using a technical means is needed.