Input/output (I/O) operations refer to read or write operations that are performed by a processing entity on, for example, a storage device. Generally, such operations are scheduled by operating systems executing I/O scheduling algorithms that decide the sequence and order in which the I/O operations are submitted to the storage devices for processing. In order to minimize time wasted by a storage device, such prior art processes seek to prioritize certain processes' I/O requests, for example to minimize physical arm movement in hard disk drives (HDDs), to share disk bandwidth with other processes, and to guarantee that certain requests will be issued before a particular deadline. This process typically results in the generation of an I/O operations queue, which defines the order of transmission of the I/O operations to the disk subsystem, the disk subsystem including storage devices (such as HDDs, solid-state drives (SSDs), and the like).
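By way of illustration only, one classical way of ordering an I/O queue to reduce arm movement on an HDD is the elevator (SCAN) discipline. The sketch below is a minimal example of that idea; the function name `scan_order` and the block addresses are assumptions for this example and are not taken from any of the cited references.

```python
# Illustrative sketch of elevator-style (SCAN) ordering of pending I/O
# requests by logical block address: serve requests at or above the current
# head position in ascending order, then sweep back through the rest in
# descending order, reducing mechanical seek movement.

def scan_order(requests, head_pos):
    """Return requests reordered for a single SCAN sweep from head_pos."""
    up = sorted(r for r in requests if r >= head_pos)
    down = sorted((r for r in requests if r < head_pos), reverse=True)
    return up + down

# Pending block addresses with the head currently at block 53.
queue = scan_order([98, 183, 37, 122, 14, 124, 65, 67], head_pos=53)
```

The resulting queue visits nearby blocks in one sweep direction before reversing, rather than jumping back and forth across the platter.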
Generally, storage devices operate their own device-internal logic for selectively deciding the order in which operations in a queue of I/O operations are executed, and the device-internal logic may choose the most beneficial I/O operations for execution at each iteration, for example to optimize the throughput of the storage device, or for other reasons. Thus, the storage device may keep “skipping” the execution of less beneficial I/O operations (as seen by the device-internal logic of the storage device) in the scheduled sequence of operations, and such less beneficial I/O operations may remain unprocessed in the storage device for a long time. For example, at each iteration, the device-internal logic may select the most beneficial I/O operation and, due to the resulting constant reordering of the sequence, a less beneficial I/O operation may simply be “stuck” or “starved” in that sequence for a while. This may result in an execution deadline associated with the starved I/O operation being missed.
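The starvation behaviour described above can be illustrated with a small simulation. All names here are hypothetical, and a min-heap merely stands in for whatever device-internal selection logic a real drive applies; this is a sketch of the phenomenon, not of any actual device firmware.

```python
# Hypothetical illustration of I/O starvation: at each iteration the device
# picks the request with the highest (assumed) benefit score; a low-benefit
# request is repeatedly skipped while higher-benefit requests keep arriving.

import heapq

def run(device_queue, arrivals, iterations):
    """device_queue: list of (negated_benefit, op_id) tuples.
    arrivals(step): new operations appearing at each iteration."""
    heapq.heapify(device_queue)
    executed = []
    for step in range(iterations):
        for op in arrivals(step):
            heapq.heappush(device_queue, op)
        if device_queue:
            # Device-internal logic: always execute the "most beneficial" op.
            executed.append(heapq.heappop(device_queue)[1])
    return executed

# "slow_op" has the lowest benefit; a fresh high-benefit op arrives each step,
# so "slow_op" is never selected and remains starved in the queue.
q = [(-1, "slow_op")]
done = run(q, lambda s: [(-10, f"fast_{s}")], iterations=5)
```

After five iterations only the `fast_*` operations have executed, so any deadline attached to `slow_op` would have been at risk of being missed.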
U.S. Pat. No. 8,117,621 B2, granted on Feb. 14, 2012 to International Business Machines Corp. and titled “Simulating a multi-queue scheduler using a single queue on a processor”, teaches a method and system for scheduling tasks on a processor, the tasks being scheduled by an operating system to run on the processor in a predetermined order. The method comprises identifying and creating task groups of all related tasks; assigning the tasks in the task groups to a single common run-queue; selecting the task at the start of the run-queue; determining whether the task at the start of the run-queue is eligible to be run based on a pre-defined allocated timeslice and on the presence of older starving tasks on the run-queue; executing the task in the pre-defined timeslice; and associating a starving status with all unexecuted tasks and running them until all tasks in the run-queue complete execution and the run-queue becomes empty.
United States Patent Application Publication No. 2017/0031713 A1 published on Feb. 2, 2017 to Arm Ltd. and titled “Task scheduling” teaches an apparatus comprising scheduling circuitry, which selects a task as a selected task to be performed from a plurality of queued tasks, each having an associated priority, in dependence on the associated priority of each queued task. Escalating circuitry increases the associated priority of each of the plurality of queued tasks after a period of time. The plurality of queued tasks comprises a time-sensitive task having an associated deadline and in response to the associated deadline being reached, the scheduling circuitry selects the time-sensitive task as the selected task to be performed.
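As an illustration only, the escalation-with-deadline idea summarized above can be sketched as follows. The names `Task`, `escalate`, and `select` are assumptions for this example and are not taken from the published application; the sketch merely mirrors the two described behaviours of the escalating and scheduling circuitry.

```python
# Hypothetical sketch: periodic priority escalation plus a deadline override
# (illustrative only; structure and names are assumed, not the patent's).

class Task:
    def __init__(self, name, priority, deadline=None):
        self.name = name
        self.priority = priority
        self.deadline = deadline  # time-sensitive tasks carry a deadline

def escalate(tasks):
    """Escalating-circuitry analogue: after a period of time, raise the
    associated priority of every queued task."""
    for t in tasks:
        t.priority += 1

def select(tasks, now):
    """Scheduling-circuitry analogue: a task whose deadline has been reached
    is selected immediately; otherwise the highest-priority task wins."""
    for t in tasks:
        if t.deadline is not None and now >= t.deadline:
            return t
    return max(tasks, key=lambda t: t.priority)
```

Because every queued task is escalated uniformly, relative priorities are preserved while the deadline check guarantees that a time-sensitive task cannot be deferred past its deadline.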
U.S. Pat. No. 9,639,396, granted on May 2, 2017 to NXP USA Inc. and titled “Starvation control in a data processing system”, teaches a data processing system including a main list of tasks, a main scheduling scheme, a starvation list of tasks, and a secondary scheduling scheme. A method identifies tasks in the main list that are potentially-starving tasks and places the potentially-starving tasks in the starvation list. A starvation monitor controls starvation of tasks in the system by determining when to use the secondary scheduling scheme to schedule, for execution on a CPU, a highest priority task in the starvation list prior to scheduling, pursuant to the main scheduling scheme, other tasks in the main list. The starvation monitor determines a number of times that a task in the main list is pre-empted, by other tasks in the main list, from being scheduled for execution on the CPU. A counter is incremented each occasion that any task not in the starvation list is executed on the CPU.
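A hedged sketch of this dual-list idea follows. The threshold value, the per-task skip counters, and the function name `pick_next` are all invented for illustration; the patent's actual schemes and counters are not reproduced here.

```python
# Hypothetical sketch of starvation control via a main list and a starvation
# list (names and threshold are assumed, not taken from the patent).

STARVATION_THRESHOLD = 3  # assumed promotion threshold for this example

def pick_next(main_list, starvation_list, skip_counts):
    """main_list: [(priority, name)] tuples. Promote over-skipped tasks to the
    starvation list, then pick the next task to execute."""
    # Promote potentially-starving tasks out of the main list.
    for prio, name in list(main_list):
        if skip_counts.get(name, 0) >= STARVATION_THRESHOLD:
            main_list.remove((prio, name))
            starvation_list.append((prio, name))
    if starvation_list:
        # Secondary scheme: the highest-priority starving task runs first.
        starvation_list.sort(reverse=True)
        return starvation_list.pop(0)[1]
    # Main scheme: plain priority order; every task passed over this round
    # has its pre-emption counter incremented.
    main_list.sort(reverse=True)
    chosen = main_list.pop(0)[1]
    for _, name in main_list:
        skip_counts[name] = skip_counts.get(name, 0) + 1
    return chosen
```

In this sketch a low-priority task that keeps being pre-empted eventually crosses the threshold, moves to the starvation list, and runs ahead of newer high-priority work, which is the bounded-starvation behaviour the patent summary describes.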