1. Technical Field
This invention relates to operating systems management. In particular, this invention relates to adaptive partitioning for operating systems.
2. Related Art
Fair-share scheduling is a scheduling strategy, known in the art for operating systems, in which CPU usage is distributed equally among system users or groups, as opposed to equally among processes. For example, if four users (A, B, C, D) are concurrently executing one process each, the scheduler will logically divide the available CPU cycles such that each user receives 25% of the whole (100%/4=25%). If user B starts a second process, each user will still receive 25% of the total cycles, but each of user B's two processes will now use 12.5% (25%/2=12.5%). On the other hand, if a fifth user starts a process on the system, the scheduler will reapportion the available CPU cycles such that each user receives 20% of the whole (100%/5=20%). Other scheduling methods, such as last-in-first-out (LIFO), round-robin scheduling, rate-monotonic scheduling, and earliest-deadline-first scheduling, are also known.
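The apportionment described above can be illustrated with a minimal sketch. The function name `fair_share` and its dictionary-based interface are hypothetical conveniences for illustration, not part of any described system; the sketch only reproduces the two-level division (equally among users, then equally among each user's processes):

```python
def fair_share(users):
    """Given a mapping of user -> list of process names, return the
    fraction of CPU cycles allotted to each process under fair-share
    scheduling: cycles are split equally among users, and each user's
    share is split equally among that user's processes."""
    per_user = 1.0 / len(users)  # equal split among users, not processes
    shares = {}
    for user, procs in users.items():
        for proc in procs:
            # each process gets an equal slice of its owner's share
            shares[proc] = per_user / len(procs)
    return shares

# Four users, one process each: every process gets 25% of the CPU.
print(fair_share({"A": ["a1"], "B": ["b1"], "C": ["c1"], "D": ["d1"]}))

# User B starts a second process: b1 and b2 each drop to 12.5%,
# while the other users' processes keep 25%.
print(fair_share({"A": ["a1"], "B": ["b1", "b2"], "C": ["c1"], "D": ["d1"]}))
```

Note how this differs from per-process fairness: adding a process changes only its owner's per-process slice, whereas adding a new user reapportions every user's share.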
In a conventional fair-share scheduling system, a high-priority workload can achieve a low response time only at the cost of a higher response time for a lower-priority workload. Low-priority processes can tax a microprocessor's resources by consuming large quantities of the CPU budget, which may leave little budget available for processes that are executed infrequently but must run immediately when needed. In addition, an untrusted application may gain access to a CPU resource and enter an infinite loop, starving legitimate processes of their required CPU budgets. Therefore, a need exists for an operating-system scheduling strategy that allows critical processes adequate access to system resources when needed.