High performance computing is a term of art referring to the use of clusters of servers to perform complex processing. Such clusters may comprise hundreds or even thousands of servers, each of which may have multiple processing cores, and very complex computational jobs are often performed using such clusters. Despite a cluster's ability to perform a large number of processing operations per second, some computational jobs may still take minutes, hours, days, weeks, or even months to complete. Furthermore, processing jobs may be submitted at a much faster rate than the cluster is capable of completing them.
In addition, some processing jobs may be of higher priority than others. Conventional clusters are equipped with schedulers that allow higher priority processing jobs to preempt lower priority processing jobs. During a scheduling pass, the scheduler evaluates queued processing jobs to determine whether any higher priority queued job should preempt a lower priority running job. If so, the lower priority running job is stopped, freeing up resources for the higher priority queued job, which the cluster then begins processing. This improves the chance that higher priority processing jobs will begin and complete sooner than lower priority processing jobs.
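The scheduling pass described above can be sketched in a few lines. The following is a minimal, illustrative model only, not any particular cluster's scheduler: the `Scheduler`, `Job`, and `slots` names are hypothetical, a lower `priority` number is assumed to mean higher priority, and a preempted job is assumed to simply stop and return to the queue.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Job:
    priority: int                 # hypothetical convention: lower number = higher priority
    name: str = field(compare=False)

class Scheduler:
    """Minimal sketch of preemptive priority scheduling (illustrative assumptions only)."""

    def __init__(self, slots: int):
        self.slots = slots            # how many jobs the cluster can run concurrently
        self.queued: list[Job] = []   # min-heap keyed on priority
        self.running: list[Job] = []

    def submit(self, job: Job) -> None:
        heapq.heappush(self.queued, job)

    def scheduling_pass(self) -> list[str]:
        """Start queued jobs in free slots; preempt lower priority running jobs
        when a higher priority job is queued. Returns names of preempted jobs."""
        preempted: list[str] = []
        # First, fill any free slots with the highest priority queued jobs.
        while self.queued and len(self.running) < self.slots:
            self.running.append(heapq.heappop(self.queued))
        # Then, if a queued job outranks the lowest priority running job, swap them.
        while self.queued and self.running:
            worst = max(self.running, key=lambda j: j.priority)
            if self.queued[0].priority < worst.priority:
                self.running.remove(worst)          # stop the lower priority job
                preempted.append(worst.name)
                heapq.heappush(self.queued, worst)  # re-queue the stopped job
                self.running.append(heapq.heappop(self.queued))
            else:
                break
        return preempted
```

For example, with two slots occupied by jobs of priority 4 and 5, submitting a priority-1 job and running a scheduling pass would stop the priority-5 job, re-queue it, and start the priority-1 job in its place.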