The cloud computing environment is an enhancement of the predecessor grid environment, whereby multiple grids and other computation resources may be further abstracted by a cloud layer, making disparate devices appear to an end-user/consumer as a single pool of seamless resources. These resources may include physical or logical compute engines, servers and devices, device memory, storage devices, etc.
Current systems for managing service demand load relative to infrastructure capacity in a networked (e.g., cloud) computing environment rely on task prioritization: tasks having a higher priority receive a relatively greater portion of available resources, and tasks having a lower priority receive a relatively smaller portion. These systems do not take into account actual timeliness requirements for consumer-initiated workloads or tasks, wherein some tasks have critical and/or short-term deadlines for completion, while other tasks can be completed over a longer period as resources become available at a lower cost. Failing to account for the actual timeliness requirements of computing workloads or tasks may require significant capital investment in additional capacity to handle peak demand loads on networked (e.g., cloud) computing systems.
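The contrast above can be sketched in code. The following is a minimal illustration, not any particular system's implementation: the `Task` record, capacity model, and both allocator functions are hypothetical. The priority-only allocator mirrors the current systems described, while the deadline-aware allocator schedules tasks with near-term deadlines first and defers flexible tasks until capacity frees up at lower cost.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass(frozen=True)
class Task:
    name: str
    priority: int              # higher value = higher priority
    deadline: Optional[float]  # seconds until required completion; None = flexible
    demand: int                # resource units required

def priority_only_schedule(tasks: List[Task], capacity: int) -> List[str]:
    """Allocate capacity strictly by priority, ignoring deadlines
    (the approach described for current systems)."""
    scheduled = []
    for t in sorted(tasks, key=lambda t: -t.priority):
        if t.demand <= capacity:
            capacity -= t.demand
            scheduled.append(t.name)
    return scheduled

def deadline_aware_schedule(tasks: List[Task], capacity: int) -> List[str]:
    """Admit deadline-bearing tasks first (earliest deadline first),
    then fill remaining capacity with flexible tasks by priority.
    Flexible tasks that do not fit are simply deferred to a later,
    cheaper capacity window rather than forcing peak over-provisioning."""
    urgent = sorted((t for t in tasks if t.deadline is not None),
                    key=lambda t: t.deadline)
    flexible = sorted((t for t in tasks if t.deadline is None),
                      key=lambda t: -t.priority)
    scheduled = []
    for t in urgent + flexible:
        if t.demand <= capacity:
            capacity -= t.demand
            scheduled.append(t.name)
    return scheduled
```

With 10 units of capacity, a high-priority batch report with no deadline, and a lower-priority fraud check due in 5 seconds, the priority-only allocator admits the batch report and starves the urgent task, whereas the deadline-aware allocator runs the fraud check now and defers the batch report to off-peak capacity.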