In a cloud computing environment, multiple virtual datacenters (VDCs) can be implemented using physical devices, such as host computers and storage devices, which support virtual machines (VMs) and the applications executed by those VMs. A VDC is an example of a resource pool (RP), which is a logical container representing an aggregate resource allocation for a collection of VMs. A single VDC may support multiple RPs. Resource management techniques for VDCs are important to ensure that applications running on the VDCs operate at their service level objectives (SLOs). Existing resource allocation techniques offer powerful resource control primitives, such as reservations, limits, and shares, which can be set at the VM level or at the RP level (including VDCs) to ensure that the SLOs of applications running on the VDCs are met.
These resource control primitives allow administrators to control the absolute and relative amounts of resources consumed by a VM or an RP (including a VDC). However, determining the right settings for the resource control primitives can be extremely challenging for several reasons. For example, different VMs supporting the same application may require different amounts of resources to meet the application's performance targets. In addition, an application running on a VDC may have time-varying demands, so resource control settings determined for one period of time may become ineffective at a later period of time. Thus, setting the resource controls for multiple RPs such that the applications running on those RPs receive enough resources to meet their respective SLOs becomes a nearly intractable task.
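To make the semantics of these primitives concrete, the following is a minimal sketch of how a single divisible resource (e.g., CPU cycles) might be divided among VMs under reservations, limits, and shares: each VM is first granted its reservation, and the remaining capacity is then distributed in proportion to shares, capping each VM at its limit. The single-resource model, the `Control` structure, and the `allocate` function are illustrative assumptions, not an implementation described in the source.

```python
from dataclasses import dataclass

@dataclass
class Control:
    reservation: float  # guaranteed minimum allocation (assumed units, e.g., MHz)
    limit: float        # hard cap on the allocation
    shares: int         # relative weight used when VMs contend for the remainder

def allocate(capacity: float, controls: dict[str, Control]) -> dict[str, float]:
    """Grant each entry its reservation, then water-fill the remaining
    capacity in proportion to shares, capping entries at their limits."""
    alloc = {name: c.reservation for name, c in controls.items()}
    remaining = capacity - sum(alloc.values())
    active = {n for n, c in controls.items() if alloc[n] < c.limit}
    while remaining > 1e-9 and active:
        total_shares = sum(controls[n].shares for n in active)
        grant = {n: remaining * controls[n].shares / total_shares for n in active}
        remaining = 0.0
        for n in list(active):
            proposed = alloc[n] + grant[n]
            cap = controls[n].limit
            if proposed >= cap:
                # Limit reached: clamp and return the excess to the pool.
                remaining += proposed - cap
                alloc[n] = cap
                active.remove(n)
            else:
                alloc[n] = proposed
    return alloc

# Example: 10000 units of capacity split between two VMs.
result = allocate(10000, {
    "vm_a": Control(reservation=1000, limit=4000, shares=2000),
    "vm_b": Control(reservation=2000, limit=10000, shares=1000),
})
# vm_a is clamped at its 4000 limit; vm_b absorbs the rest (6000).
```

The example also illustrates why the settings are hard to choose: vm_a's 2:1 share advantage is nullified by its limit, so the observed allocation depends on the interaction of all three primitives, not on any one of them.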