In a typical cloud-based computing environment, a server may assign workloads to compute nodes in a network to perform services on behalf of a client. It is in the interest of the entity operating the cloud environment to maximize the resource utilization of each compute node, so that the cloud environment provides the most performance with the available hardware. However, a resource (e.g., a component, such as a processor, memory, communication circuitry, etc.) available in a compute node may become overloaded if the server assigns multiple workloads that rely heavily on that resource. As a result, the overall performance of the compute node may be adversely affected, even when the other resources in the compute node are nearly idle. A further challenge is that a workload may not consistently exhibit the same resource utilization; its utilization may instead vary over time, making heavy use of one resource and then transitioning to making heavy use of another resource. As such, it may be difficult for an administrator or server to determine an assignment of workloads among the compute nodes that consistently provides high resource utilization without overloading any individual resource.
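The placement difficulty described above can be illustrated with a minimal greedy sketch (a hypothetical illustration, not a method from the source; all names and data structures are assumptions). Each workload is assigned to the node whose highest per-resource utilization after placement would be smallest, so that workloads heavy on the same resource tend to be spread across nodes rather than co-located on one:

```python
def place(workloads, nodes):
    """Hypothetical greedy multi-resource placement sketch.

    Each workload is a dict mapping a resource name (e.g., "cpu", "mem")
    to its expected fractional utilization of that resource. Each node
    carries a running "load" dict of the same shape. A workload goes to
    the node whose worst-case (maximum) per-resource utilization after
    the assignment would be lowest, avoiding overload of any one resource.
    """
    for w in workloads:
        best = min(nodes, key=lambda n: max(n["load"][r] + w[r] for r in w))
        for r in w:
            best["load"][r] += w[r]
    return nodes


nodes = [
    {"name": "node-a", "load": {"cpu": 0.0, "mem": 0.0}},
    {"name": "node-b", "load": {"cpu": 0.0, "mem": 0.0}},
]
workloads = [
    {"cpu": 0.6, "mem": 0.1},  # processor-heavy
    {"cpu": 0.6, "mem": 0.1},  # processor-heavy
    {"cpu": 0.1, "mem": 0.6},  # memory-heavy
    {"cpu": 0.1, "mem": 0.6},  # memory-heavy
]
place(workloads, nodes)
```

With these example figures, each node ends up hosting one processor-heavy and one memory-heavy workload, so no single resource on either node exceeds 0.7 utilization; naively packing both processor-heavy workloads onto one node would instead push its processor to 1.2 while its memory sat nearly idle. A static sketch like this also shows why time-varying utilization is harder still: the per-resource figures would have to be re-estimated and placements revisited as workloads shift phases.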