A client-server model is commonly employed to execute computing applications. In such a model, the tasks of an application are typically distributed between a server and a client in accordance with a predetermined, static partition that is defined prior to runtime. The client may perform certain tasks using resources of the client device on which the client is installed, and may offload other tasks by requesting that the server perform them using resources of the server device on which the server is installed. Computationally intensive tasks are commonly assigned to the server to utilize the computing resources of the server device, which typically exceed those of the client device.
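The static partition described above can be illustrated with a minimal sketch. The partition table, task names, and helper functions below are hypothetical assumptions for illustration, not drawn from any particular system; a real client would issue a network request where the server-side helper is called.

```python
# Hypothetical static partition, fixed before runtime: each task is
# pinned to either the client or the server and cannot be re-balanced.
STATIC_PARTITION = {
    "render_ui": "client",        # latency-sensitive, runs locally
    "parse_input": "client",
    "transcode_video": "server",  # computationally intensive, offloaded
    "train_model": "server",
}

def run_on_client(task: str) -> str:
    # Executes using the (limited) resources of the client device.
    return f"{task}: executed locally on client"

def run_on_server(task: str) -> str:
    # Stands in for a request to the server; a real client would pay a
    # network round trip here, which is why latency-sensitive tasks
    # cannot practically be assigned to the server.
    return f"{task}: offloaded to server"

def execute(task: str) -> str:
    # The client consults the predetermined partition to decide where
    # each task runs; the decision is static, not made at runtime.
    where = STATIC_PARTITION[task]
    return run_on_client(task) if where == "client" else run_on_server(task)

print(execute("render_ui"))
print(execute("transcode_video"))
```

The rigidity of this scheme is the point: because the partition is fixed before runtime, it cannot adapt when a task turns out to be both latency-sensitive and computationally intensive.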
However, the latency of data communications between the client and the server in a traditional client-server architecture, such as a traditional cloud-service architecture, may be too great to allow latency-sensitive tasks of an application to be assigned to the server. In addition, the limited resources of the client device may preclude the client from performing latency-sensitive tasks that are also computationally intensive. Moreover, the cost of scaling a traditional client-server architecture to virtualize client-device compute grows linearly with the number of client devices served and is therefore not economical.