In a conventional centralized cloud environment, all computing is typically executed in a single large data center. In contrast, a distributed cloud comprises a potentially high number of geographically dispersed data centers rather than only one central data center. These geographically dispersed data centers have different capabilities: some may be relatively small and located at the edge of the network comprising the distributed cloud environment, whereas others may be located at the core of the network and have a very high capacity.
Traditionally, Unified Communications (UC) services, such as multiparty audio and video conferencing, have been provided using dedicated server hardware and Digital Signal Processors (DSPs). Today, there is an increasing trend to migrate hardware-based UC solutions to a fully software-based cloud environment. The first step in this migration is to provide software-based UC services in a centralized cloud environment. The next foreseen step is to provide them in a distributed cloud environment.
FIG. 1 illustrates a simple example of a distributed cloud environment, in the following also referred to as network 1. In the figure, a distributed cloud 2 provides a video conference service for three users A, B, C. Media processing is distributed in the cloud 2 such that each of the users A, B, C is served by a local Media Resource Function Processor (MRFP) instance 3A, 3B, 3C located close to that user at the edge of the network 1. Further, processing such as audio mixing and switching for the video conference is handled by an MRFP 3 in a large data center at the core of the network 1. Each MRFP instance 3A, 3B, 3C runs in a virtual machine within a respective data center.
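By way of illustration only, the topology of FIG. 1 can be modeled as a small data structure: each user is anchored at an edge MRFP instance, and a media stream between two users traverses the sender's edge MRFP, the core MRFP, and the receiver's edge MRFP. The following Python sketch assumes hypothetical class and field names; it is not part of any actual MRFP implementation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MRFP:
    """One Media Resource Function Processor instance in a data center."""
    name: str
    location: str  # "edge" or "core" (illustrative attribute)

@dataclass
class Conference:
    """A conference served by one core MRFP and one edge MRFP per user."""
    core: MRFP
    edge: dict  # user id -> edge MRFP close to that user

    def media_path(self, sender, receiver):
        """Chain of MRFPs a media stream traverses between two users."""
        return [self.edge[sender], self.core, self.edge[receiver]]

# The example of FIG. 1: users A, B, C, edge instances 3A, 3B, 3C, core MRFP 3.
conf = Conference(
    core=MRFP("MRFP-3", "core"),
    edge={u: MRFP(f"MRFP-3{u}", "edge") for u in "ABC"},
)
print([m.name for m in conf.media_path("A", "B")])
# → ['MRFP-3A', 'MRFP-3', 'MRFP-3B']
```

The sketch captures only the routing of a stream through the chain of virtual machines; codec handling and mixing are deliberately omitted.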
A reason for distributing media processing across several virtual machines (i.e. a chain of virtual machines) is that the capacity of a single virtual machine is typically not sufficient to handle the media processing for all the users in a conference. This is especially true, for example, in a high-definition video conference where users employ different codecs and transcoding is therefore required.
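The capacity argument can be made concrete with a back-of-the-envelope calculation. The cost and capacity figures below are purely illustrative assumptions, not measured values: once each participant also needs transcoding, the aggregate load exceeds what one virtual machine can carry, so the work must be split across a chain of virtual machines.

```python
import math

VM_CAPACITY = 4.0     # assumed processing units one virtual machine can sustain
COST_MIX = 0.5        # assumed cost to mix/switch one participant's stream
COST_TRANSCODE = 2.0  # assumed extra cost when a participant needs transcoding

def vms_required(participants, transcoded):
    """Minimum number of VMs needed if the media load is split across a chain."""
    total_load = participants * COST_MIX + transcoded * COST_TRANSCODE
    return math.ceil(total_load / VM_CAPACITY)

# Three participants, all on the same codec: one VM suffices.
print(vms_required(3, 0))  # → 1
# Three participants, each on a different codec, so all need transcoding:
print(vms_required(3, 3))  # → 2  (a single VM is no longer sufficient)
```

Under these assumed numbers, transcoding alone more than doubles the per-conference load, which is why a high-definition conference with mixed codecs typically cannot be served by a single virtual machine.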
Distributing the media processing to virtual machines in different data centers is beneficial because latency is minimized, and responsiveness maximized, when media processing occurs as close to the conference participants as possible. Minimizing latency improves the quality of the service as experienced by the users.
A distributed cloud environment may thus comprise multiple heterogeneous and geographically dispersed data centers. However, such distributed cloud environments also have drawbacks, and fully software-based media processing in a distributed cloud environment faces several challenges. Any virtual machine in the chain of virtual machines contributing to the distributed processing of media streams in an audio or video call or conference may become overloaded, which may result in high latencies and jitter. The transition network interface→hypervisor→virtual machine at each physical server may introduce a high amount of jitter if the physical server runs multiple virtual machines in parallel, which is typically the case. Further, virtual machines as well as physical servers may crash. A data center may lose its connectivity and become unavailable. Further still, the network links connecting the data centers may become congested, which may result in high packet loss, jitter and latencies, among other things.
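The impact of a single overloaded hop can be illustrated with a minimal sketch: end-to-end delay is the sum of the per-hop delays along the chain of virtual machines, so congestion or overload at any one hop inflates the total. The per-hop delay values below are assumed for illustration only.

```python
def end_to_end_delay(per_hop_delays_ms):
    """End-to-end delay is the sum of per-hop delays along the VM chain."""
    return sum(per_hop_delays_ms)

# Assumed nominal delays for user A -> edge MRFP -> core MRFP -> edge MRFP -> user B:
nominal = [5.0, 10.0, 10.0, 5.0]
# The same path when the core MRFP's virtual machine is overloaded:
overloaded = [5.0, 10.0, 60.0, 5.0]

print(end_to_end_delay(nominal))     # → 30.0 (ms)
print(end_to_end_delay(overloaded))  # → 80.0 (ms)
```

One overloaded virtual machine in the chain is enough to dominate the total delay, and since the overload varies over time, the variation shows up at the receiver as jitter.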
From the above it is clear that the end-to-end delay, jitter and packet loss of media packets flowing through the distributed cloud can be highly unpredictable. As a result, operators of the network 1 may encounter difficulties in providing an agreed-upon quality of service to their users.