Multi-party audio and/or video conference calls can involve participants or clients with a wide variety of preferences and capabilities. A client using a mobile phone to connect to the conference, for example, may have low uplink and downlink bandwidth and support only low frame rate video. On the other hand, a client connecting to the conference using a desktop computer on a corporate intranet may have high uplink and downlink bandwidth and support a high frame rate. A mobile phone client may, for example, only be able to encode and receive video at Common Intermediate Format (CIF) resolution (e.g., 352 by 288 pixels per frame) at a frame rate of 15 frames per second (fps), while the intranet client may be able to encode and play back video at Video Graphics Array (VGA) resolution (e.g., 640 by 480 pixels per frame) at a frame rate of 30 fps. Consequently, the mobile phone client may not be able to send or receive the same quality video stream as the intranet client.
The conventional solution to the aforementioned problem involves degrading the video quality for all participating clients to the maximum level that the lowest-performing client can handle. That is, the conferencing system may force a higher-capability client to compromise and sacrifice its conferencing capabilities by encoding and receiving video streams at a lower resolution and frame rate than it could otherwise handle. Although this approach supports lower-capability clients, it leaves the higher-capability clients with a sub-par conferencing experience that falls short of their abilities. Further, this approach is inefficient, since it leaves a portion of the higher-capability clients' processing power and bandwidth unused.
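The lowest-common-denominator negotiation described above can be sketched as taking the per-dimension minimum over all participants' capabilities. The `ClientCapability` record and `lowest_common_capability` function below are hypothetical names chosen for illustration, not part of any particular conferencing system:

```python
from dataclasses import dataclass


@dataclass
class ClientCapability:
    """Hypothetical per-client media capability record."""
    name: str
    width: int        # frame width in pixels
    height: int       # frame height in pixels
    frame_rate: int   # frames per second


def lowest_common_capability(clients):
    """Conventional approach: degrade every stream to the floor
    set by the least capable participant in each dimension."""
    return ClientCapability(
        name="conference",
        width=min(c.width for c in clients),
        height=min(c.height for c in clients),
        frame_rate=min(c.frame_rate for c in clients),
    )


# Example from the text: a CIF mobile client and a VGA desktop client.
mobile = ClientCapability("mobile", 352, 288, 15)
desktop = ClientCapability("desktop", 640, 480, 30)
floor = lowest_common_capability([mobile, desktop])
# The desktop client is forced down to 352x288 at 15 fps,
# leaving its extra bandwidth and processing power unused.
```

This makes the inefficiency concrete: the conference floor is dictated entirely by the mobile client, regardless of what the desktop client could encode or play back.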