In communication systems, the transmission system has to make decisions about which users to schedule, and with what rates. Such decisions are made by a scheduling algorithm within the system, and transmissions in line with such decisions are formed by the multi-user transmission system, e.g. by choice of the MU-MIMO signaling.
However, a major problem in most existing systems is the fact that the maximum possible rate that can be delivered to the user in each scheduling cycle is often not known exactly. For example, in a wireless system the signal, noise and interference levels as seen by each user in a scheduling cycle are all important in being able to determine the rate that can be reliably sent to this receiving user. Unfortunately in many cellular systems such levels are not known exactly at the transmitting party at the time of transmission.
As an example, the inter-cell or inter-cluster interference (ICI) level varies from scheduling cycle to scheduling cycle and is often unknown for at least some scheduling cycles. Furthermore, ICI levels can often be greater than the useful-signal level for many users. Unknown ICI and other sources of uncertainty in the system related to predicting user rates can have a significant effect on the efficiency of both the scheduling and the MU-MIMO downlink signaling schemes, since such schemes depend on such predictions.
The uncertainty in the rate the channel can support by its nature necessitates the use of an error recovery mechanism, such as ARQ. This is because the rate transmitted to a user may not be supportable by the instantaneous channel the user is experiencing.
As further background, it should be noted that user-scheduling, ARQ and MU-MIMO methods are used in many state of the art multi-user systems, and are individually well known to those familiar with the state of the art.
In many state of the art systems each base-station cluster controller (or base-station) has a number of UTs to serve. For each scheduling cycle, the scheduler in the system chooses the UTs to schedule and the power and rate allocation to such UTs. Such decisions are often based on estimates of the instantaneous channels, or quality of instantaneous channels, between the cluster and its UTs. Exploiting the variation in such channels, e.g. scheduling users in scheduling cycles when such users have very good channels, can improve overall system performance. This variation is thus exploited by the scheduler.
In trivial systems, e.g. those that simply maximize per cell throughput, the criterion directing the operation of the scheduler can often lead to users not being served fairly. For example, under a criterion that maximizes per cell throughput, it often makes sense to serve users near base-stations, thereby starving users at cell edges. Thus, user scheduling is an essential component in many systems, not only to increase performance by exploiting channel variations, but also to ensure user fairness with respect to a chosen criterion.
There are many such scheduling methods, known to those familiar with the state of the art. One such example is a “Proportionally Fair Scheduler (PFS)”, which tries to maximize the geometric mean of the per user average throughput. Another is a “MaxMin” scheduler, which tries to maximize the minimum per user average rate. Many schedulers, PFS and MaxMin being two such examples, can be implemented by maximizing a weighted sum rate criterion, where the weights are linked intimately to the criterion. In one commonly used scheduling-weight generation mechanism for PFS, the weights are set as the inverse of the average throughput of each user, which is estimated in a loop with the physical layer parameters. An alternative method for generating the user-specific scheduling weights, which can be used for both MaxMin and PFS as well as a very broad range of other scheduling criteria, relies on the use of virtual queues. These procedures are well known to those familiar with the state of the art.
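The PFS weight mechanism described above can be illustrated with a minimal sketch. The function name, the single-user-per-cycle assumption, and the EWMA smoothing factor `alpha` are all hypothetical choices for illustration, not taken from any specific system.

```python
# Illustrative sketch of PFS scheduling-weight generation. Assumes one
# user scheduled per cycle; average throughputs are tracked with an
# exponentially weighted moving average (EWMA). All names and the
# smoothing factor alpha are hypothetical.

def pfs_schedule(avg_throughput, inst_rates, alpha=0.05):
    """Run one scheduling cycle of a single-user PFS scheduler.

    avg_throughput: per-user average throughput estimates
    inst_rates:     per-user instantaneous supportable rates
    Returns (scheduled user index, updated average throughputs).
    """
    # PFS weight of each user is the inverse of its average throughput;
    # the scheduled user maximizes weight * instantaneous rate.
    weights = [1.0 / max(t, 1e-9) for t in avg_throughput]
    scores = [w * r for w, r in zip(weights, inst_rates)]
    user = max(range(len(scores)), key=scores.__getitem__)

    # Update the averages in a loop with the physical layer: the
    # scheduled user accrues its delivered rate, the others accrue zero.
    new_avg = [(1 - alpha) * t + alpha * (inst_rates[i] if i == user else 0.0)
               for i, t in enumerate(avg_throughput)]
    return user, new_avg

# The weak user (index 1) wins the cycle despite a lower instantaneous
# rate, because its low average throughput gives it a large PFS weight.
user, avgs = pfs_schedule([1.0, 0.2, 0.5], [2.0, 1.5, 1.0])
```

Note how the inverse-throughput weights implement the fairness/throughput trade-off: a user starved of service accumulates a large weight until it is scheduled.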
Effective operation of such schedulers depends often on being able to predict what rates can be delivered in each scheduling cycle to each user, if the user was scheduled in the scheduling cycle. The rates that can be delivered in each case depend on knowing the quality of the overall instantaneous channel at each UT. There are many well known methods to do so. The rate depends on, among other things, the power or covariance of the combined interference caused by all neighboring cluster transmissions. As noted earlier, this interference is referred to as intercluster interference (ICI).
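The dependence of the predicted rate on the ICI power can be sketched with a Shannon-style calculation. This is a simplified illustration under assumed power values; real systems map a measured channel quality to a rate via modulation-and-coding tables rather than the Shannon formula directly.

```python
import math

# Shannon-style sketch of rate prediction from signal, noise and ICI
# power levels. The function and the numbers below are illustrative
# assumptions, not a specific system's rate-adaptation rule.

def predicted_rate(signal_power, noise_power, ici_power, bandwidth_hz):
    """Predicted supportable rate (bits/s) given an assumed ICI power."""
    sinr = signal_power / (noise_power + ici_power)
    return bandwidth_hz * math.log2(1.0 + sinr)

# If the cluster assumes no ICI but the UT actually experiences ICI at
# 9x the noise level, the predicted rate is optimistic and the
# transmission may be in outage.
optimistic = predicted_rate(10.0, 1.0, 0.0, 1e6)  # assumed ICI = 0
actual = predicted_rate(10.0, 1.0, 9.0, 1e6)      # ICI actually present
```

The gap between `optimistic` and `actual` is exactly the prediction error that unknown ICI introduces into the scheduler's rate decisions.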
The ICI power level experienced by a user depends on the instantaneous channels between the UT and neighboring cluster antennas, and on the MU-MIMO precoded signals transmitted by these clusters. As a result, the instantaneous ICI experienced by each UT is in general not known to its cluster at the time of scheduling and transmission. This is because, by definition, clusters do not fully coordinate.
Other related uncertainties in quantities, such as partial channel-state-information (CSI) regarding the channel between each UT and its cluster, yield similar uncertainties in determining the transmission rates that can be supported by each UT instantaneous channel.
Inevitably, the uncertainties in the predicted user rates lead to losses in performance, since the scheduler is not able to make perfect decisions in each scheduling cycle. For example, a transmission to a user may fail if the transmission rate to the user is set higher than what the instantaneous channel can support. In such cases, outages in transmissions occur and the system can waste all or part of the resource that was available in the scheduling cycle.
One way to reduce such losses is to reduce the probability of such outages. A simple way to do so is to lower the transmission rate to users such that the probability of an outage event is at an acceptable level. However, this often means the system operates conservatively and well below the potential throughput it can deliver. This is particularly true for edge users, whose average possible rate in a scheduling cycle is often far less than the larger instantaneous rates they could obtain when the channels to such users are known and instantaneously favorable.
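The conservative rate backoff described above can be sketched as an empirical-quantile rule: given samples (or a model) of the uncertain instantaneous rate, choose the largest transmission rate whose outage probability stays at the target. The function name, the quantile approach, and the target value are illustrative assumptions.

```python
# Sketch of conservative, outage-based rate selection. Given samples of
# the uncertain instantaneous supportable rate, pick the largest
# transmission rate whose empirical outage probability stays at or
# below an acceptable target. Names and values are illustrative.

def outage_backoff_rate(rate_samples, target_outage):
    """Largest rate r with empirical P(instantaneous rate < r) <= target."""
    ordered = sorted(rate_samples)
    n = len(ordered)
    k = int(target_outage * n)   # number of samples allowed to fall below r
    # Transmitting at ordered[k] leaves at most k of the n samples in
    # outage, i.e. an outage fraction of at most target_outage.
    return ordered[k]

# With instantaneous rates uniform over 1..100 and a 5% outage target,
# the system backs off to rate 6, far below the mean rate of 50.5 --
# illustrating how conservative operation sacrifices throughput.
conservative_rate = outage_backoff_rate(list(range(1, 101)), 0.05)
```

For an edge user with a heavy lower tail of rates, the selected rate is driven by the worst few percent of channel states, which is exactly the conservatism the text describes.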
Another way is to design the system such that such uncertainties are minimized. Reducing uncertainty can, for example, be achieved by techniques such as:
- Increasing the frequency reuse factor, which lowers ICI levels so that, even with variation, their effect on rate is small. However, an increased reuse factor means fewer available resources per cluster (or cell) for transmission, and often an overall loss in performance.
- Reducing the variation in scheduling and transmission signals from scheduling cycle to scheduling cycle so that clusters can estimate ICI. This demonstrates the interplay between scheduling and losses. However, since such methods constrain the scheduler, system efficiency can be reduced. These methods also do not directly address the issue of how to link the error recovery mechanism to the scheduling, except insofar as they make the job of the scheduler somewhat easier (but not easy).
Reduction in uncertainties can reduce inefficiencies. However, once uncertainties exist about potential deliverable rates, outage based approaches will not be able to fully use all available transmission resources as failed transmissions represent unused transmission opportunities.
In contrast to such methods, one can consider ARQ methods. These are a class of methods suited for settings where outage rates are high. Such methods are based on exploiting successful-decoding acknowledgements sent by each UT to its cluster via a low-rate feedback channel. One particular class of such methods is known as Hybrid ARQ methods. These not only exploit feedback but are also able to reuse past unsuccessful transmissions to help in the decoding of future transmissions. In this way, the waste from “outages” is minimized.
For example, take the case where a single user is served within a sequence of scheduling cycles and the instantaneous deliverable rate to the user fluctuates within the scheduling interval. If each scheduling cycle transmission is independent, then the system operates with outage: some transmissions are lost and others possibly received.
A Hybrid-ARQ (H-ARQ) scheme has dependencies between scheduling cycle transmissions, such that transmissions help each other. For example, those scheduling cycles experiencing times of low possible delivered rates can be helped by other transmissions where channels are more favorable. In fact, a properly designed H-ARQ system can deliver rates close to the maximum sum throughput (the sum of all instantaneous deliverable rates over all scheduling cycles) that can be supported by the channel over the scheduling interval, albeit at a cost of decoding delay for various information bits. In general, the higher the decoding delay that can be tolerated by the user (application), the closer the delivered user throughput can be made to its maximum possible level. In the context of a user-specific (e.g., application-based) decoding delay constraint, one would want the H-ARQ system designed (i.e., its parameters optimized) in such a way that it maximizes the rate delivered to the user among all systems that do not violate the decoding delay constraint.
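The contrast between independent (outage) transmissions and H-ARQ can be sketched with a crude model in which each cycle's supportable rate contributes additively toward decoding, as in incremental-redundancy schemes. The additive-information model, the rate sequence, and the reset-after-ACK behavior are simplifying assumptions for illustration only.

```python
# Sketch contrasting independent (outage) transmission with an
# idealized H-ARQ scheme in which unsuccessful transmissions still
# accumulate toward later decoding. Mutual information is crudely
# modeled as additive across cycles; names and rates are illustrative.

def outage_throughput(inst_rates, packet_rate):
    """Independent transmissions: a cycle succeeds only if the channel
    supports the full packet rate; failed cycles are wasted."""
    return sum(packet_rate for r in inst_rates if r >= packet_rate)

def harq_throughput(inst_rates, packet_rate):
    """Idealized H-ARQ: each cycle's supportable rate accumulates; the
    packet is acknowledged once the accumulated information covers it."""
    delivered, acc = 0.0, 0.0
    for r in inst_rates:
        acc += r                    # even "failed" cycles contribute
        if acc >= packet_rate:      # ACK: packet decoded
            delivered += packet_rate
            acc = 0.0               # start the next packet afresh
    return delivered

# Cycles with low instantaneous rates are rescued by better cycles,
# at the cost of decoding delay for bits sent in the weak cycles.
rates = [2.0, 1.0, 3.0, 1.0, 1.0]
harq = harq_throughput(rates, 3.0)         # 6.0: two packets delivered
independent = outage_throughput(rates, 3.0)  # 3.0: one packet delivered
```

In this toy run the H-ARQ scheme delivers twice the throughput of independent transmission over the same rate sequence, illustrating how accumulated redundancy converts would-be outages into delayed successes.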
In the systems of interest, however, involving MU-MIMO transmission with scheduling criteria such as proportional fair scheduling (PFS), a systematic design and optimization of the UT ARQ parameters is nontrivial. Indeed, varying the ARQ parameters of a given UT affects not only the rate/decoding-delay trade-offs of that UT but also those of all other UTs in the cluster. This is because a change in the ARQ parameters of a UT also affects the scheduling algorithm (e.g., the PFS weights in case a PFS criterion is used). Therefore, the ARQ parameters affect the rates and users scheduled, the effective system throughput, the scheduling activity fraction, the decoding delays, etc., of all UTs in the cluster.