Communications systems provide an infrastructure allowing different users to send and receive data across communication resources, for example dedicated frequency bands of cable or wireless transmission links. Digital data communication systems allow a resource to be shared between a plurality of users and enable multiplexed data traffic using multiple logical digital channels. However, resources have limited capacity due to limited available bandwidth, attenuation, noise, transmission delay, etc. In order to allow a maximum number of users to receive the maximum possible or a guaranteed quality of service (bandwidth, delay, maximum bit error rate, etc.), the overall system throughput should be maximised. To maximise system throughput, it is sometimes desirable to use channel condition to prioritise traffic flows. That is, a user in a favourable channel condition may be scheduled in preference to one in a less favourable channel condition, or may have more traffic scheduled out. For example, if two users are at the same static priority (static priority being the priority decided by pre-specified parameters, such as Quality of Service (QoS) requirements, channel capacity, and so on) but use different modulation schemes, one on QAM64 (Quadrature Amplitude Modulation with 64 constellation points, i.e. 6 bits per symbol) and the other on BPSK (Binary Phase Shift Keying, 1 bit per symbol), a better throughput can be achieved for the QAM64 user by either scheduling that user first (channel condition dependent prioritisation) or granting the user a larger traffic allowance (channel condition dependent bandwidth allocation). The term “QoS” refers not only to the actual quality of the provided service, but also to a resource reservation control mechanism allowing a user to obtain a certain service quality from the system.
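The throughput gap between the two modulation schemes above follows from the bits carried per symbol. A minimal sketch (Python, for illustration only; the function name is hypothetical) makes the arithmetic explicit: QAM64 carries 6 bits per symbol while BPSK carries 1, so over the same physical resource the QAM64 user can move six times the raw payload.

```python
import math

def bits_per_symbol(constellation_points: int) -> int:
    """Bits carried per symbol: log2 of the number of constellation points."""
    return int(math.log2(constellation_points))

# QAM64 has 64 constellation points; BPSK has 2.
print(bits_per_symbol(64))  # QAM64 -> 6 bits per symbol
print(bits_per_symbol(2))   # BPSK  -> 1 bit per symbol
```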
For non-GBR traffic (GBR: guaranteed bit rate), channel condition dependent prioritisation increases system throughput because traffic flows with poor channel condition can be de-prioritised. This does not affect fairness, since only a best-effort service is offered. However, for GBR traffic, such de-prioritisation may conflict with the fairness requirement. Channel condition dependent bandwidth allocation, on the other hand, appears to balance prioritisation and fairness for GBR traffic.
For Guaranteed Bit Rate (GBR) traffic, customers pay for QoS (bit rate, latency, jitter, etc.) and have no knowledge of, or simply do not care about, channel condition. De-prioritising a customer in a less efficient channel condition may not only affect the average bit rate but also has an impact on latency and jitter. The impact on GBR traffic with a latency requirement is explained with the following example. Users U1 and U2 pay the same QoS premium and, in a particular transmission time interval (TTI), U1 uses quadrature phase shift keying (QPSK) due to a poor channel condition while U2 uses QAM64 owing to a good channel condition. It is assumed that they have the same amount of traffic to transport, say 100 bytes. It is also assumed that the capacity of a Resource Block (RB) is as follows: RB[QPSK]=20 bytes, RB[QAM64]=80 bytes. To schedule a 100-byte SDU (service data unit) out, say a 110-byte MAC PDU (MAC: media access control, PDU: protocol data unit) has to be built. As a result, to hold a 110-byte MAC PDU, U1 needs 6 RBs while U2 needs only 2. Under channel condition dependent prioritisation, U2 shall be scheduled ahead of U1 even if it is U1's turn under “static” prioritisation. This decision increases system throughput but it may be unfair to U1 for several reasons. The first is that U1 pays the same QoS premium for the GBR service as U2 does and expects the same service. The second is that U1's traffic may incur further delays and, in turn, further violations of its latency and jitter requirements. The third is that U1's GBR may not be guaranteed because of the delay. One may argue that by serving U2 first there is still a chance that U1 is served in the same TTI. One may also point out that the average GBR can be honoured by increasing U1's bit rate credit in the next TTI, and that the latency and jitter requirements can be achieved by constraining channel condition dependent prioritisation.
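The RB arithmetic in the example above can be sketched as follows (a hypothetical Python helper; the per-RB capacities are the figures assumed in the example, not standard values):

```python
import math

# Assumed per-RB capacities from the example above, in bytes per Resource Block.
RB_CAPACITY = {"QPSK": 20, "QAM64": 80}

def rbs_needed(pdu_bytes: int, mcs: str) -> int:
    """Number of Resource Blocks needed to hold a MAC PDU of the given size."""
    return math.ceil(pdu_bytes / RB_CAPACITY[mcs])

# A 110-byte MAC PDU built around the 100-byte SDU:
print(rbs_needed(110, "QPSK"))   # U1 on QPSK  -> 6 RBs
print(rbs_needed(110, "QAM64"))  # U2 on QAM64 -> 2 RBs
```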
However, the main point here is that if U1 gets delayed, it is unfair on several accounts, while if U1 does not get delayed, channel condition dependent prioritisation is not needed, since U1 gets served in the TTI anyway. Postponing the service to U1 within the same TTI does not increase average system throughput but makes scheduling more complicated.
For generic GBR traffic, if there is unlimited buffer space, then the traffic with a poor channel condition can be postponed. However, if the buffer size is limited, then GBR traffic with an arrival bit rate less than the GBR will not be guaranteed if a less efficient MCS (Modulation and Coding Scheme, which is selected based on channel condition) is applied. This can be shown with the following numerical example. Say UE1 (UE: User Equipment, such as a mobile phone) is far away from a base station and uses QPSK. Its GBR is, say, converted into 12 bytes per TTI (96 bits per ms=96 kbps). UE1 of course expects that a 10-byte per TTI (80 kbps) traffic flow will get through. If the system provides UE1 with unlimited buffer space, the 10-byte per TTI traffic will eventually get through. However, if, say, the system provides a maximum buffer of 100K bytes for UE1 and UE1's traffic is not scheduled out, the buffer will be full in 10 seconds and packets will be dropped. This contradicts the GBR service.
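The buffer-overflow arithmetic can be checked with a short sketch (Python; it assumes, as in the worst case above, that UE1's traffic is never scheduled out, and uses the figures from the example):

```python
# Assumed figures from the example: traffic arrives at 10 bytes per TTI
# (1 TTI = 1 ms) and UE1's buffer is capped at 100K bytes. If channel
# condition dependent prioritisation never schedules UE1's traffic out,
# count the TTIs until the buffer can no longer accept the next arrival.
ARRIVAL_BYTES_PER_TTI = 10
BUFFER_LIMIT_BYTES = 100_000

buffered = 0
tti = 0
while buffered + ARRIVAL_BYTES_PER_TTI <= BUFFER_LIMIT_BYTES:
    buffered += ARRIVAL_BYTES_PER_TTI
    tti += 1

print(tti)           # 10000 TTIs until the first packet is dropped
print(tti / 1000.0)  # i.e. 10.0 seconds, as in the example
```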
For non-GBR traffic, channel condition dependent prioritisation can be used to increase system throughput. However, for GBR traffic, channel condition dependent prioritisation may cause issues with fairness, latency requirements, buffer management, etc.
As another example of the need for fair scheduling, consider two static users of a wireless communications system, A and B, who have the same pre-assigned QoS, but A is close to a base station while B is far away from it, so that B's transmission encounters a longer delay, more noise, greater attenuation, etc. If, in order to maximise overall system throughput, user A is therefore always assigned a higher priority over user B, this is unfair to user B, since both users are static and have the same QoS (and the customers may pay the same price).