The rapid advancement of cellular networks and mobile devices has led to major improvements in the services provided to cellular network users. This has resulted in the rate of adoption of mobile devices growing exponentially. Due to the nature of newly provided services such as web access, and to the increased number of users, the demand for higher data rates has also grown dramatically. Providing such high data rates has become one of the main challenges for cellular service providers.
At present, the evolution of most 4G wireless networks, such as Long Term Evolution (LTE) and LTE-Advanced (LTE-A), is being driven by this demand for higher capacity and peak throughput. One of the challenges in providing high-speed data in high-capacity mobile networks is the prevalence of low-data-rate cell-edge users, which tend to be interference-limited, as well as coverage gaps for indoor users.
The scarcity of the radio spectrum is a major reason for the inability to provide higher data rates. As most of the licensed frequency bands are already allocated, it is very difficult to allocate sufficient radio resources to the increasing number of users. As such, there is an ongoing need for new approaches that utilize the radio spectrum more efficiently.
The demand for higher data transmission rates, reliable connections and uniform quality of service across the cell area in mobile communication systems continues to increase; for instance, the growth in mobile/cellular data traffic between the first quarter of 2013 and the first quarter of 2014 is reported to be about 65 percent. In order to meet this challenge, a reuse of radio resources in every cell is needed. Such frequency reuse systems, however, experience Inter-Cell Interference (ICI) that limits user throughput and particularly affects cell-edge users.
Coordinated Multi-Point (CoMP) transmission/reception, also known as Multipoint Cooperative Communication (MCC) technology, is an effective technique for improving network performance by boosting the throughput of cell-edge users. CoMP can be defined as a method in which participating basestations (BSs) coordinate the handling of interference and scheduling. In CoMP-enabled systems, basestations are grouped into cooperating clusters or sets, each of which contains a subset of the network basestations. The basestations of each cluster exchange information and jointly process signals by forming virtual antenna arrays distributed in space. Furthermore, multiple User Equipments (UEs) can simultaneously receive their signals from one or multiple transmission points in a coordinated or joint-processing manner. Generally, this technique is an effective way of managing ICI. For ICI management, UEs need to measure and report so-called channel-state information (CSI) to the network so that the scheduler can perform adaptive transmissions and appropriate Radio Resource Management (RRM) on that basis. However, CSI reporting generally increases the radio signaling and infrastructure overhead as well as the latency in the network, which is well known to decrease network throughput. The nature and amount of overhead depend largely on the architecture of the CoMP scheme used.
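The grouping of basestations into cooperating sets described above can be sketched as follows. This is a minimal illustration assuming fixed-size, static clusters; practical schemes may form clusters dynamically based on channel conditions, and the function name is hypothetical.

```python
# Minimal sketch of partitioning basestations into CoMP cooperating sets.
# A fixed cluster size is an illustrative assumption; real deployments
# may cluster dynamically based on channel conditions.

def form_comp_clusters(bs_ids, cluster_size):
    """Partition a list of basestation IDs into fixed-size clusters."""
    return [bs_ids[i:i + cluster_size]
            for i in range(0, len(bs_ids), cluster_size)]
```

For example, seven basestations with a cluster size of three yield the cooperating sets {1, 2, 3}, {4, 5, 6} and {7}.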
There are two broad categories of known CoMP architectures, namely centralized and distributed, each typically using a different process to handle CSI feedback. FIG. 1A shows an example of a conventional centralized CoMP architecture 10 in which a Central Unit (CU) 12 uses CSI feedback to make scheduling decisions for basestations (BSs) 20, 22, 24, which form a set of CoMP cooperating nodes for participating UEs, e.g. UEs 30, 32, 34 in cells 14, 16, 18. In this example, each participating UE 30, 32, 34 estimates the CSI associated with each of the basestations 20, 22, 24 in the CoMP set and sends the estimated CSI to its respective serving basestation 20, 22, 24. The basestations 20, 22, 24 in turn forward the (local) CSI reports received to the CU 12. Finally, the CU calculates the (global) CSI and, on that basis, makes scheduling decisions for the participating UEs 30, 32, 34, which are then communicated to the basestations 20, 22, 24. Unfortunately, this centralized framework suffers from signaling and infrastructure overhead as well as increased network latency.
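The centralized flow described above (UE to serving basestation, basestation to CU, CU scheduling decision) can be sketched as follows; all class and method names are illustrative, not from any real stack, and the CSI estimate is a toy placeholder.

```python
# Hypothetical sketch of the centralized CoMP CSI flow of FIG. 1A:
# UE -> serving basestation -> Central Unit -> scheduling decision.
# All class/method names are illustrative, not from any real stack.

from dataclasses import dataclass, field

@dataclass
class CentralUnit:
    global_csi: dict = field(default_factory=dict)  # ue_id -> {bs_id: csi}

    def collect(self, ue_id, csi):
        # Local reports forwarded by the basestations build the global CSI.
        self.global_csi[ue_id] = csi

    def schedule(self):
        # Only the CU holds global CSI; it picks the best transmission
        # point per UE and pushes the decision back to the basestations.
        return {ue_id: max(csi, key=csi.get)
                for ue_id, csi in self.global_csi.items()}

@dataclass
class BaseStation:
    bs_id: int
    central_unit: CentralUnit

    def receive_local_csi(self, ue_id, csi):
        # The serving basestation forwards the (local) report to the CU.
        self.central_unit.collect(ue_id, csi)

@dataclass
class UE:
    ue_id: int
    serving_bs: BaseStation

    def estimate_csi(self, bs):
        # Placeholder channel estimate (a toy distance-like proxy).
        return 1.0 / (1 + abs(self.ue_id - bs.bs_id))

    def report_csi(self, comp_set):
        # The UE estimates CSI toward every basestation in the CoMP set
        # but reports only to its serving basestation.
        csi = {bs.bs_id: self.estimate_csi(bs) for bs in comp_set}
        self.serving_bs.receive_local_csi(self.ue_id, csi)
```

Note that every report traverses two hops (UE to basestation, basestation to CU) before any scheduling decision can be made, which is the source of the latency this architecture suffers from.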
FIG. 1B shows an example of a conventional distributed CoMP architecture 50 in which the coordinated basestations 20, 22, 24 exchange the locally received CSI over a fully meshed signaling network of interfaces connecting the basestations 20, 22, 24 (e.g. X2 interfaces). As with the example of FIG. 1A, the participating UEs 30, 32, 34 estimate the CSI related to each basestation 20, 22, 24 in the CoMP set and send the information back to their respective serving basestation 20, 22, 24 so that it can be distributed to the other cooperating basestations 20, 22, 24 in the CoMP set. Based on the CSI received locally and from the other cooperating basestations 20, 22, 24, the basestations 20, 22, 24 schedule their resources independently. As can be seen, the decentralized architecture of FIG. 1B requires the sharing of local CSI feedback among participating basestations and, as such, increases the feedback overhead on the X2 interface. This in turn has a negative impact on network latency. The architecture is also more sensitive to errors on the X2 links between the eNBs, since error patterns can differ across the different X2 links between basestations. This can cause further performance degradation compared to a centralized architecture.
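The distributed exchange described above can be sketched as follows; the X2 link is modeled as a direct method call, and all names are illustrative rather than taken from any real stack.

```python
# Hypothetical sketch of the distributed CoMP CSI exchange of FIG. 1B.
# The X2 link is modeled as a direct method call; names are illustrative.

from dataclasses import dataclass, field

@dataclass
class BaseStation:
    bs_id: int
    peers: list = field(default_factory=list)     # fully meshed X2 neighbors
    csi_pool: dict = field(default_factory=dict)  # ue_id -> {bs_id: csi}

    def receive_local_csi(self, ue_id, csi):
        # A report from a served UE is stored locally ...
        self.csi_pool[ue_id] = csi
        # ... and distributed over X2 to every cooperating basestation,
        # so each report crosses N-1 links: this is the X2 feedback
        # overhead the architecture suffers from.
        for peer in self.peers:
            peer.receive_x2_csi(ue_id, csi)

    def receive_x2_csi(self, ue_id, csi):
        self.csi_pool[ue_id] = csi

    def schedule(self):
        # Each basestation schedules independently from the union of
        # local and X2-received CSI (here: best point per known UE).
        return {ue_id: max(csi, key=csi.get)
                for ue_id, csi in self.csi_pool.items()}
```

In contrast to the centralized sketch, there is no single node with authoritative global CSI: each basestation decides on its own copy of the pool, which is why differing error patterns on individual X2 links can leave the cooperating basestations with inconsistent views.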
Two major challenges of the above architectures are latency and overhead, which are the main barriers to achieving efficient CoMP communications. Generally, latency is inversely related to the throughput of the network. In coordinated schemes such as CoMP, if the latency of the network is greater than the CSI feedback periodicity, the scheduler may receive backdated (i.e. stale) CSI. This in turn can adversely affect throughput. Table 1 illustrates an example of how latency may affect the throughput of a network.
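The staleness condition described above — CSI becomes backdated once the network latency exceeds the feedback periodicity — can be illustrated with a simple age check; the function and its field layout are hypothetical.

```python
# Illustrative sketch of the staleness condition: a CSI report is
# considered backdated (stale) once its age exceeds the feedback
# periodicity. Function and field names are hypothetical.

def split_fresh_stale(reports, now_ms, feedback_period_ms):
    """Separate CSI reports into fresh and stale by age.

    reports maps ue_id -> (report_time_ms, csi).
    """
    fresh, stale = {}, {}
    for ue_id, (report_time_ms, csi) in reports.items():
        age_ms = now_ms - report_time_ms
        (fresh if age_ms <= feedback_period_ms else stale)[ue_id] = csi
    return fresh, stale
```

A scheduler forced to act on the stale partition is working from outdated channel conditions, which is the mechanism by which latency degrades throughput.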
TABLE 1

Delay            5 ms    1 ms    200 μs
Throughput loss  20%     5%      1%
According to the information in Table 1, reducing the latency by only 1 ms can generally improve the throughput of a network by 5%. Low latency is important not only for maintaining the quality of user experience of services such as social, machine-to-machine and real-time services, but also for meeting the ever-increasing capacity expectations of future wireless network architectures currently being developed, for which latency budgets continue to shrink. Therefore, it is desirable to reduce the latency associated with or caused by CSI reporting in order to improve network throughput and efficiency.
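The effect of latency on throughput tabulated above can be applied numerically as follows. Only the three (delay, loss) pairs come from Table 1; the lookup function itself is an assumption made for illustration.

```python
# Illustration of the Table 1 delay -> throughput-loss relation.
# Only the three (delay, loss) pairs come from the table; the lookup
# function itself is an assumption made for illustration.

TABLE_1_LOSS = {5e-3: 0.20, 1e-3: 0.05, 200e-6: 0.01}  # delay [s] -> loss

def effective_throughput(nominal_mbps, delay_s):
    """Apply the Table 1 loss factor for one of the tabulated delays."""
    return nominal_mbps * (1.0 - TABLE_1_LOSS[delay_s])
```

For example, a nominal 100 Mb/s link loses 20 Mb/s of throughput at 5 ms delay but only 1 Mb/s at 200 μs, which quantifies why shrinking latency budgets matter.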