To distribute communication resources between downlink (DL) and uplink (UL) transmission, some wireless communication systems partition the time dimension resources and assign different time dimension resource elements to uplink and downlink transmission respectively. One example of such an approach is the Time Division Duplex (TDD) operation of the Third Generation Partnership Project (3GPP) standard Universal Mobile Telecommunications System - Long Term Evolution (UMTS LTE).
In TDD, transmission in the uplink and in the downlink is typically performed using the same carrier frequency (i.e. uplink and downlink share the same carrier frequency). This is in contrast to Frequency Division Duplex (FDD) operation where different carriers are used for uplink and downlink transmission respectively. Thus, one advantage of TDD compared to FDD is that only one carrier is needed for communication.
Furthermore, TDD is (at least in theory) a flexible approach, since the allocation of the available time dimension resources (e.g. in terms of subframes of a UMTS LTE system) to uplink and downlink, respectively, may be adapted to the current situation. For example, the allocation of time dimension resources may be adapted to the current traffic need such that, compared to a default allocation, more time dimension resources are allocated to uplink transmission if a higher than normal amount of data needs to be transmitted in the uplink, and vice versa. In UMTS LTE, one example of such flexible allocation of time dimension resources is a dynamically reconfigurable UL/DL allocation based on instantaneous traffic.
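The seven UL/DL subframe configurations standardized for UMTS LTE TDD (3GPP TS 36.211) differ in how many of the ten subframes per radio frame are uplink. A minimal sketch of a traffic-driven reconfiguration is given below; the configuration table reflects the standard, while the selection heuristic `pick_config` is a hypothetical illustration, not a standardized procedure:

```python
# LTE TDD UL/DL configurations 0-6 (D = downlink, S = special, U = uplink
# subframe), per 3GPP TS 36.211; one string is one 10-subframe radio frame.
LTE_TDD_CONFIGS = {
    0: "DSUUUDSUUU",
    1: "DSUUDDSUUD",
    2: "DSUDDDSUDD",
    3: "DSUUUDDDDD",
    4: "DSUUDDDDDD",
    5: "DSUDDDDDDD",
    6: "DSUUUDSUUD",
}

def uplink_fraction(pattern: str) -> float:
    """Fraction of the subframes in a radio frame allocated to uplink."""
    return pattern.count("U") / len(pattern)

def pick_config(ul_traffic: float, dl_traffic: float) -> int:
    """Pick the configuration whose uplink share best matches the
    instantaneous uplink share of the offered traffic.

    Hypothetical selection heuristic for illustration only.
    """
    target = ul_traffic / (ul_traffic + dl_traffic)
    return min(LTE_TDD_CONFIGS,
               key=lambda c: abs(uplink_fraction(LTE_TDD_CONFIGS[c]) - target))

# Uplink-heavy traffic favours configuration 0 (6 of 10 subframes uplink);
# downlink-heavy traffic favours configuration 5 (1 of 10 subframes uplink).
```

The sketch shows the essence of dynamically reconfigurable allocation: the network re-evaluates the traffic mix and may switch configuration from one frame to the next.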
Flexible allocation of time dimension resources may, however, result in some difficulties in a practical system implementation. FIG. 1 is a schematic drawing illustrating one such potential problem.
In FIG. 1, a first wireless communication device 101 communicates with a network node 111 of a first cellular communication network and a second wireless communication device 102 communicates with a network node 112 of a second cellular communication network. The first and second cellular communication networks may be the same cellular communication network or different cellular communication networks. If allocation of time dimension resources results in an uplink transmission 121 by the wireless communication device 101 being executed simultaneously as a downlink transmission 122 by the network node 112, there is a risk that interference 141 from the first wireless communication device 101 (caused by the uplink transmission 121) reaches the second wireless communication device 102 and (more or less severely) interferes with reception of the downlink transmission 122 at the second wireless communication device 102. The area 131 marks a “dead zone” or “coverage hole” where downlink reception is severely impaired by the uplink transmission 121 from the first wireless communication device 101.
For example, in practical TDD deployments according to UMTS LTE, the possibility to use different UL/DL subframe allocation patterns in different cells is rather limited. This is the case both for different cells using the same carrier frequency and for different cells using adjacent carrier frequencies. The limitation is (at least partly) due to the large dynamics in a communication system, where the power level of a received signal may be as low as approximately −100 dBm while the power level of a transmitted signal may be above e.g. 20 dBm (i.e. a power difference or dynamic range of 120 dB). Therefore, if one device transmits with a high power level at the same time as another device receives a signal with a low power level, the power level of the interference experienced at the receiving device may be (up to) 120 dB larger than the power level of the desired signal, considering that the devices may be located close to each other.
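The dynamic-range figure above can be made concrete with a short calculation. This is a sketch using the example numbers from the text; the 0 dB path loss is a deliberate worst-case assumption for devices located very close to each other:

```python
# Worst-case co-channel power difference, using the example figures above.
rx_signal_dbm = -100.0   # desired received signal near the sensitivity floor
tx_power_dbm = 20.0      # transmit power of the nearby interfering device
path_loss_db = 0.0       # worst case: negligible path loss between the devices

interference_dbm = tx_power_dbm - path_loss_db
power_difference_db = interference_dbm - rx_signal_dbm
# power_difference_db is 120.0: the interference may be up to 120 dB
# above the desired signal at the receiving device.
```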
Assuming ideal transceivers, it would be possible to deploy different UL/DL allocation patterns for different cells using adjacent TDD carriers. However, due to real-world transceiver imperfections (e.g. non-linear elements), there will be leakage of the uplink signal transmitted on one of the carriers into the spectrum of the signal to be received on the other (possibly adjacent) carrier. Thus, interference may typically be experienced also in adjacent channels. The UMTS LTE specification includes requirements that adjacent channel leakage should be suppressed by 30-40 dB for a User Equipment (UE). Hence, the adjacent channel interference may be (up to) 120−30=90 dB above the desired signal (the in-band power difference of 120 dB is reduced to an 80-90 dB adjacent channel power difference if the devices are located close to each other and, thus, the path loss between them is low).
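The adjacent-channel case follows from the same worst-case figure, reduced by the adjacent channel leakage suppression quoted in the text (30-40 dB). A sketch of the arithmetic:

```python
# Adjacent-channel interference: the 120 dB in-band worst case, reduced by
# the adjacent channel leakage suppression of the transmitting device.
IN_BAND_DIFFERENCE_DB = 120.0  # worst-case co-channel power difference

def adjacent_channel_difference(leakage_suppression_db: float) -> float:
    """Worst-case adjacent-channel interference-to-signal difference in dB."""
    return IN_BAND_DIFFERENCE_DB - leakage_suppression_db

# 30 dB suppression leaves a 90 dB difference; 40 dB leaves 80 dB,
# i.e. the 80-90 dB adjacent channel range stated in the text.
```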
One way of attempting to avoid the situation illustrated in FIG. 1 is to align the time dimension resource allocation patterns of the base stations, which typically makes the allocation less flexible (or not flexible at all).
Another way to attempt to avoid the situation illustrated in FIG. 1 may be coordination between the network nodes of the first and second cellular communication networks, which may not be possible if the network nodes do not have a suitable connection to each other (e.g. if they belong to different operators). Even if coordination is possible, the allocating network node may lack information needed to perform the coordination efficiently (e.g. which devices are in the vicinity of the first wireless communication device 101). In such situations, the flexible allocation may have to be based on a worst case scenario (e.g. not allocating extra uplink slots to the first device if one or more active devices are present anywhere in the area covered by an adjacent network node), which typically makes the allocation overly restrictive and less flexible.
WO 2009/063001 A2 discloses adaptation of allocation of up-link and down-link subframes in wireless communication systems. A control unit at a base station may detect particular problem scenarios and determine that interference between two (or more) particular mobile terminals has occurred or is likely. In one example, by comparing scheduling information, time alignment values, signal quality reports, and the like, the base station control unit may determine that two half-duplex mobile terminals connected to the serving base station are reporting similar SIR values and have similar timing alignment such that a transmission for one terminal coincides with reception at another terminal. In this case, a new uplink/downlink subframe allocation pattern is sent to at least one of the mobile terminals. This approach is only possible for terminals connected to the same serving base station and when the control unit has access to the scheduling information, etc. of both terminals.
Thus, flexible allocation of time dimension resources may result in a situation where an uplink transmission by a first wireless communication device in a first cellular communication network causes interference at a second wireless communication device during downlink reception by the second wireless communication device in a second cellular communication network. In such a situation, there is a risk that the flexible allocation of time dimension resources disturbs the communication in the second cellular communication network.
Such a situation may arise, for example, when the first wireless communication device performs uplink transmission in a time dimension resource that is normally allocated for downlink transmission, which results in the allocation patterns of the base stations being non-aligned.
An alternative or additional example of the above situation arising is when the time dimension resource allocation patterns of the base stations communicating, respectively, with the first and second wireless communication devices are not coordinated. This may be the case, for example, if the first and second cellular communication networks belong to different operators. In the worst case, the interference may then occur on an adjacent channel.
Yet another alternative or additional example of the above situation arising is when an allocating network node (e.g. the network node 111 of FIG. 1) has no information regarding the distances between a device to be allocated extra uplink resources (e.g. the first wireless communication device 101 of FIG. 1) and other devices that may be interfered with by transmissions using the extra uplink resources. This may be the case, for example, if the necessary system information cannot be exchanged between the network nodes of the first and second cellular communication networks. Therefore, even if there is some allocation pattern coordination between network nodes, such coordination may be ineffective due to the lack of the distance information above.
Therefore, there is a need for methods and devices that enable flexible time dimension resource allocation while managing potential interference caused by the flexibility.