The present invention relates to methods, systems and apparatus for transmitting data in mobile telecommunications systems.
Third and fourth generation mobile telecommunications systems, such as those based on the 3GPP defined UMTS and Long Term Evolution (LTE) architecture, are able to support more sophisticated services than the simple voice and messaging services offered by previous generations of mobile telecommunications systems.
For example, with the improved radio interface and enhanced data rates provided by LTE systems, a user is able to enjoy high data rate applications such as mobile video streaming and mobile video conferencing that would previously only have been available via a fixed line data connection. The demand to deploy third and fourth generation networks is therefore strong and the coverage area of these networks, i.e. geographic locations where access to the networks is possible, is expected to increase rapidly.
FIG. 1 provides a schematic diagram illustrating some basic functionality of a conventional mobile telecommunications network/system operating in accordance with LTE principles and which may be modified to implement embodiments of the invention as described further below. The various elements of FIG. 1 and their respective modes of operation are well-known and defined in the relevant standards administered by the 3GPP® body and also described in many books on the subject, for example, Holma H. and Toskala A [1].
The network includes a plurality of base stations 101 connected to a core network 102. Each base station provides a coverage area 103 (i.e. a cell) within which data can be communicated to and from terminal devices 104. Data is transmitted from base stations 101 to terminal devices 104 within their respective coverage areas 103 via a radio downlink. Data is transmitted from terminal devices 104 to the base stations 101 via a radio uplink. The core network 102 routes data to and from the terminal devices 104 via the respective base stations 101 and provides functions such as authentication, mobility management, charging and so on. Terminal devices may also be referred to as mobile stations, user equipment (UE), user terminal, mobile radio, and so forth. Base stations may also be referred to as transceiver stations/nodeBs/e-nodeBs, and so forth.
Mobile telecommunications systems such as those arranged in accordance with the 3GPP defined Long Term Evolution (LTE) architecture use an orthogonal frequency division multiplexing (OFDM) based interface for the radio downlink (so-called OFDMA) and a single-carrier frequency division multiple access based interface for the radio uplink (so-called SC-FDMA). FIG. 2 shows a schematic diagram illustrating an OFDM-based LTE downlink radio frame 201. The LTE downlink radio frame is transmitted from an LTE base station (known as an enhanced Node B) and lasts 10 ms. The downlink radio frame comprises ten subframes, each subframe lasting 1 ms. A primary synchronisation signal (PSS) and a secondary synchronisation signal (SSS) are transmitted in the first and sixth subframes of the LTE frame. A physical broadcast channel (PBCH) is transmitted in the first subframe of the LTE frame. The PSS, SSS and PBCH are used, for example, during camp-on procedures.
FIG. 3 is a schematic diagram of a grid which illustrates the structure of an example conventional downlink LTE subframe. The subframe comprises a predetermined number of symbols which are transmitted over a 1 ms period. Each symbol comprises a predetermined number of orthogonal sub-carriers distributed across the bandwidth of the downlink radio carrier.
The example subframe shown in FIG. 3 comprises 14 symbols and 1200 sub-carriers spread across a 20 MHz bandwidth and is the first subframe in a frame (hence it contains PBCH). The smallest allocation of user data for transmission in LTE is a resource block comprising twelve sub-carriers transmitted over one subframe. For clarity, in FIG. 3, each individual resource element is not shown; instead, each individual box in the subframe grid corresponds to twelve sub-carriers transmitted on one symbol.
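The dimensions described above can be tied together with simple arithmetic. The following sketch is purely illustrative (the variable names and structure are not part of the described system; the numeric values are taken from the example of FIG. 3):

```python
# Illustrative arithmetic for the example LTE downlink resource grid described
# above. Values: 10 ms frame, ten 1 ms subframes, 14 symbols per subframe,
# 1200 sub-carriers on a 20 MHz carrier, 12 sub-carriers per resource block.

FRAME_DURATION_MS = 10
SUBFRAMES_PER_FRAME = 10
SUBFRAME_DURATION_MS = FRAME_DURATION_MS / SUBFRAMES_PER_FRAME  # 1 ms

SYMBOLS_PER_SUBFRAME = 14           # normal cyclic prefix case
SUBCARRIERS_20MHZ = 1200            # example 20 MHz carrier of FIG. 3
SUBCARRIERS_PER_RESOURCE_BLOCK = 12

# Resource blocks available per subframe on the 20 MHz carrier
resource_blocks = SUBCARRIERS_20MHZ // SUBCARRIERS_PER_RESOURCE_BLOCK  # 100

# Individual resource elements (one symbol x one sub-carrier) per subframe
resource_elements = SYMBOLS_PER_SUBFRAME * SUBCARRIERS_20MHZ  # 16800
```

This shows why FIG. 3 groups resource elements into boxes of twelve sub-carriers: drawing all 16800 individual elements would not be legible.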
FIG. 3 shows in hatching resource allocations for four LTE terminals 340, 341, 342, 343. For example, the resource allocation 342 for a first LTE terminal (UE1) extends over five blocks of twelve sub-carriers (i.e. 60 sub-carriers), the resource allocation 343 for a second LTE terminal (UE2) extends over six blocks of twelve sub-carriers and so on.
Physical layer control information is transmitted in a control region 300 (indicated by dotted-shading in FIG. 3) of the subframe comprising the first n symbols of the subframe where n can vary between one and three symbols for channel bandwidths of 3 MHz or greater and where n can vary between two and four symbols for a channel bandwidth of 1.4 MHz. For the sake of providing a concrete example, the following description relates to host carriers with a channel bandwidth of 3 MHz or greater so the maximum value of n will be 3. The data transmitted in the control region 300 includes data transmitted on the physical downlink control channel (PDCCH), the physical control format indicator channel (PCFICH) and the physical HARQ indicator channel (PHICH).
PDCCH contains control data indicating which sub-carriers of the subframe have been allocated to specific LTE terminals. Thus, the PDCCH data transmitted in the control region 300 of the subframe shown in FIG. 3 would indicate that UE1 has been allocated the block of resources identified by reference numeral 342, that UE2 has been allocated the block of resources identified by reference numeral 343, and so on.
PCFICH contains control data indicating the size of the control region (i.e. between one and three symbols).
PHICH contains HARQ (Hybrid Automatic Repeat Request) data indicating whether or not previously transmitted uplink data has been successfully received by the network.
Symbols in a central band 310 of the time-frequency resource grid are used for the transmission of information including the primary synchronisation signal (PSS), the secondary synchronisation signal (SSS) and the physical broadcast channel (PBCH). This central band 310 is typically 72 sub-carriers wide (corresponding to a transmission bandwidth of 1.08 MHz). The PSS and SSS are synchronisation signals that once detected allow an LTE terminal device to achieve frame synchronisation and determine the physical layer cell identity of the enhanced Node B transmitting the downlink signal. The PBCH carries information about the cell, comprising a master information block (MIB) that includes parameters that LTE terminals use to properly access the cell. Data transmitted to individual LTE terminals on the physical downlink shared channel (PDSCH) can be transmitted in other resource elements of the subframe.
FIG. 3 also shows a region of PDSCH containing system information and extending over a bandwidth R344. A conventional LTE subframe will also include reference signals which are not shown in FIG. 3 in the interests of clarity.
The number of sub-carriers in an LTE channel can vary depending on the configuration of the transmission network. Typically this variation is from 72 sub-carriers contained within a 1.4 MHz channel bandwidth to 1200 sub-carriers contained within a 20 MHz channel bandwidth (as schematically shown in FIG. 3). As is known in the art, data transmitted on the PDCCH, PCFICH and PHICH is typically distributed on the sub-carriers across the entire bandwidth of the subframe to provide for frequency diversity.
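The relationship between channel bandwidth and sub-carrier count can be sketched as follows. The mapping below reflects the standard LTE channel bandwidth configurations (15 kHz sub-carrier spacing, twelve sub-carriers per resource block); the helper function and its name are illustrative only:

```python
# Standard LTE channel bandwidths and their usable sub-carrier counts,
# assuming 15 kHz sub-carrier spacing. The helper function is our own sketch.

SUBCARRIER_SPACING_HZ = 15_000
SUBCARRIERS_PER_RB = 12

# channel bandwidth (MHz) -> number of usable sub-carriers
SUBCARRIERS_BY_BANDWIDTH = {
    1.4: 72,     # 6 resource blocks
    3.0: 180,    # 15 resource blocks
    5.0: 300,    # 25 resource blocks
    10.0: 600,   # 50 resource blocks
    15.0: 900,   # 75 resource blocks
    20.0: 1200,  # 100 resource blocks
}

def occupied_bandwidth_hz(channel_bw_mhz: float) -> float:
    """Transmission bandwidth actually occupied by the data sub-carriers."""
    return SUBCARRIERS_BY_BANDWIDTH[channel_bw_mhz] * SUBCARRIER_SPACING_HZ
```

The same arithmetic explains the central band 310 of FIG. 3: 72 sub-carriers at 15 kHz spacing occupy 1.08 MHz, which is why PSS, SSS and PBCH fit within even the narrowest (1.4 MHz) channel configuration.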
Whereas FIGS. 2 and 3 relate to the downlink frame structure in a conventional LTE telecommunications system, a broadly similar frame structure is employed for the uplink in terms of how the available time and frequency resources are divided into time and frequency elements which are allocated to different channels, such as the PUCCH (physical uplink control channel) and PUSCH (physical uplink shared channel).
There are a number of different operating modes for telecommunications systems which derive from the two-way nature of communications between a base station and a terminal device. In particular, telecommunications systems may operate in a Time Division Duplex (TDD) mode or a Frequency Division Duplex (FDD) mode, and furthermore communications between a base station and a terminal device may be half-duplex or full-duplex.
A half-duplex mode of operation is one in which communications from the base station to the terminal device (downlink communications) and communications from the terminal device to the base station (uplink communications) are not made simultaneously. That is to say, the terminal device does not transmit and receive at the same time. The base station also does not simultaneously transmit and receive with respect to a given terminal device (although in principle a base station supporting half-duplex communications with individual terminal devices may transmit to one terminal device while simultaneously receiving from another terminal device).
A full-duplex mode of operation is one in which downlink and uplink communications associated with a particular terminal device may be made simultaneously. That is to say, the terminal device and base station are able to transmit to and receive from one another at the same time.
A TDD mode of operation is one in which downlink and uplink communications are made at different times using the same frequencies. A TDD mode of operation is thus a half-duplex mode.
An FDD mode of operation is one in which downlink and uplink communications are made using different frequencies. An FDD mode of operation may be half-duplex or full-duplex.
Various advantages and disadvantages associated with each of these different potential modes of operation are well known.
FIG. 4 schematically represents a particular issue that arises with half-duplex communications which can give rise to wasted transmission resources. FIG. 4 schematically represents two subframes in a telecommunications system supporting half-duplex communications between a base station and a terminal device. In this example it is assumed the telecommunications system is an LTE-compliant system, for example as shown in FIGS. 1 to 3. In FIG. 4, time extends from left to right and it is assumed a downlink subframe occurs between times T1 and T2 and an uplink subframe occurs between times T2 and T3. The times T1, T2 and T3 are the times of subframe boundaries according to the base station clock.
As schematically shown in FIG. 4, the downlink subframe comprises 14 symbols (the operating bandwidth is not significant here). The downlink subframe is represented twice in FIG. 4. The upper representation is marked BS:DL (base station downlink) and represents the downlink subframe as transmitted by the base station. This is properly registered with the subframe boundaries at T1 and T2. The lower representation of the downlink subframe is marked UE:DL (user equipment downlink) and represents the downlink subframe as received by the terminal device (user equipment). The downlink subframe as received by the terminal device is not registered properly with the subframe boundaries at T1 and T2 according to the base station clock. This is because of a propagation delay Δp corresponding to the time taken for the radio signals to reach the terminal device from the base station.
As schematically shown in FIG. 4, the uplink subframe also comprises 14 symbols (the operating bandwidth is again not significant here). The uplink subframe is also represented twice in FIG. 4. The lower representation is marked BS:UL (base station uplink) and represents the uplink subframe as received by the base station. In accordance with standard techniques, the telecommunications system is configured to operate such that the uplink subframe as received by the base station (BS:UL) is properly registered with the subframe boundaries at T2 and T3. Thus, so far as the base station is concerned, reception of the uplink subframe starts as soon as transmission of the downlink subframe is complete. To achieve this, it is necessary for the terminal device to begin transmission of the uplink subframe before T2 to allow for the uplink propagation delay. This is called timing advance. Thus the upper representation of the uplink subframe in FIG. 4 is marked UE:UL (user equipment uplink) and represents the uplink subframe as transmitted by the terminal device (user equipment). For the beginning of the uplink subframe to arrive at the base station at time T2, transmission by the terminal device starts at a time T2-Δp (it is assumed here the uplink propagation delay is the same as the downlink propagation delay).
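The timing relationships just described can be expressed numerically. The following sketch assumes, as in FIG. 4, a symmetric propagation delay on uplink and downlink; the function names are illustrative:

```python
# Sketch of the half-duplex timing relationships of FIG. 4 (symmetric
# propagation delay assumed, as in the description above).

SPEED_OF_LIGHT_M_S = 3e8  # free-space propagation speed, m/s

def propagation_delay_s(distance_m: float) -> float:
    """One-way propagation delay (the quantity labelled Δp in FIG. 4)."""
    return distance_m / SPEED_OF_LIGHT_M_S

def ue_uplink_start_s(t2_s: float, distance_m: float) -> float:
    """Time (base station clock) at which the terminal must begin uplink
    transmission so the subframe arrives at the base station at boundary T2,
    i.e. T2 - Δp."""
    return t2_s - propagation_delay_s(distance_m)

def ue_overlap_s(distance_m: float) -> float:
    """Overlap seen at the terminal between the end of the downlink subframe
    (UE:DL) and the start of the uplink subframe (UE:UL): twice Δp."""
    return 2 * propagation_delay_s(distance_m)
```

For example, a terminal 15 km from the base station sees a one-way propagation delay of 50 µs and hence an overlap of 100 µs between the downlink subframe it is receiving and the uplink subframe it must begin transmitting.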
As can be seen in FIG. 4, the downlink and uplink propagation delays mean that the end of the downlink subframe as seen by the terminal device is after the beginning of the uplink subframe as transmitted by the terminal device. Thus the terminal device sees an overlap of twice the propagation delay between the end of the downlink subframe (UE:DL) and the beginning of the uplink subframe (UE:UL). In a half-duplex mode of operation the terminal device cannot transmit and receive at the same time, and so the terminal device cannot receive during the overlap period when it has started transmitting the uplink subframe. What is more, it is generally not possible for a terminal device to switch instantaneously from reception to transmission. Because of this there will be a period of time between reception and transmission during which data cannot be received or sent. This switching period (Δs) is schematically shown by a black region 400 at the beginning of the representation of the uplink subframe as seen by the terminal device in FIG. 4. (It should be noted the various time periods in FIG. 4 are not necessarily shown to scale.)
The net result of the downlink propagation delay, the need for timing advance in the uplink, and the switching delay is a combined period Δt (=2Δp+Δs) during which a terminal device operating in half-duplex mode is unable to receive data at the end of a downlink subframe. This period is schematically represented in FIG. 4 by grey shading towards the end of the downlink subframe representations (BS:DL, UE:DL). To take account of this issue it is known for terminal devices to in effect puncture downlink subframes to introduce idle symbols during which no data is received by the terminal device. The number of idle symbols will depend on the magnitude of the switching and propagation delays. Typically, there will be one or two idle symbols. In the example shown in FIG. 4, two idle symbols are required, and these are schematically shown as containing a cross. Even for terminal devices having low switching times and which are relatively close to the base station (and hence subject to relatively short propagation delays), there will be at least one idle symbol. This represents a loss of around 7% of the available physical transmission resources.
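The number of idle symbols and the associated resource loss can be sketched as follows (an illustrative calculation under the assumptions stated above: 14 symbols per 1 ms subframe, and a minimum of one idle symbol):

```python
import math

# Approximate duration of one downlink symbol: 14 symbols per 1 ms subframe
SYMBOL_DURATION_S = 1e-3 / 14  # ~71.4 microseconds

def idle_symbols(prop_delay_s: float, switching_delay_s: float) -> int:
    """Number of punctured downlink symbols needed to cover the combined
    period dt = 2*Δp + Δs, with a minimum of one as noted in the text."""
    dt = 2 * prop_delay_s + switching_delay_s
    return max(1, math.ceil(dt / SYMBOL_DURATION_S))

def resource_loss_fraction(n_idle: int) -> float:
    """Fraction of a 14-symbol downlink subframe lost to idle symbols."""
    return n_idle / 14
```

With a 50 µs propagation delay and a 20 µs switching time, Δt = 120 µs, which exceeds one symbol duration and so requires two idle symbols, matching the example of FIG. 4; a single idle symbol corresponds to the loss of 1/14, i.e. around 7%, of the downlink resources.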
FIG. 4 shows the introduction of idle symbols at the end of the downlink subframe in accordance with established techniques. It will be appreciated however that idle symbols might instead be introduced at the beginning of an uplink subframe to allow for the inability of a terminal device to transmit and receive at the same time.
One way to avoid the need for idle symbols would be to restrict the scheduling of uplink and downlink subframes for terminal devices operating in half-duplex mode to ensure a particular terminal device was never scheduled for uplink in a subframe immediately following one in which the terminal device was scheduled for downlink. However, this reduces the maximum data rate that can be sustained for a given terminal device, and furthermore introduces complexity into the scheduling procedures resulting in reduced scheduling flexibility.
Accordingly, there is a need for improved techniques for addressing the above-identified issues with half-duplex operation in telecommunications systems.