LTE (Long Term Evolution) is the next step in cellular Third-Generation (3G) systems, representing an evolution of previous mobile communications standards such as the Universal Mobile Telecommunication System (UMTS) and the Global System for Mobile Communications (GSM). It is a Third Generation Partnership Project (3GPP) standard that provides throughputs of up to 50 Mbps in the uplink and up to 100 Mbps in the downlink. It uses scalable bandwidths from 1.4 to 20 MHz to suit the needs of network operators with different bandwidth allocations. LTE is also expected to improve spectral efficiency in networks, allowing carriers to provide more data and voice services over a given bandwidth. To this end, LTE uses Orthogonal Frequency-Division Multiple Access (OFDMA), a proven access technique, based on Orthogonal Frequency-Division Multiplexing (OFDM), for efficient user and data multiplexing in the frequency domain. Other wireless standards such as WiFi (IEEE 802.11) and WiMAX (IEEE 802.16) also employ OFDM techniques.
One of the advantages of OFDM is its ability to resolve the frequency components of the received signal. This frequency resolution allows the receiver to determine the signal to interference and noise ratio (SINR) corresponding to the different frequencies of interest, or subcarriers. When link adaptation is employed in the system, the receiver exploits this set of SINR values to derive the most suitable modulation and coding format, which it can report to the transmitter so that transmissions are optimized for the most suitable operating point.
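The link adaptation loop described above can be sketched as follows. This is a simplified illustration, not an implementation of any 3GPP procedure: the SINR thresholds and MCS labels are hypothetical, and the mean SINR is used as a crude link-quality summary in place of the more refined effective-SINR mappings real receivers employ.

```python
# Illustrative link adaptation: map per-subcarrier SINR values to a
# single modulation and coding scheme (MCS). Thresholds are made up
# for illustration; they do not come from any standard.

def select_mcs(subcarrier_sinrs_db):
    """Pick the least robust MCS whose SINR threshold the link still meets."""
    mean_sinr = sum(subcarrier_sinrs_db) / len(subcarrier_sinrs_db)
    # (threshold in dB, MCS label), ordered from most to least robust.
    mcs_table = [
        (1.0, "QPSK 1/3"),
        (7.0, "16QAM 1/2"),
        (13.0, "64QAM 2/3"),
        (19.0, "64QAM 5/6"),
    ]
    chosen = "no transmission"
    for threshold, label in mcs_table:
        if mean_sinr >= threshold:
            chosen = label
    return chosen
```

The label returned by such a routine corresponds to the "modulation and coding format" that the receiver would report back to the transmitter.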
On the other hand, error detection and correction in received blocks (for example, in received packets) are long-standing techniques that have made remarkable progress in the last decades. Error detection codes can detect that an error has occurred in a packet with high reliability, at the cost of some overhead usually appended at the end of the packet. Forward Error Correction (FEC) codes can sometimes detect errors but, more importantly, they are able to correct them to a certain extent. When combined, error correction and detection represent a critical part of any communications system, especially wireless communications, which are prone to severe impairments from the channel.
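As a concrete example of the error detection scheme just described, the sketch below appends a CRC checksum to a payload and recomputes it at the receiver; a mismatch flags the packet as erroneous. The polynomial 0x07 is a common CRC-8 choice, used here purely for illustration.

```python
# Minimal sketch of CRC-based error detection: the sender appends a
# checksum computed over the payload; the receiver recomputes it and
# treats any mismatch as a packet error.

def crc8(data: bytes, poly: int = 0x07) -> int:
    """Bitwise CRC-8 over the payload (illustrative polynomial 0x07)."""
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 0x80:
                crc = ((crc << 1) ^ poly) & 0xFF
            else:
                crc = (crc << 1) & 0xFF
    return crc

def packet_is_valid(payload: bytes, appended_crc: int) -> bool:
    """Receiver-side check: recompute the CRC and compare."""
    return crc8(payload) == appended_crc
```

The appended byte is the overhead the text refers to: negligible for large packets, but a significant fraction of very small ones.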
However, error detection and correction involve a significant penalty in terms of packet overhead, especially when the packets are very small. As an example, typical error correction codes such as rate-⅓ convolutional or turbo coding introduce an overhead in parity bits of more than double the original size of the information. Parity bits can be selectively pruned at a rate-matching stage, but this in turn reduces the effectiveness of the coding scheme. Error detection codes such as the Cyclic Redundancy Check (CRC) require a number of appended bits that may represent a significant fraction of the information packet length (typical CRC lengths are 8, 16, 24 or 32 bits). Hence, if the packet length is significantly reduced, the overhead caused by the detection code may become unacceptable. For this reason, applications with very small packet sizes (like some machine-type applications) may forgo error detection codes altogether (because of the packet overhead due to the appended bits), leaving the receiver without the possibility of triggering any action in response to a packet error.
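The overhead arithmetic above can be made concrete with a small calculation, sketched here under the simplifying assumption that the CRC is appended before coding and no rate matching is applied (the function name and parameters are illustrative, not from any standard):

```python
# Illustrative overhead arithmetic for small coded packets. A rate-1/3
# code emits two parity bits per protected bit, and a CRC appends a
# fixed number of bits regardless of payload size, so its relative
# cost grows as packets shrink.

def coded_packet_overhead(info_bits, code_rate=1/3, crc_bits=24):
    """Return (total transmitted bits, overhead fraction)."""
    protected_bits = info_bits + crc_bits      # CRC appended before coding
    total_bits = round(protected_bits / code_rate)
    overhead_fraction = (total_bits - info_bits) / total_bits
    return total_bits, overhead_fraction
```

For a 40-bit machine-type payload with a 24-bit CRC and rate-⅓ coding, 192 bits go over the air and roughly 79% of them are overhead, which illustrates why such applications may forgo these codes.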
Detecting (or predicting) errors in a received block can be very advantageous for a number of reasons. In uncoded systems, like some machine-type applications without any kind of error detection or correction codes, if packets with errors are detected (or predicted), a prompt response can be triggered from the network, such as a request for retransmission or a similar action, without having to wait for the application layer to react to a missing packet. In all cases, early detection of an erroneous packet can help reduce the overall latency of the system, but error detection codes incur large overheads that are unacceptable when applied to very small packets.
Therefore, some applications (such as machine-type applications) with very small packet sizes can be severely degraded by the introduction of traditional error correction/detection codes but, at the same time, if errors are not detected, the overall latency of the system is degraded. Introducing error correction codes is in general justified by the savings they can bring in signal to noise ratio, but error detection codes are hardly justified when a significant part of the payload must be devoted to CRC or parity bits.
There are partial solutions to this problem involving simple retransmissions of the same information. However, this results in a severe penalty in terms of spectral efficiency and delay, without providing any means to actually check whether the received block is correct. Other solutions simply add a suitable CRC field, as in Long Term Evolution (LTE), but, as previously explained, this impairs the spectral efficiency and is not really suitable for very small packet sizes.
A different, although related, set of solutions deals with block error rate (BLER) prediction, which is quite different from block error prediction. Block error rate prediction deals with obtaining suitable average block error rates for a given received channel, which is usually characterized in the form of a signal to interference and noise ratio (SINR) or a set of SINR indications (e.g. one for each of the different frequencies of interest). These techniques yield a BLER estimate that may be useful for link adaptation or system-level simulations, but they are unable to actually predict whether a received packet has errors or not: all they can do is estimate the average block error rate as the long-term average of the actual observed block error probability. Examples of BLER prediction techniques are the so-called Link-to-System techniques, such as Mutual Information Effective SINR Mapping (MIESM) or Exponential Effective SINR Mapping (EESM), among others.
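To make the distinction concrete, the EESM technique mentioned above compresses a set of per-subcarrier SINRs into one effective SINR, SINR_eff = -β · ln((1/N) · Σ exp(-SINR_i/β)), whose AWGN block error rate approximates that of the frequency-selective channel. The sketch below implements this standard formula; the calibration factor β depends on the modulation and coding scheme, and the default value here is purely illustrative.

```python
import math

# Sketch of Exponential Effective SINR Mapping (EESM). The output is a
# single effective SINR (linear scale) used to look up an average BLER;
# it says nothing about whether a particular received packet is in error.

def eesm_effective_sinr(sinrs_linear, beta=1.0):
    """SINR_eff = -beta * ln( (1/N) * sum_i exp(-SINR_i / beta) )."""
    n = len(sinrs_linear)
    avg = sum(math.exp(-s / beta) for s in sinrs_linear) / n
    return -beta * math.log(avg)
```

Note that when all subcarrier SINRs are equal the effective SINR equals that common value, and with unequal SINRs it falls below the arithmetic mean, reflecting the dominance of the weakest subcarriers in the block error rate.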
Hence, there is a need in the state of the art for more adequate solutions to estimate, with sufficient reliability, whether an error has occurred in a given received packet, without incurring large overheads that involve a significant penalty in terms of spectral efficiency, especially when the packets are very small.