Modern wireless communication networks are powerful systems that can convey large amounts of data to individual radio devices using distributed radio transmissions from radio access network nodes. The data throughput of such wireless communication systems has increased drastically over the years, so that not only voice but also video and other large data files can be exchanged to and from individual radio devices. The data is usually exchanged in the form of so-called data packets (or simply packets), i.e. chunks of data carrying respective control information that allows the network infrastructure to route a packet from a source to a given destination device.
Packet latency is one of the performance metrics that vendors, operators and also end-users (e.g. via speed test applications) regularly measure. The measured latency usually indicates the time or delay that a packet requires to arrive at a given destination: the lower the latency, the faster the network performance may be perceived. Latency measurements can generally be performed in all phases of a radio access network system's lifetime, e.g. when verifying a new software release or system component, when deploying a system, or when the system is in operation. For example, a shorter latency than in previous generations of 3GPP (3rd Generation Partnership Project) implementations was one performance metric that guided the design of the so-called Long Term Evolution (LTE) technology. LTE is generally recognized by end-users as a system that provides faster access to the internet and lower data latencies than previous generations of mobile radio technologies.
However, packet data latency not only plays a role in the perceived responsiveness of the system; it can also be a parameter that indirectly influences the throughput of the system. Conventionally, HTTP/TCP is the dominating application and transport layer protocol suite used on the internet. According to HTTP Archive (accessible, for example, via “http://httparchive.org/trends.php”), the typical size of HTTP-based transactions over the internet is in the range of a few tens of kilobytes up to 1 Mbyte. In this size range, the TCP slow start period may be a significant part of the total transport period of the packet stream. In other words, the smaller the amount of data of a transaction, the more pronounced is the influence of the involved control signaling on the overall perceived “speed” of a network. During TCP slow start, the performance is mainly limited by latency rather than by the available bandwidth. Therefore, improving the latency can improve the average throughput for such types of TCP-based data transactions.
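The latency-bound behaviour of slow start can be illustrated with a simple, hypothetical model (not part of the original text): assuming a pure slow start phase in which the congestion window doubles every round-trip time (RTT), with an assumed initial window of 10 segments and 1460-byte segments, the number of RTT rounds needed to deliver a transfer depends only on the transfer size, so the total transfer time scales linearly with the RTT.

```python
# Hypothetical illustration: transfer time of a small HTTP/TCP
# transaction under pure TCP slow start (no loss, no bandwidth limit).
# Assumed parameters (not from the source text):
SEGMENT_SIZE = 1460   # bytes per TCP segment (a typical MSS)
INITIAL_CWND = 10     # segments in the initial congestion window

def slow_start_transfer_time(transfer_bytes: float, rtt_s: float) -> float:
    """Approximate transfer time in seconds, assuming the congestion
    window doubles each RTT until the whole transfer is delivered."""
    segments_left = transfer_bytes / SEGMENT_SIZE
    cwnd = INITIAL_CWND
    rtts = 0
    while segments_left > 0:
        segments_left -= cwnd   # one window of data delivered per RTT
        cwnd *= 2               # window doubles every RTT (slow start)
        rtts += 1
    return rtts * rtt_s

# A 100 kB transaction: halving the RTT halves the transfer time,
# since the number of slow-start rounds depends only on the size.
for rtt in (0.050, 0.025):
    print(f"RTT {rtt*1000:.0f} ms -> {slow_start_transfer_time(100_000, rtt)*1000:.0f} ms")
```

Under these assumptions a 100 kB transfer completes in three RTT rounds, so cutting the RTT from 50 ms to 25 ms halves the transfer time, which is the sense in which latency, not bandwidth, limits throughput for small transactions.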
However, an isolated reduction of control signaling in data packet communication may not provide a reliable solution. An improvement in speed, i.e. a lowering of the involved latency, may result in an unreliable exchange of control data between the involved parties, i.e. a radio device and a corresponding radio access network node, because, for example, redundancy and other error detection and correction mechanisms may suffer or may even become dysfunctional when the amount of control data is reduced. More specifically, a shortened control channel for the radio device may no longer work reliably when the device moves out of coverage into a power-limited region, because the robustness of the transmission in this case relates to the number of available symbols.
There is therefore a need for an improved system of radio devices and radio access network nodes that can both reduce the latency and at the same time maintain a reliable exchange of control information.