Optical fibers are known to provide greater data-carrying capacity, as well as exceptional immunity to crosstalk, in comparison to electrical transmission lines, such as twisted-pair copper wires. Fiber to the x (FTTx or FTTX), where x can stand for building, business (FTTB), home (FTTH), neighborhood (FTTN), and the like, is a generic term referring to a broadband network communication architecture that employs optical fiber as a substitute, whether completely or in part, for the traditional metal (e.g., typically copper) cabling (e.g., coax, twisted pair) infrastructure used over the “last mile” of telecommunications to the end-user. Although FTTx technology inherently offers faster download and upload speeds than copper wire technology for the delivery of telephone and broadband services, deploying fiber over the last few hundred meters accounts for the majority of the overall cost and is a time-consuming effort. Communication operators therefore strive to deploy fiber increasingly closer to the end-user, while concurrently bridging the troublesome remaining distance via copper wire technologies over the existing infrastructure. One such technology is G.fast (Fast Access to Subscriber Terminals), which enables communication operators to offer fiber-like speeds (e.g., 1 Gbps) to end-users without incurring the expenditures involved in wholly deploying FTTx.
To allow for ever faster data rates on last-mile copper infrastructure, the data link layer has to be compatible with the constraints imposed by the physical layer. The physical layer comprises the fundamental networking hardware (e.g., electrical, optical, mechanical) and various other characteristics (e.g., broadcast frequencies, modulation schemes) required for data transmission and reception between entities in a network. Essentially, the physical layer is concerned with conveying raw bits over a communication channel. These raw bits typically take the form of a bit stream (i.e., a sequence of bits) that is grouped into symbols and converted to a physical signal to be transmitted over a transmission medium, such that the physical layer bears this bit stream toward a destination. This stream of raw bits, however, is not necessarily error free when received at the destination.
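The grouping of a raw bit stream into symbols described above can be sketched as follows. This is a minimal illustration only; the choice of two bits per symbol is an assumption for the example and is not tied to any particular modulation scheme.

```python
def bits_to_symbols(bits, bits_per_symbol=2):
    """Group a raw bit stream (list of 0/1 values) into integer symbols.

    Each symbol packs `bits_per_symbol` consecutive bits, most-significant
    bit first. The tail group is zero-padded so the stream length need not
    be a multiple of the symbol size.
    """
    symbols = []
    for i in range(0, len(bits), bits_per_symbol):
        group = bits[i:i + bits_per_symbol]
        group = group + [0] * (bits_per_symbol - len(group))  # pad tail
        value = 0
        for b in group:
            value = (value << 1) | b  # shift in one bit at a time
        symbols.append(value)
    return symbols
```

For example, the bit stream 1,0,1,1,0,1 yields the three 2-bit symbols 2, 3, 1.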
The data link layer (also termed the ‘link layer’) is tasked with providing the means to transfer data between network entities, with detecting errors arising at the physical layer, and possibly with correcting them. This task is accomplished, in part, by having the sender partition data into discrete frames of specified size, which are transmitted sequentially to a receiver. A ‘frame’ is a unit of information (e.g., a data transmission unit (DTU)) that is conveyed across a data link. In general, there are information frames, employed for the transfer of message data, and control frames, employed for link management. A DTU includes a sequence of symbols (i.e., groups of one or more bits) that allows the receiver to discern the beginning and end of the frame in the stream of symbols (commonly known as ‘frame synchronization’).
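The sender-side partitioning of data into fixed-size frames can be sketched as follows; this is a simplified illustration (zero-padding the final frame) and omits the headers, synchronization sequences, and padding indicators a real link layer would carry.

```python
def partition_into_frames(payload: bytes, frame_size: int):
    """Partition a byte sequence into fixed-size frames (DTUs).

    The final frame is zero-padded to the full frame size so that every
    DTU has the specified length. Real link layers typically also encode
    the payload length or a padding marker, which is omitted here.
    """
    frames = []
    for i in range(0, len(payload), frame_size):
        chunk = payload[i:i + frame_size]
        frames.append(chunk.ljust(frame_size, b"\x00"))  # pad last frame
    return frames
```

Calling `partition_into_frames(b"hello world", 4)` produces three 4-byte frames, the last one padded with a trailing zero byte.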
Naturally, there is a limit to the amount of data that can be transmitted or received per frame over a given amount of time; this limit is generally characterized by what is known as bandwidth. Various schemes are employed to allocate bandwidth, among which is time division duplexing (TDD). In TDD, frames are periodically sent from a base station to a receiver. Basically, each frame includes time slots that are allocated and collectively grouped for downstream (downlink) traffic as well as those for upstream (uplink) traffic, such that a guard time separates the downstream and upstream groups. In essence, TDD allows a common carrier to be shared between the downlink and the uplink, while the resource is switched in time. Prior to transmission of the frames, the transmitter computes a checksum for each frame. The receiver receives the frames and recomputes the checksum so as to determine whether an error has occurred in a particular frame. Error-correcting codes may also be employed (often referred to as forward error correction (FEC)), in which enough redundant information is included within each block of data to enable the receiver to detect that an error has occurred and, in many cases, to correct it. The receiver carries out the task of either confirming correct receipt of a transmitted frame by sending back to the transmitter, via a control channel, an acknowledgement (ACK) frame, or sending a negative acknowledgement (NACK) frame in case the transmitted frame contained errors or was not properly received (the erred frame is discarded). When the transmitter receives a NACK signal from the receiver indicating an ill-received or erred frame, the transmitter is tasked with retransmitting (typically immediately) the erred frame to the receiver during the following downlink frame.
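The checksum exchange above can be sketched as follows. As an illustrative assumption, CRC-32 (via Python's standard `zlib.crc32`) stands in for whatever checksum a given standard specifies; the four-byte big-endian trailer format is likewise an assumption of the example.

```python
import zlib

def make_frame(payload: bytes) -> bytes:
    """Transmitter side: append a CRC-32 checksum (4 bytes, big-endian)."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def check_frame(frame: bytes) -> str:
    """Receiver side: recompute the checksum and decide ACK or NACK."""
    payload, received_crc = frame[:-4], frame[-4:]
    if zlib.crc32(payload).to_bytes(4, "big") == received_crc:
        return "ACK"   # frame received without detected errors
    return "NACK"      # checksum mismatch: request retransmission
```

An unmodified frame verifies to "ACK"; flipping a single bit in transit causes the recomputed checksum to mismatch, yielding "NACK".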
Reference is now made to FIG. 1, which is a schematic illustration of a prior art immediate retransmission sequence. FIG. 1 illustrates a prior art downlink retransmission mechanism in time, presenting a bit stream partitioned into a plurality of N frames 101, 102, 103, . . . , 10N (where N is an integer) that extend horizontally as a function of time. Without loss of generality, a downlink retransmission mechanism is hereby selected to be described; nonetheless, a similar schematic illustration for an uplink retransmission mechanism may also be described. Basically, the depicted mechanism employs TDD to facilitate retransmission, in the following frame, of erred transmissions that occur in a particular frame (i.e., represented as blocks). Each frame includes its respective downlink zone and uplink zone. Specifically, frame 1 (i.e., 101) includes downlink zone 121 and uplink zone 141, frame 2 (i.e., 102) includes downlink zone 122 and uplink zone 142, frame 3 (i.e., 103) includes downlink zone 123 and uplink zone 143, and so forth to frame N (i.e., 10N), which includes downlink zone 12N and uplink zone 14N.
It is herein assumed that, in order to allow for substantially efficient operation, the downlink (uplink) control channel that carries the ACK/NACK signal employs a pre-assigned uplink (respectively, downlink) bandwidth, with the remaining uplink (respectively, downlink) bandwidth used for data transmission. Suppose a transmitter (not shown) transmits a transmission 181 to a receiver (not shown) via a transmission channel 16 at t0 within the allocated downlink zone 121. The receiver receives transmission 181 starting at t2 and, within the allocated uplink zone 141, generates an ACK signal 201, thus indicating to the transmitter that transmission 181 was properly received without errors. Let T3 denote the time available for the generation of either an ACK or a NACK (ACK/NACK) signal, let T2 denote the time required for the generation of the ACK/NACK symbols, and let T1 denote the time required for the transmitter to react to a retransmission request. Suppose now that the transmitter transmits a transmission 182 to the receiver via transmission channel 16 from starting time t4 to end time t5 within the allocated downlink zone 122. The receiver receives transmission 182 at t6, determines that an error has occurred in the transmission and, in response, generates a NACK signal 202 that is relayed back to the transmitter at t8. In response, the transmitter retransmits transmission 182, starting at t9, as a new transmission 183 within downlink zone 123. The receiver now correctly receives downlink transmission 183 and generates ACK signal 203 accordingly.
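The frame-by-frame exchange of FIG. 1 can be sketched as a simple simulation. This is a hypothetical model, not the G.fast mechanism itself: the channel is reduced to an oracle set of corrupted frame indices, and each frame is assumed to carry exactly one payload.

```python
def simulate_immediate_retransmission(corrupted_frames, num_payloads):
    """Simulate per-frame ACK/NACK signaling with immediate retransmission.

    `corrupted_frames` is a set of frame indices in which the channel
    corrupts whatever is sent. Each frame carries one payload; on a NACK
    the same payload is retransmitted in the very next frame.
    Returns a log of (frame_index, payload_index, 'ACK'|'NACK') tuples.
    """
    log = []
    frame = 0
    for payload in range(num_payloads):
        while True:
            status = "NACK" if frame in corrupted_frames else "ACK"
            log.append((frame, payload, status))
            frame += 1
            if status == "ACK":
                break  # next payload; otherwise retransmit same payload
    return log
```

With frame 1 corrupted and three payloads, payload 1 is NACKed in frame 1 and retransmitted (and ACKed) in frame 2, pushing payload 2 out to frame 3 — mirroring the 181/182/183 sequence in FIG. 1.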
From FIG. 1, it can be seen that as long as the time period of each uplink zone (e.g., 142) is longer than the sum T1+T2, the downlink retransmission is immediate (i.e., it is sent in the next frame (e.g., frame 3)). If, however, the uplink zone (e.g., 142) is shorter than the sum T1+T2, the retransmission may be delayed by one or more frames. Given the inherently limited time allocated to the uplink zone, such delays in retransmission over multiple succeeding frames compound in time for each respective retransmission, thereby imposing high retransmission buffering requirements and thus contributing to greater system overhead and latency.
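The timing condition above can be expressed numerically as follows. This is a deliberately simplified model under stated assumptions: any shortfall of the uplink zone relative to T1+T2 is assumed to push the retransmission out by whole frames of duration `frame_period`; the actual delay behavior of a given system depends on its scheduler.

```python
import math

def retransmission_delay_frames(uplink_zone, t1, t2, frame_period):
    """Frames between an erred transmission and its retransmission.

    When the uplink zone exceeds T1 + T2, the NACK is generated and
    acted upon in time, so the retransmission lands in the very next
    frame (delay of 1). Otherwise, each frame_period of shortfall is
    assumed to push the retransmission out by one additional frame.
    All arguments are in the same (arbitrary) time unit.
    """
    shortfall = (t1 + t2) - uplink_zone
    if shortfall <= 0:
        return 1  # immediate retransmission in the next frame
    return 1 + math.ceil(shortfall / frame_period)
```

For instance, with T1+T2 = 4 time units, an uplink zone of 5 units gives an immediate next-frame retransmission, whereas an uplink zone of 3 units delays it by one extra frame.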