Digital subscriber line (DSL) technology provides high-speed data transfer between modems across ordinary (e.g., twisted copper pair) telephone lines. DSL supports digital data transfer rates from tens of Kbps to tens of Mbps, while still providing for plain old telephone service (POTS). Asymmetric Digital Subscriber Line (ADSL) and Very High Rate Digital Subscriber Line (VDSL) have emerged as popular implementations of DSL systems, where ADSL standards are defined by American National Standards Institute (ANSI) standard T1.413 and International Telecommunication Union (ITU-T) standards G.992.3 and G.992.5, and VDSL standards are defined by ANSI standard T1.424 and ITU-T standard G.993.1. ADSL, VDSL, and other similar DSL systems (collectively referred to as “xDSL”) typically provide digital data transfer in a frequency range above the POTS band (e.g., about 300 Hz to 4 kHz); for example, ADSL G.992.3 operates at frequencies from about 25 kHz to about 1.1 MHz.
Most DSL installations are operated using Discrete Multi-Tone (DMT) modulation, in which data is transmitted by a plurality of sub-carriers (tones), sometimes alternatively referred to as subchannels, sub-bands, carriers, or bins, with each individual sub-carrier utilizing a predefined portion of a prescribed frequency range. In ADSL, for example, 256 sub-carriers are used, with each sub-carrier having a bandwidth of 4.3125 kHz. The digital data is encoded and modulated at the transmitter using Quadrature Amplitude Modulation (QAM) for each sub-carrier and an Inverse Discrete Fourier Transform (IDFT) to create the modulated multicarrier signal for transmission along the DSL loop or channel; the signal is then demodulated at the receiving end and decoded to recover the transmitted data. The bits of data to be transmitted over each sub-carrier are encoded as signal points in QAM signal constellations using an encoder or a bit mapping system. Signal points are then modulated onto the corresponding sub-carriers. The combined signals are often referred to as a symbol, e.g., a DMT symbol. The total number of data bits transmitted over the channel is the sum of the bits transmitted by each sub-carrier.
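The QAM-plus-IDFT transmit chain described above can be sketched in a few lines of Python. This is an illustrative model only, not an implementation of any DSL standard: real modems load 2 to 15 bits per tone, use an FFT over 256 (or more) tones, and add a cyclic prefix, whereas the sketch below uses a fixed 4-QAM mapping and a direct O(N²) inverse DFT for clarity. The function names and constellation labeling are assumptions made for this example.

```python
import cmath

def qam4_map(bits):
    """Map pairs of bits to 4-QAM (QPSK) constellation points.
    (Real DMT systems load 2-15 bits per tone; 4-QAM keeps the sketch short.)"""
    points = {(0, 0): 1 + 1j, (0, 1): -1 + 1j, (1, 1): -1 - 1j, (1, 0): 1 - 1j}
    return [points[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

def idft(freq_bins):
    """Inverse DFT producing the time-domain DMT symbol from per-tone
    constellation points (direct O(N^2) form for readability)."""
    n = len(freq_bins)
    return [sum(x * cmath.exp(2j * cmath.pi * k * t / n)
                for k, x in enumerate(freq_bins)) / n
            for t in range(n)]
```

Feeding the mapped constellation points into `idft` yields one time-domain DMT symbol; the receiver would apply the forward DFT and slice each tone back to the nearest constellation point.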
In most types of communication systems, it is desirable to maximize the rate at which data is successfully transferred across the communication medium, sometimes referred to as the bit rate or data rate. The maximum data rate, in turn, depends on the noise characteristics of a particular communication channel. In the case of DSL systems, a pair of modems is connected by a twisted pair of wires (sometimes referred to as a loop) that provides the communication medium. In this situation, noise may be generated by signals on neighboring wire pairs (i.e., crosstalk noise) in a distributed telephony system, as well as by outside sources of Radio Frequency Interference (RFI) or other noise. Noise on a particular communication channel may generally be characterized as either continuous noise or impulse noise. Continuous noise can usually be modeled as Additive Gaussian Noise (AGN) with randomly distributed values of noise over time, whereas impulse noise is generally short bursts of relatively high levels of channel noise. Various mechanisms or techniques are employed in DSL and other communication systems to combat continuous and impulse noise and/or to correct noise-related data transfer errors.
Continuous noise is typically addressed by transmitting fewer data bits over sub-carriers with higher continuous noise levels, and more data bits over sub-carriers with lower continuous noise levels. The allocation of data bits to particular sub-carriers may be referred to as bit allocation, bit distribution, or bit loading. The bit distribution parameters may be adapted to changing noise conditions on the channel. The initial bit distribution settings or parameters are selected according to subcarrier noise assessments made during system initialization. DSL systems provide for periodic reassessment of continuous noise conditions and adaptive tuning of the bit distribution parameter settings to accommodate changes. Adaptive tuning may include bit swapping, bit rate adaptation, and bandwidth repartitioning techniques, each of which involve changes to a number of modulation parameters.
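The bit loading principle above is commonly expressed with the SNR gap approximation, under which each sub-carrier carries roughly log2(1 + SNR/Γ) bits. The sketch below is a minimal illustration of that idea; the 9.8 dB gap value, the function names, and the omission of coding gain and noise margin are all assumptions for this example, not parameters from any cited standard.

```python
import math

SNR_GAP_DB = 9.8  # assumed uncoded-QAM gap at ~1e-7 BER; real systems adjust
                  # this for coding gain and noise margin

def bit_loading(snr_db_per_tone, max_bits=15):
    """Allocate bits per sub-carrier from measured SNR using the gap
    approximation: b = floor(log2(1 + SNR / gap)), clipped to the per-tone
    maximum. Noisier tones (lower SNR) receive fewer bits."""
    gap = 10 ** (SNR_GAP_DB / 10)
    bits = []
    for snr_db in snr_db_per_tone:
        snr = 10 ** (snr_db / 10)
        b = int(math.log2(1 + snr / gap))
        bits.append(min(max(b, 0), max_bits))
    return bits
```

Re-running this allocation against fresh SNR measurements is, in essence, what adaptive tuning does: a quiet tone at 40 dB SNR carries many bits, while a tone near the gap carries none.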
Bit swapping does not change the total data rate of the communication channel, but serves to increase or maintain continuous noise immunity by reallocating data bits from noisy sub-carriers to less noisy sub-carriers. Where the channel noise increases significantly, bit swapping alone may not be adequate to prevent data transmission errors, and bit rate adaptation may be employed. Bit rate adaptation involves changing the total number of bits transmitted over all sub-carriers. When noise conditions become worse, bit rate adaptation decreases the number of data bits transmitted over some or all sub-carriers. If the continuous channel noise level subsequently decreases, then the number of data bits can be increased again.
Adaptive tuning can effectively address continuous noise conditions, but impulse noise protection requires a different approach. Impulse noise in DSL systems usually causes erasure of an entire signal for a relatively short period of time, regardless of the number of bits allocated to the channel or to particular sub-carriers. Impulse noise can be addressed in DSL and other communication systems by applying forward error correction (FEC) with interleaving (IL).
An FEC encoder generates a certain number of redundancy bytes for each block of transmitted data. The redundancy bytes are added to the data blocks to form FEC codewords. At the receive side, an FEC decoder uses the redundancy bytes to recover (correct) up to a certain number of corrupted data bytes in the block, and thereby ensures that when a small number of bytes in a codeword are corrupted, the original data transmitted in the codeword can be recovered. In general, the number of error bytes that can be corrected by FEC is half of the number of redundancy bytes included in the codeword. Increasing FEC redundancy to provide further FEC protection against noise decreases the data rate, and vice versa, whereby the goals of noise protection and data rate involve a tradeoff.
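The redundancy-versus-rate tradeoff described above is simple arithmetic and can be made concrete. The helper below is illustrative only (the function name and the example codeword size are assumptions); it applies the rule from the text that t = R/2 bytes are correctable when R redundancy bytes are appended.

```python
def fec_tradeoff(payload_bytes, redundancy_bytes):
    """For a codeword of K payload bytes plus R redundancy bytes:
    - up to t = R // 2 corrupted bytes can be corrected (per the text);
    - the usable fraction of the line rate drops to K / (K + R).
    Increasing R buys protection at the cost of throughput."""
    codeword_len = payload_bytes + redundancy_bytes
    correctable = redundancy_bytes // 2
    efficiency = payload_bytes / codeword_len
    return correctable, efficiency
```

For instance, a 255-byte codeword with 16 redundancy bytes can correct 8 corrupted bytes while delivering 239/255 (about 94%) of the raw rate as payload.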
To combat relatively severe impulse noise, FEC encoders are generally implemented with data interleaving. After the addition of FEC redundancy bytes, an interleaver (at the transmit side) segments the FEC codewords or blocks into smaller portions (segments, usually one byte long), with segments from different codewords being mixed in a certain order prior to modulation. The ordering of segments is such that segments belonging to the same FEC codeword are separated from each other. This results in bytes of the same codeword being spread out over time, so that impulse noise corruption of the transmitted data stream during any given short period of time corrupts only one or a few bytes belonging to a particular codeword or block, causing fewer errors in each reassembled (e.g., de-interleaved) codeword at the receive side. Thus, interleaving spreads the effects of impulse noise pulses over multiple FEC codewords, whereby the amount of corrupted data in each codeword is smaller and the effect of occasional long bursts of noise can be reliably corrected with fewer redundancy bytes per codeword.
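The byte-spreading behavior described above can be demonstrated with a simple block interleaver. This is a sketch of the general technique, not the convolutional interleaver actually specified for DSL; the function names and the row/column scheme are assumptions for illustration. Writing `depth` codewords in as rows and reading the array out by columns guarantees that adjacent transmitted bytes belong to different codewords.

```python
def interleave(codewords, depth):
    """Block interleaver sketch: take `depth` consecutive codewords (rows)
    and emit them column by column, so consecutive bytes on the wire come
    from different codewords."""
    assert len(codewords) == depth
    length = len(codewords[0])
    return [codewords[r][c] for c in range(length) for r in range(depth)]

def deinterleave(stream, depth, length):
    """Inverse operation at the receive side: reassemble the codewords."""
    rows = [[None] * length for _ in range(depth)]
    for i, byte in enumerate(stream):
        rows[i % depth][i // depth] = byte
    return rows
```

With depth D, a noise burst corrupting D consecutive bytes on the line touches at most one byte of each codeword after de-interleaving, which is exactly why fewer redundancy bytes per codeword suffice.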
An implementation issue arises when FEC and interleaving are combined with bit rate adaptation. Bit rate adaptation is used to maximize available bandwidth while controlling noise, as discussed previously. In addition, rate adaptation is particularly desirable in DSL systems, especially VDSL systems, where customers usually are not simultaneously using all of the services to which they subscribe and do not require the full available bandwidth most of the time. For example, video service usually provides for the use of two or three independent TV sets simultaneously with data and voice services. In practice, most subscribers use no more than one of these services at a time.
High bandwidth requires high power consumption and induces high levels of cross-talk between adjacent lines. Reducing the bandwidth reduces heating in cabinets, reduces cross-talk, and improves network reliability. Therefore, it is highly desirable to reset the bandwidth and the transmit power in response to varying customer demands as well as in response to variation in continuous line noise.
Preferably, rate adaptation should be performed seamlessly, meaning without interruption or reduction in quality for services that are used continuously through the period of bit-rate change. For example, if a user switches off a TV set, the power and bandwidth are preferably reduced without causing distortions, changes, or degradation in other services (such as transmissions to other TV sets or computer games) that remain in use.
Seamless rate adaptation (SRA) techniques for multi-carrier transmissions are well known. For example, international standards for ADSL based on DMT specify SRA procedures that change the total bit-rate by changing the number of bits on each sub-carrier. The total power can then be reduced by reducing the power for sub-carriers with reduced bit loading or by temporarily switching off the power for sub-carriers with zero bit loading. Special synchronization flags are provided to ensure the desired changes in bit-loading and power level are applied for exactly the same DMT symbols at both sides of the line.
Conventional SRA techniques, however, do not specify SRA over communication paths that include interleavers. The reason is that interleavers operate on blocks of data, whereby an output block cannot be generated until several input blocks have been received. The time required to receive the input blocks causes a transmission delay, which increases when the data rate slows. For example, if k blocks are involved and the data rate is halved, the time required to receive k blocks will be the same as the time required to receive 2k blocks at the original data rate. Accordingly, the delay will be approximately doubled. Delay variations caused by bit rate variation affect some services, making rate adaptation not seamless. Furthermore, the delays can increase to unacceptable levels. In fact, unacceptable delay levels would occur routinely because the interleaver delay is usually at the edge of the allowed limit, with impulse noise protection being maximized subject to this limit.
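The delay-doubling effect described above follows directly from a linear model of interleaver delay: the time to accumulate a fixed number of input blocks is inversely proportional to the data rate. The helper below is an illustrative approximation only (the function name, block sizes, and rates are assumptions, and real interleaver delay formulas include additional terms).

```python
def interleaver_delay_ms(block_bytes, num_blocks, data_rate_kbps):
    """Approximate interleaver delay as the time needed to receive
    `num_blocks` input blocks of `block_bytes` bytes at the given rate.
    Because delay scales as 1/rate, halving the rate doubles the delay."""
    total_bits = block_bytes * 8 * num_blocks
    return total_bits / data_rate_kbps  # bits / (kbit/s) = milliseconds
```

With 16 blocks of 255 bytes, the delay is about 4.1 ms at 8 Mbps but about 8.2 ms at 4 Mbps, which is why a rate cut can push a delay that was "at the edge of the allowed limit" beyond it.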
Another issue is that the degree of impulse noise protection changes with SRA. When the bit rate for the line increases, more bits are carried in each unit of time. Thus, more bits will be corrupted by the same impulse noise event (which usually corrupts the transmission for a certain period of time). As a result, an impulse noise event that would not have caused uncorrectable errors before SRA will cause uncorrectable errors after SRA. The degree of impulse noise protection (INP) is reduced while the depth of interleaving remains constant.
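The INP reduction follows from the fact that an impulse of fixed duration corrupts a number of bytes proportional to the line rate. The one-line model below illustrates this scaling; the function name and example figures are assumptions for illustration, not values from any standard.

```python
def corrupted_bytes(impulse_ms, data_rate_kbps):
    """Bytes corrupted by an impulse of fixed duration at a given line rate.
    Corruption scales linearly with rate, so raising the rate via SRA
    reduces impulse noise protection if the interleaver depth is fixed."""
    return impulse_ms * data_rate_kbps / 8  # (ms * kbit/s) / 8 = bytes
```

For example, a 0.5 ms impulse corrupts 500 bytes at 8 Mbps but 1000 bytes at 16 Mbps; a codeword/interleaver configuration sized to correct the former bursts would fail on the latter unless the interleaver depth is also increased.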
In principle, the interleaver depth could be adjusted to mitigate variations in delay and in INP. In practice, changing the interleaver depth is a complex task in that changing the interleaver depth requires dynamic re-allocation of the interleaver memory, which must be carried out without corrupting the bytes of data already in the memory. This complexity is one of the reasons that current international standards for ADSL do not specify SRA over channels that include interleavers. Accordingly, there has been a long felt need for compatible SRA and impulse noise protection methods for DSL and other communication systems.