In a high-speed synchronous transmission system, modulated symbols are sent continuously in time. When such a system is implemented on a computer host processor (a so-called software modem), it is extremely critical to maintain a continuous bit stream between the real-time transceiver and the non-real-time host system (such as a local host computing system running an operating system such as Microsoft Windows® NT or the like). To reduce transport delay, latencies at each interface located in the data path (i.e., between the upstream transceiver and the operating system) must be kept to a minimum.
There are a number of such interfaces in such data path, and various mechanisms have been proposed to ensure a continuous stream of demodulated/modulated data. First, an analog channel interface portion of a downstream ADSL transceiver is responsible for receiving, transmitting, and processing analog data signals, and for maintaining a synchronized data link through a channel to an upstream transceiver interface. This is a real-time link, and accordingly a modulated signal must be present (at some level, even in reduced power management modes) at all times. An example of a prior art approach for ensuring that the data link is kept synchronized, even when one end is connected to an asynchronous source, is U.S. Pat. No. 5,003,558. This reference teaches sending an "idle" signal to keep the link active even when there is no real data, but, notably, does not address the problem of how to cure latencies which may be present in such asynchronous source.
Next after the channel interface portion in the data path lies a physical layer transport circuit. This portion of the data link is responsible for modulating and demodulating data symbols. To reduce latencies between the channel interface and the physical layer, Receive/Transmit FIFO buffers are used. This technique is well known in the art, and an example of this type of technology is depicted in U.S. Pat. Nos. 4,823,312 and 5,140,679 assigned to National. Accordingly, there are reasonably well developed solutions to the problems of latency and transport delay between a channel interface and physical layer in an ADSL transceiver, and this portion of the data path can be maintained as real time without undue cost and/or complexity.
A larger challenge is posed, however, by the data path located between the physical layer and the logical layer, in this case, an ATM protocol layer. The ATM protocol layer is required to interface both with the physical layer (a real time data path) and with the operating system (an asynchronous data path). To reduce latencies between these two interfaces, one approach would be to use the same kinds of Receive/Transmit FIFO buffers as explained earlier between the channel interface and physical layer. In this respect, on the receive side of the data path, it is possible to use a sufficiently large Receiver buffer to compensate for latencies inherent in the operations of the ADSL physical layer and ATM protocol layer when the latter are implemented in a software modem. Such latencies arise from the fact that, in a software modem context, both such functions are performed by the host processing device executing routines of different priority. For instance, in a Windows operating system, the ADSL physical layer is configured as an Interrupt Service Routine (ISR), while the ATM protocol layer is set up as a lower priority Delayed Procedure Call (DPC). The ADSL Physical layer is configured as a high priority task in the operating system, because, in this manner, latency between this layer and the channel interface is reduced, and the buffers between such interfaces can be reduced as well.
Because a DPC is a lower priority task than an ISR, however, there is an inherent latency between the physical layer and ATM operations, and a Receiver buffer must be employed between the two to handle such disparity. Nevertheless, when a buffer is used, a transport delay is introduced, which, of course, is undesirable. To reduce this delay, the buffer size must be correspondingly reduced. This design goal of course must be tempered by the fact that the buffer must be at least large enough to accommodate expected latencies caused by the difference in priorities of the ADSL Physical Layer and the ATM protocol layer. One proposed solution would be to make the ATM protocol an ISR as well, so that there is no latency between the ADSL Physical Layer and the ATM protocols. This approach is unattractive for the plain fact that ATM protocols require many system calls that cannot be accommodated in an ISR routine, because such routines must be executed in a very short period of time.
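The priority disparity described above can be illustrated with a minimal sketch, under assumptions not in the source: a bounded receive buffer sits between a high-priority producer (standing in for the ADSL physical-layer ISR, which runs every symbol period) and a lower-priority consumer (standing in for the ATM-layer DPC, which may be delayed for several symbol periods). The class and method names here are hypothetical.

```python
from collections import deque

class ReceiveBuffer:
    """Bounded buffer absorbing the ISR-to-DPC scheduling latency."""

    def __init__(self, capacity):
        self.capacity = capacity      # must cover worst-case DPC latency
        self.cells = deque()
        self.overruns = 0             # data lost while the DPC was delayed

    def isr_push(self, cell):
        """Called at symbol rate by the (high-priority) physical layer."""
        if len(self.cells) >= self.capacity:
            self.overruns += 1        # buffer too small for the latency
        else:
            self.cells.append(cell)

    def dpc_drain(self):
        """Called whenever the (lower-priority) ATM layer is scheduled."""
        drained = list(self.cells)
        self.cells.clear()
        return drained

# Simulate 10 symbol periods in which the DPC is delayed for the first 6:
buf = ReceiveBuffer(capacity=8)
for t in range(10):
    buf.isr_push(t)
    if t >= 6:                        # DPC finally gets scheduled
        buf.dpc_drain()
print(buf.overruns)  # 0 -- a capacity of 8 absorbed the 6-period delay
```

Shrinking the buffer below the worst-case backlog produces overruns, which is exactly the sizing tension the next paragraph describes.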
Consequently, in most ADSL software modem systems, it is expected there will be a buffer interface between the ADSL physical layer and the ATM protocol. The size of the buffer can be varied, of course, depending on the expected channel data rate, expected operating system latencies, etc. In a typical PC using Microsoft Windows, a latency time of about 10 to 30 msec is contemplated. It is, of course, extremely critical to reduce this latency as much as possible to ensure efficient data transmissions across the entire data link.
One approach suggested in the prior art (such as described in the aforementioned references above) is to include some kind of threshold fill point, or "water mark," for the receive and transmit buffers. In the transmit direction, a transmit buffer water mark is set to some value close to the full capacity of the buffer. When the data in the buffer drops below this mark, an ATM protocol layer routine is activated and more data is loaded (poured in) to the transmit buffer. Accordingly, at any moment in time, an amount of data equal to the transmit buffer water mark level is available to sustain continuous data transmission during periods of system latency. In the receiving direction, a receive buffer water mark is set to some value close to the empty end of the buffer. When the buffer fills with data above this level, the ATM protocol layer routine is again activated and data is extracted (poured out) from the receive buffer. Accordingly, at any moment in time, an amount of capacity equal to that above the receive water mark level is available for the synchronous receiver to store received data during periods of system latency.
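The water-mark scheme above can be sketched as follows; this is a hypothetical illustration, not the implementation of any cited reference. A callback standing in for the ATM protocol layer routine is activated when the transmit level drops below the mark (or, symmetrically, when the receive level rises above it).

```python
from collections import deque

class WaterMarkFifo:
    """FIFO that activates a callback when a fill threshold is crossed."""

    def __init__(self, capacity, mark, on_mark):
        self.capacity = capacity
        self.mark = mark              # the "water mark" threshold
        self.on_mark = on_mark        # ATM-protocol-layer routine stand-in
        self.q = deque()

    # -- transmit direction: the link pops, software refills ------------
    def pop(self):
        cell = self.q.popleft()
        if len(self.q) < self.mark:   # dropped below the transmit mark
            self.on_mark(self)
        return cell

    # -- receive direction: the link pushes, software drains ------------
    def push(self, cell):
        self.q.append(cell)
        if len(self.q) > self.mark:   # rose above the receive mark
            self.on_mark(self)

def refill(fifo):                     # stands in for the ATM-layer DPC
    while len(fifo.q) < fifo.capacity:
        fifo.q.append("cell")

tx = WaterMarkFifo(capacity=16, mark=12, on_mark=refill)
refill(tx)                            # prime the transmit buffer
for _ in range(100):                  # the link can pop indefinitely:
    tx.pop()                          # the mark keeps the buffer topped up
print(len(tx.q))  # 16
```

The gap between mark and capacity bounds how far the level can fall before the refill routine runs, which is why the transmit mark is set near full: the level never drops far enough to starve the synchronous link.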
The problem with this approach is the fact that there is often no real data to be transmitted by the operating system to the upstream transmitter, yet the data link on the upstream side must be maintained. As a result, after verifying from the operating system that there are no applications transmitting data, the ATM protocol layer routine must load the buffer with "dummy" data corresponding to a fixed pattern recognizable by the upstream transmitter as such, in a manner similar to that noted in U.S. Pat. No. 5,003,558 discussed above. Performing this extra task, however, consumes valuable processing time and reduces system efficiency. Moreover, by loading the transmit buffer with "dummy" ATM cells, overall data transport latency increases because such data must be flushed in serial fashion through the channel to the upstream transceiver, even when new transmit data is available from the operating system.
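The dummy-fill step above can be sketched as follows. The fixed byte pattern used here follows the conventional ATM idle-cell format (header bytes 00 00 00 01, HEC 0x52, payload bytes 0x6A, per ITU-T I.432); treat those exact bytes, and the function name, as assumptions for illustration rather than requirements of the source.

```python
# Assumed idle-cell pattern: 5-byte header + 48 payload bytes = 53 bytes.
IDLE_CELL = bytes([0x00, 0x00, 0x00, 0x01, 0x52]) + bytes([0x6A] * 48)

def top_up_with_idle(tx_buffer, capacity):
    """Fill the transmit buffer with idle cells when no real data exists.

    This keeps the synchronous link alive, but every idle cell queued
    here must later be flushed serially through the channel ahead of any
    real data -- the added transport latency criticized above.
    """
    added = 0
    while len(tx_buffer) < capacity:
        tx_buffer.append(IDLE_CELL)
        added += 1
    return added

buf = []
print(top_up_with_idle(buf, 8))   # 8 idle cells queued
```

The cost is visible in the structure: each call spends host-processor cycles generating cells that carry no payload, and a freshly topped-up buffer of 8 idle cells must drain completely before any newly arrived real cell reaches the channel.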
Accordingly, the commonly suggested techniques for handling latency in an ADSL software environment while maintaining a synchronous link are impractical, and, in many cases, inefficient.