In the field of data communications, quantities of data known as "frames" are often transmitted from one node (station) to another through a network of nodes that operate using their own independent clocks. Use of independent clocks in the nodes requires a system for ensuring that data corruption will not occur when frames are transmitted from a source node to a destination node through a number of repeater nodes. One method commonly employed for preventing data corruption in such networks is the use of an elasticity buffer at each node.
An elasticity buffer is a first-in first-out storage device including a number of single or multibit storage elements. In the elasticity buffer, data enters and exits at different rates, corresponding to the difference between the frequency of the clock used in the upstream transmitting node and the frequency of the local clock used in the receiving node. Elasticity buffers are required even though data transfer rates are nominally the same, because independent clocks in separate nodes will differ in frequency within some known tolerance.
An elasticity buffer is used when the incoming data rate may vary from the outgoing data rate. The independent clocks in different nodes are asynchronous, and at some point, data from an upstream node (the sending station) must be synchronized to the local clock in the repeater node receiving the data. Typically, incoming data is received synchronously with a transmit clock signal of the upstream node. The transmit clock signal may be sent to the repeater node on a dedicated clock line, or the transmit clock signal may be recovered by the repeater node from incoming data using a clock recovery device.
After data is received by the repeater node, some implementations provide that the data is first synchronized to the local clock and then written into the storage elements of the elasticity buffer. In these systems, data is written into and read from the storage elements using the same local clock, and the buffer read/write operations are therefore synchronous.
In an elasticity buffer with synchronous read/write operations, each data unit (e.g., a byte) transferred through the elasticity buffer must first be pre-synchronized when the data unit is received by the repeater node. For synchronous elasticity buffers, however, metastability is a significant problem and may result in data corruption.
Metastability occurs when adequate setup and hold times for input data are not provided for a logic element (e.g., a flip-flop) that is latching in the data. In logic elements used for synchronization, the variation in clock speeds makes the time of arrival of each data unit uncertain, thereby causing errors when input data is sampled during a period of instability. Although failure rates may be reduced using sophisticated designs and multistage synchronizers, there is a probability of data corruption due to metastability each and every time logic elements performing the synchronization sample the signal levels on input data lines. Problems with metastability become more critical in nodes in which clock speeds are designed to approach the limits of existing device technology, because the probability of error increases as the speed at which the logic elements sample the input data lines increases. In order to reduce the probability of data corruption, it is therefore desirable to minimize the frequency at which samples of input data are taken by logic elements performing the synchronization function.
Data corruption resulting from metastability is reduced when an asynchronous elasticity buffer is used in the repeater node instead of a synchronous buffer. In the asynchronous elasticity buffer, data is written into storage elements in synchronism with the transmit clock signal, and data is read from the storage elements in synchronism with the local clock signal. Thus, the read and write operations for this elasticity buffer are totally asynchronous. As a result, there is no need to provide for sampling of each and every input data unit by a synchronizer logic element before it is written into a storage element. In repeater nodes utilizing asynchronous elasticity buffers, input data can be synchronized infrequently, e.g., once during each period when an entire frame of data is transmitted to the repeater node. Typically, input data is synchronized by the repeater node at the start of receipt of each frame of data from the upstream node.
Data is stored in an elasticity buffer as it arrives from an upstream node, and is read from the buffer for transmission to a downstream node at a rate determined by the local clock in the node. If the local clock for the repeater node is slower than the transmit clock of the upstream node, the buffer will become more and more full as the data in a frame is transmitted through the node. If the local clock is faster than the transmit clock of the upstream node, the buffer gradually empties of all data.
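The fill and drain behavior described above can be sketched with a simple numeric model. This is an illustrative sketch, not part of the specification: the function name, clock frequencies, and duration are my own choices.

```python
# Sketch: net change in elasticity buffer occupancy when the write
# (transmit) clock and the read (local) clock differ in frequency.
# Occupancy is modeled as a real number of data units for simplicity.

def occupancy_drift(write_freq_hz, read_freq_hz, duration_s, start_level=0.0):
    """Return the buffer occupancy (in data units) after both clocks
    have run for duration_s seconds."""
    written = write_freq_hz * duration_s   # units written at the upstream rate
    read = read_freq_hz * duration_s       # units read out at the local rate
    return start_level + written - read

# A local clock 0.005% slower than the upstream clock makes the buffer
# gradually fill (positive drift, about 6.25 units over 1 ms here);
# a faster local clock would make it drain instead.
nominal = 125e6
slow_local = nominal * (1 - 0.00005)
print(occupancy_drift(nominal, slow_local, 1e-3))
```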
The elasticity buffer in a repeater node between a source node and a destination node must therefore include enough storage elements to ensure it will not become full before the last data unit in a frame has been transmitted to a downstream node. If the buffer fills before the repeater node has transmitted the last data unit to the downstream node, the buffer cannot store additional data being transmitted from an upstream node without corrupting previously received data that has not yet been transmitted to the downstream node. When data is written into a storage element that has previously been written but has not yet been read, a write overflow condition exists.
An elasticity buffer in a repeater node between a source node and a destination node must also prevent a storage element from being simultaneously written and read, or from being read before it is written. A read underrun condition exists when data is read from a storage element that has previously been read but has not yet been written. However, data corruption actually occurs before the read underrun whenever a storage element is read too soon after data is written into the storage element. Valid data cannot be read from a storage element at the same instant data is stored into the storage element. This is due to the fact that logic elements in an elasticity buffer, including the storage elements, have propagation delays and setup and hold times.
Therefore, to minimize the probability of data corruption due to a read underrun condition, a minimum delay is provided before reading of a storage element in which the first data unit in a frame has been written. Without such initialization (also referred to as resetting or recentering) of the elasticity buffer, a repeater node with a relatively fast clock empties its elasticity buffer and attempts to transmit data to the downstream node before the data has been received from the upstream node. Typically, the elasticity buffer is initialized at least once during transmission of every frame, usually at the start of receipt of each frame of data from the upstream node and/or after detecting that a write overflow or read underrun is impending.
In order to prevent any unacknowledged data corruption due to write overflow or read underrun conditions, the repeater node must detect whether write overflow and/or read underrun conditions are impending. Each storage element in the elasticity buffer has a unique address. Therefore, detection of overflows/underruns can be accomplished by monitoring the write address and the read address. When the write address and read address pass each other in either direction, an overflow/underrun condition has occurred.
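Monitoring of the write and read addresses can be sketched as follows for a circular buffer. The function names and buffer depth are illustrative, and the comparison is shown in software although an actual elasticity buffer would perform it in hardware.

```python
# Sketch: overflow/underrun detection for a circular elasticity buffer
# by comparing the write and read addresses modulo the buffer depth.

DEPTH = 8  # illustrative number of storage elements

def distance(write_addr, read_addr):
    """Number of written-but-unread locations between the pointers."""
    return (write_addr - read_addr) % DEPTH

def write_overflow_impending(write_addr, read_addr):
    # One more write would wrap onto data that has not yet been read.
    return distance(write_addr, read_addr) == DEPTH - 1

def read_underrun_impending(write_addr, read_addr):
    # A read now would return a location not yet written.
    return distance(write_addr, read_addr) == 0
```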
In nodes utilizing synchronous elasticity buffers, detecting when the write address and read address pass each other is easily accomplished because read/write operations are synchronous. In contrast, detecting an impending overflow/underrun condition is much more difficult using asynchronous elasticity buffers because the selection of the write address is not synchronous with the selection of the read address.
However, in choosing between synchronous and asynchronous elasticity buffers, it is important to recognize that overflow/underrun conditions do not occur in normal circumstances. Therefore, it is better to minimize the probability of data corruption due to metastability, which occurs even under normal conditions. For this reason, it is often preferable to use an asynchronous elasticity buffer, which will generate fewer data errors, provided a method of overflow/underrun detection can be used that is effective and efficient.
Another decision involved in design of an elasticity buffer is whether to use a serial elasticity buffer or a parallel elasticity buffer. Although data is transferred between nodes in the network in serial format, it is often necessary to design the elasticity buffer to receive and transmit data in parallel format because available technology cannot operate at the higher speeds required for serial data transfer.
In some implementations of serial asynchronous elasticity buffers, overflow/underrun detection requires storing a flag for each memory location. The flag indicates whether a read or a write operation was the most recently performed operation at that location. Potential overflows/underruns are detected whenever a read attempt is made to a location at a time when the flag for the location that is next to be read indicates a read was most recently performed, and whenever a write attempt is made to a location prior to completion of a read on the next location. Thus, overflow/underrun detection occurs whenever the read address and the write address point to storage elements that are within one location of each other.
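A minimal software model of this flag-per-location scheme might look as follows. The class and method names are mine, not taken from any particular design, and, as described above, detection is conservative: it triggers while the read and write addresses are still within one location of each other.

```python
# Sketch: each storage location carries a flag recording whether a read
# or a write was the most recent operation there; each attempted
# operation first checks the flag of the next location.

class FlaggedBuffer:
    def __init__(self, depth):
        self.data = [None] * depth
        # True = the last operation at this location was a read.
        self.last_was_read = [True] * depth
        self.depth = depth
        self.w = 0  # write address
        self.r = 0  # read address

    def write(self, value):
        nxt = (self.w + 1) % self.depth
        # Impending overflow: the next location still holds unread data.
        if not self.last_was_read[nxt]:
            return False
        self.data[self.w] = value
        self.last_was_read[self.w] = False
        self.w = nxt
        return True

    def read(self):
        nxt = (self.r + 1) % self.depth
        # Impending underrun: the next location was read more recently
        # than it was written, so the pointers are within one location.
        if self.last_was_read[nxt]:
            return None
        value = self.data[self.r]
        self.last_was_read[self.r] = True
        self.r = nxt
        return value
```

Note that a read immediately after a single write is refused, because the two addresses are still only one location apart; this illustrates the conservatism that makes the serial scheme wasteful when carried over to parallel buffers.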
In other implementations of serial asynchronous elasticity buffers, overflow/underrun detection requires monitoring of write select and read select lines corresponding to each data register in an elasticity buffer. An error signal is asserted whenever the write select and read select lines corresponding to two contiguous data registers are enabled. Thus, overflow/underrun detection occurs whenever the read pointer and the write pointer select data registers that are within one location of each other.
The designs used for the overflow/underrun detection in serial asynchronous elasticity buffers do not necessarily, however, provide for efficient overflow and/or underrun detection in parallel asynchronous elasticity buffers. These designs require a relatively large elasticity buffer, much of which is unused, if applied to parallel elasticity buffers.
In the serial designs, overflow/underrun detection occurs if adjacent locations are selected for reading and writing at any time. In parallel buffers, a multibit clock signal has a period corresponding to the time between transfers of successive data units. Thus, the designs described above would detect overflow/underrun for a parallel elasticity buffer even though the pointers for the write and read addresses are between one and two multibit clock signal periods apart.
However, underruns and overflows do not actually occur until shortly before the write and read addresses are selected simultaneously. Therefore, an impending overflow and/or underrun condition can be detected without risk of data corruption even when pointers for the write and read addresses are less than one multibit clock signal period apart. By providing a parallel buffer design in which overflow/underrun detection occurs later than in the serial designs, the elasticity buffer itself can be made smaller and simpler. In contrast, if the serial design is used for a parallel elasticity buffer, the buffer must be capable of storing at least one additional multibit data unit.
Furthermore, in a parallel buffer design having later overflow/underrun detection, the latency time of the buffer is reduced by making more of the buffer usable. By allowing pointers for the write and read addresses to be less than one multibit clock signal period apart, tolerances for the independent clocks in the nodes can be larger because more clock slippage is required before the buffer detects that an overflow and/or underrun condition is impending.
The principles discussed above apply to various types of wide and local area networks, including any packet data network that connects many repeater nodes and uses point-to-point clocking. Examples include nodes connected to a token ring network or to an Ethernet network connected with multiple repeaters.
A ring network consists of a set of nodes (stations) logically connected as a serial string of nodes and transmission media to form a closed loop. Information is transmitted sequentially, as a stream of suitably encoded symbols, from one active node to the next. Each node generally regenerates and repeats each symbol and serves as the means for attaching one or more devices to the network for the purpose of communicating with other devices on the network.
A network of particular applicability is the fiber distributed data interface (FDDI), which is a proposed American National Standard for a 100 megabit per second token ring using an optical fiber medium. The characteristics of FDDI networks are described in detail by Floyd E. Ross in "FDDI--A Tutorial," IEEE Communications Magazine, Vol. 24, No. 5, pp. 10-17 (May 1986), which is herein incorporated by reference.
Information is transmitted on an FDDI ring network in frames using a four-of-five group code, with each five-bit code group being called a symbol. Of the thirty-two member symbol set, sixteen are data symbols each representing four bits of ordered binary data, three are used for starting and ending delimiters, two are used as control indicators, and three are used for line-state signaling recognized by physical layer hardware. Each byte corresponds to two symbols or ten bits. (The term multibit data unit is used throughout the specification as a convenient way to refer to any unit of data exceeding one bit in length; the functioning of the invention is not limited to any particular number of bits in the data unit, and such units of data as symbols and bytes are included.)
The data transmission rate is 100 megabits per second for FDDI. A 125 megabaud transmission rate is required because of the use of a four-of-five code on the optical fiber medium. The nature of the clocking limits data frames to a maximum length of 4,500 bytes (i.e., 9,000 symbols or 45,000 bits). An FDDI network consists of a theoretically unlimited number of connected nodes.
In FDDI networks, every transmission of a frame is preceded by a preamble field, which consists of idle line-state bytes (symbols). In FDDI, an idle line-state symbol corresponds to the five-bit code group 11111. At the beginning of the frame, the preamble field of idle bytes is followed by a starting delimiter field, which consists of a two-symbol sequence JK that is uniquely recognizable independent of previously established symbol boundaries. The starting delimiter field establishes the symbol boundaries for the content that follows. The five-bit code group corresponding to the symbol J is 11000, and the code group corresponding to the symbol K is 10001.
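Using only the code groups given above (idle = 11111, J = 11000, K = 10001), the unique recognizability of the JK delimiter can be illustrated: a run of idle symbols consists entirely of ones and therefore cannot contain the ten-bit pattern 1100010001 at any alignment, so a simple search over the raw bit stream locates the start of the frame. This is an illustrative sketch; the function name is mine.

```python
# Sketch: locating the JK starting delimiter in a raw bit stream
# without knowledge of previously established symbol boundaries.

IDLE = "11111"          # idle line-state symbol
JK = "11000" + "10001"  # starting delimiter: symbols J and K

def find_frame_start(bits):
    """Return the index of the first bit of the JK delimiter, or -1."""
    return bits.find(JK)

# A preamble of four idle symbols followed by the starting delimiter:
stream = IDLE * 4 + JK + "01011"
print(find_frame_start(stream))  # 20: the delimiter follows 20 idle bits
```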
For FDDI, the nominal clock rate is 125 megahertz but a frequency tolerance of plus or minus 0.005% is allowed. The maximum frame size is 4,500 bytes. Given these constraints, passage of a single frame may cause the elasticity buffer in a repeater node to fill or empty by as much as 4.5 bits, because of the maximum possible difference in clock frequencies of consecutive nodes in the network.
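The 4.5-bit figure follows directly from these constraints:

```python
# Worked example: worst-case elasticity buffer slippage over one
# maximum-length FDDI frame.

max_frame_bits = 4500 * 10          # 4,500 bytes at 10 bits per byte
worst_case_freq_diff = 2 * 0.00005  # +0.005% at one node, -0.005% at the other

slippage = max_frame_bits * worst_case_freq_diff
print(slippage)  # approximately 4.5 bits of fill or drain per maximum frame
```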
As has been described previously, the elasticity buffer in each node in a network compensates for any differences in rates of the clocks for consecutive nodes in the network. When initialization of the elasticity buffer occurs before a subsequent frame is repeated by a node, the node will either insert or delete bytes from the total number of bytes it transmits to the downstream node, depending on whether the clock in the upstream node is slower or faster than the local clock for the node. By providing a preamble before each frame including at least a minimum number of idle bytes, the elasticity buffer can be initialized without any loss of data by only allowing addition or deletion of idle bytes in the preamble separating every pair of frames.
Therefore, in order to prevent allowable clock frequency differences from causing the elasticity buffer in a node to completely fill or empty, the repeater node initializes its elasticity buffer by either expanding or shrinking the size of the preamble for the subsequent frame. Thus, one idle byte may be inserted in a preamble by a fast repeater node when it initializes to prevent its elasticity buffer from emptying, while one idle byte may be deleted by a slow repeater node when it initializes its elasticity buffer in order to prevent it from filling.
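This recentering by preamble adjustment can be sketched as follows. The framing is my own, not from the specification: the preamble is modeled as a list of idle bytes, and the frame content itself is never touched.

```python
# Sketch: a fast repeater node inserts an idle byte into the preamble
# so its buffer does not empty; a slow node deletes one so its buffer
# does not fill.

IDLE_BYTE = "II"  # one byte = two idle symbols

def recenter_preamble(preamble, local_faster_than_upstream):
    if local_faster_than_upstream:
        # Fast local clock: add an idle byte.
        return preamble + [IDLE_BYTE]
    # Slow local clock: drop an idle byte.
    return preamble[:-1]

preamble = [IDLE_BYTE] * 8
print(len(recenter_preamble(preamble, True)))   # 9
print(len(recenter_preamble(preamble, False)))  # 7
```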
The FDDI network has a maximum frame size of 4,500 bytes and a clock tolerance of plus or minus 0.005%, so that a node will have to add or delete no more than 4.5 bits if it initializes its elasticity buffer following transmission of a frame. Therefore, additional bits of storage must be provided in the elasticity buffer to accommodate differences in data transfer rates. Although a slippage of 4.5 bits reflects the maximum clock frequency differences from the nominal frequency for all stations in the network, this does not prevent the relative position of the input and output pointers from varying outside a range of 4.5 bits. Nodes do not add or delete fractions of bits from frames repeated to downstream nodes because of the technical complexity and the resulting addition to the jitter seen at the downstream node due to a frequency shift for the duration of one bit. Instead, the node rounds the number of bits it adds or deletes to the nearest whole bit, and these roundoff errors can accumulate along the network. Furthermore, standards for nodes connected to a network such as FDDI do not specify a maximum roundoff error, and designers therefore plan implementations of nodes that round to the nearest byte (ten bits) or symbol (five bits). This increases the size of the roundoff errors.
An elasticity buffer is therefore required which reduces the number of data errors that will occur due to metastability but which will also detect impending overflow and/or underrun conditions in an effective and efficient manner. Furthermore, the buffer must be practical for use in nodes coupled to any of a variety of data communication networks.
Thus, there is a need for a method and apparatus for detecting impending overflow and/or underrun of a parallel asynchronous elasticity buffer.