The present invention is best understood in the context of a network adapter. A network adapter is a device well known in the art for connecting a digital computing device, such as a personal computer, to a communications network such as a local area network, or LAN.
One type of network adapter is discussed in coassigned U.S. Pat. No. 5,412,782, which is incorporated herein by reference. As described in that patent, a network adapter generally connects to a multipurpose host computer data bus through programmed I/O (PIO), possibly with a direct memory access (DMA) mode available as a backup for receive operations. The described adapter generally was designed to operate on a network standard known as 10BaseT Ethernet, which operates at maximum network data speeds approaching 10 Megabits per second (Mbps).
Various aspects of other types of network adapters, some of which may be considered modifications or improvements of the just-discussed adapter, are described in coassigned U.S. application Ser. No. 08/296,577 filed Aug. 08, 1994, now U.S. Pat. No. 5,640,605, which discusses an adapter for operating at 100 Mbps (100BaseT), coassigned U.S. application Ser. No. 08/544,745, now U.S. Pat. No. 5,715,287, which discusses an adapter connector for operating at both 10BaseT and 100BaseT, and coassigned U.S. application Ser. No. 08/641,399, filed Apr. 30, 1996 and entitled PACKET FILTERING BASED ON SOCKET OR APPLICATION IDENTIFICATION. Each of these related U.S. applications is incorporated herein by reference.
In the past several years the capabilities and speed of typical host computer systems placed on networks have improved dramatically and the demands placed on those systems have increased. The speed of networks has also increased. There is therefore a need for network adapters to evolve to handle higher data speeds and to more effectively operate on a host's system bus.
One modification made to high performance adapters concerns how those adapters operate on the system bus. Higher performance adapters today can often act as bus master devices rather than as PIO bus devices. Bus master devices can generally transfer more data while using less time on the system bus than PIO devices can.
Today, high performance system buses, such as EISA and PCI, generally allow bus master devices to operate in a burst mode. In burst mode, a bus master is allowed to transfer a large amount of data over the bus, with one bus word (up to four bytes) being transferred every bus cycle without interruption. This can provide the most efficient use of bus resources.
A problem network adapters face in fully utilizing a burst mode on a system bus, however, is that networks operate at an effective data speed that is generally much lower than the system bus transmission speed. A burst mode read operation on the bus will therefore quickly drain any data the adapter has received from the network, and the network cannot deliver new data to the adapter quickly enough to meet the burst mode requirements. For this reason, when moving a block of data between a burst mode host computer bus and a network, it is necessary to buffer data on the adapter to obtain optimal throughput performance.
This problem can be thought of in generalized terms as arising whenever data is exchanged between two different computer systems operating at different speeds. These two areas of the computing environment can be referred to generally as two different clock domains. Referring to the example just discussed, the host bus can be thought of as representing one clock domain and the network and the adapter interface circuitry as another.
There are two common approaches for buffering data when transferring it between two clock domains: a FIFO (First-In-First-Out) memory or a high speed "dual-port" RAM (Random Access Memory), either of which can be accessed by either clock domain in a random manner.
FIFOs are a well-understood memory structure consisting of memory locations and read and write pointers that each address one of the possible locations within the FIFO. When a FIFO memory is used, data put into the FIFO becomes available to be read out of the FIFO some incremental delay later. FIFOs generally require a large number of circuit gates to implement because data must be able to be read from or written to the FIFO starting at any location in the memory. This requires a complex pointer and control structure that is difficult to design. A major part of a FIFO design entails continuously tracking the relationship between the read and write pointers, ensuring either that there is room for data to be written into the FIFO or that data being read from the FIFO is valid.
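The pointer bookkeeping described above can be sketched in software. The following is a minimal, hypothetical illustration (it is not the adapter's actual hardware design): a circular buffer whose read and write pointers wrap around the memory, with a count that encodes their relationship so every access can check for room to write or valid data to read.

```c
#include <stdbool.h>
#include <stdint.h>

#define FIFO_DEPTH 16u  /* illustrative depth; any fixed size works */

typedef struct {
    uint8_t  mem[FIFO_DEPTH];
    unsigned rd;     /* read pointer: next location to read  */
    unsigned wr;     /* write pointer: next location to write */
    unsigned count;  /* tracks the read/write pointer relationship */
} fifo_t;

/* Write one byte; fails when there is no room in the FIFO. */
static bool fifo_put(fifo_t *f, uint8_t b) {
    if (f->count == FIFO_DEPTH)        /* full: a write would overrun unread data */
        return false;
    f->mem[f->wr] = b;
    f->wr = (f->wr + 1u) % FIFO_DEPTH; /* wrap the write pointer */
    f->count++;
    return true;
}

/* Read one byte; fails when the FIFO holds no valid data. */
static bool fifo_get(fifo_t *f, uint8_t *b) {
    if (f->count == 0u)                /* empty: nothing valid to read */
        return false;
    *b = f->mem[f->rd];
    f->rd = (f->rd + 1u) % FIFO_DEPTH; /* wrap the read pointer */
    f->count--;
    return true;
}
```

In hardware the two clock domains update these pointers concurrently, so the full/empty comparison must be synchronized across domains; this single-threaded sketch deliberately omits that synchronization, which is the difficulty the text refers to.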
The complexities of FIFO design and control increase when an adapter interfaces to a host bus that supports a burst mode, especially when the bus master cannot pace the transfers, as is the case with EISA (Extended Industry Standard Architecture). This is because the EISA specification does not allow the bus master to insert "wait" cycles on the bus once a burst transfer has begun, and it is difficult to update and synchronize the FIFO pointers at the fast burst rate demanded by the bus.
An alternative prior art approach is to use a "dual-port" RAM instead of a FIFO, but this also has disadvantages. A dual-port RAM is merely a more general implementation of the FIFO described above and therefore suffers from the same read/write pointer and data-validity complexities.
What is needed is a mechanism that will allow an adapter or other device moving data between two clock domains to efficiently transmit data in a burst mode on the faster clock domain without the design and control complexities of prior art solutions.