Difficulty often exists in the design of a data network including at least one intelligent node wherein data must be moved between systems on the network. Data must be passed through many hardware and software components, each of which may impose a performance bottleneck. Design of a networking adapter to supervise transfer of data in such an environment requires an analysis of the targeted host system in which the adapter will reside. The designer accordingly will be required to consider both software and hardware interfacing issues.
Frequently, hardware architecture options are limited by the architecture of a pre-existing network operating system (NOS); the cost of tailoring the architecture of a NOS to a particular hardware architecture is prohibitive. However, to maximize the operating speed of the network it is preferable to minimize data movement in the memory of each station and to minimize intervention by the processor controlling the node.
There are two types of network operating systems. A type I NOS allows access to a flexible memory buffer pool to which or from which data is linked for particular applications. A type II NOS allows access to a fixed data transfer area for the data.
The type I NOS, manipulating pointers to link data to applications, achieves higher overall system performance than a type II NOS, which requires data to be copied. Copying data as it moves through memory represents a substantial loss of operating time, as it increases system "overhead". If data is linked into an application directly from the area in which it was received, no copying of data is necessary and time is saved. It therefore is desirable that data be sent and received directly from system memory.
Whether data can be sent or received directly from NOS buffers dictates which hardware structures are desirable. There currently are three basic architectures for movement of network data within a host system, namely, (1) dual ported memory, (2) shared I/O ports and (3) an intelligent bus master operating on shared system memory.
Dual port memory (DPM) is a method commonly used to interface high latency buses and also provides the benefit of isolating the network bandwidth from the system bus. (Bus latency is defined as the maximum time between the assertion by the controller of the request for the system bus and relinquishing of the bus by the system.) A dual port memory allows the system bus to access the memory through one port, and the network controller to access it through the other port. The DPM thus decouples the local network controller bus requirements from the system bus. Because the high cost of DPM precludes building a memory large enough for processing packets in the DPM space, an economical application of DPM is to configure it to act as a large packet first-in, first-out (FIFO) memory between the host and the network controller. This configuration requires that the entire frame reside in DPM before the controller or host may act on it. Delaying data movement until the whole frame is in DPM degrades overall performance.
The second network data movement architecture, which is equivalent to the DPM architecture, may be designed using low cost static RAM and a shared I/O port. However, the shared I/O port architecture must make use of either the DMA capability of the network controller, or a dedicated CPU to move data between the static RAM and the I/O port. Again, in this configuration the memory acts as a large packet FIFO and the system is shielded from the bandwidth and latency requirements of the bus controller. Shared I/O port architecture also requires that the entire frame reside in the buffer memory before the controller or host may act on it. This creates a delay that is detrimental to system performance.
The third architecture, the intelligent bus master configuration, allows the network controller to take control of the system bus and read or write bursts of data directly into system memory during network reception and transmission. For a type I NOS, a bus master allows data to be received and transmitted directly from NOS buffers. A bus master minimizes the time data spends in transit between the system memory and the network cable, and as a result this method always has the highest throughput for type I networks. Data received off the network cable is written into system memory immediately after it is deserialized and applied to a FIFO. Similarly, transmit data need only be applied to a FIFO and serialized before being placed on the cable.
Current bus master network controllers restrict a designer by requiring that the host system guarantee a maximum bus latency that is less than the time it takes to fill the internal FIFOs of the controller with data from the network. The current generation of controllers also requires a guaranteed minimum data transfer rate for sustained reception and transmission.
In addition, the current generation of bus masters requires that the host always have free buffers available for storing received data. Reception of data from a network is asynchronous, and a bus master must have free buffers allocated to store data as it arrives, or else the packet will be missed. The availability of free buffers is related to the type of network operating system that will be used. A type I NOS will be able to pass pointers to buffers fast enough to accommodate the requirements of a bus master controller.
A type II NOS, which uses a fixed data transfer area, may or may not be able to empty and supply buffers fast enough for the real time requirements of the data rate on the network cable. A solution using a bus master is preferred for almost all network operating systems, except for certain type II systems which cannot provide NOS buffers fast enough for transmission and reception directly from them. The problem with these network operating systems is that network data must be buffered in a temporary location until an NOS buffer is free. If the network adapter is a bus master, the temporary location is in system memory. In a bus master configuration, the data must cross the system bus to the temporary location, then be coupled from the temporary location to the NOS buffers. If data is received and transmitted from memory located on the network adapter, the "cross" of the system bus and the copy from the temporary location are combined into the same operation, saving a cycle on each transfer. On the other hand, efficient design of the network is severely restrained by variation in bus latencies as well as by the configuration of the data being transferred. Many systems cannot provide a bus latency low enough to allow the transmission and reception of data directly with system memory using existing bus masters, or worse, may not allow a controller to become a bus master at all. In such cases, an interface without the bus master restrictions is desirable.
One type of environment typically incorporating a bus master network is a local area network (LAN), which allows a number of computers to share resources, including storage devices, programs and data files. Sharing of hardware such as disks, printers and connections to outside communications distributes the cost of hardware among participating devices. Discussion of the characteristics of local area networks is given in Appendix I to this specification.
A new standard of local area networking based on fiber optic components and systems, developed by the American National Standards Institute (ANSI) X3T9.5 committee and known as the "fiber distributed data interface" (FDDI), defines a 100 megabit per second, timed-token protocol implementing dual counter-rotating physical rings. Aspects of the FDDI standard, which defines the physical and data link layers of the OSI reference model, are summarized in Appendix II.
The high data rate of the FDDI standard imposes demanding timing requirements on the FDDI controller and on the system designer who uses the controller. As in other network operating systems, a desired architecture for network control is one that can become a bus master and can transfer data directly between the host/node processor memory and the FDDI network with minimal CPU intervention.
A current implementation of an FDDI network is embodied in the Supernet (TM) chip set, developed by Advanced Micro Devices, Sunnyvale, California. The Supernet (TM) chip set, which has the architecture shown in FIG. 1, consists of an encoder/decoder (ENDEC) AM7984 and a data separator (EDS) AM7985 100, coupled to the optical medium (network) through an optical data link (not shown). The ENDEC/EDS 100 extracts a receive bit clock from serial packets received from the network, using the recovered timing to decode the bit stream and convert it to parallel form.
Connected to the ENDEC 100 is a Fiber Optic Ring Medium Access Controller, or FORMAC, 102 implemented by an AM79C83 device, which determines when a node can gain access to the network and implements the logic required for token handling, address recognition and CRC error handling. When a packet is received, the FORMAC 102 strips away all the physical layer headers before sending the packet to a data path controller, or DPC, 104 (AM79C82), after detecting and discarding any start-of-packet and end-of-packet delimiters. Frames are checked by the FORMAC 102 for destination address, and the DPC 104 is notified when a match does not occur.
A buffer memory 106 formed of random access memory (RAM) temporarily stores packets of data to be transferred between the system memory 108 of the host or node processor 110 and the network. A RAM buffer controller, or RBC, 112 implemented by an AM79C81, generates addresses to the buffer memory 106 for received and transmitted packets and carries out buffer management; the only interface of the RBC 112 with the host 110 is through DMA request channels which allow the host to access the buffer memory. Interface logic 114 (sometimes termed "glue logic"), which resides between the host 110 and buffer memory 106, must be particularized to the characteristics of the system bus. Accordingly, if the chip set is to be applied to a network having a different bus latency or other characteristic, the interface 114 must be modified.
As in other bus master architectures, in addition to the requirement of customizing the interface logic 114, the size of the buffer (FIFO) 106 required in the FDDI network to compensate for bandwidth, speed and latency mismatches between the system bus and the network depends on several parameters: the bandwidth of the user bus (i.e., its clock rate, allowed bus occupancy and bus cycle time), whether the FDDI connection is full- or half-duplex, the average latency of the user bus, the number of accesses per burst, and the size of fragments, i.e., partial frames, on the network. Experiments have shown that the required FIFO size varies substantially, i.e., by an order of magnitude, as a function of rather minimal changes in system parameters. As only a limited amount of FIFO can be incorporated on-chip with other components of the chip set, and the bandwidth of FDDI is high, external FIFO is required to accommodate high bus latencies. On the other hand, providing enough external FIFO to accommodate high latency applications would be burdensome in low bus latency applications, which require only a limited amount of FIFO. It would be advantageous to provide some means enabling the amount of FIFO to be selected as a component based on the expected bus latency of a desired application. In this manner, it would be possible to implement the required amount of FIFO directly on-chip with other components of a bus master chip set in low bus latency applications, and in high latency applications to add only the amount of external FIFO required to avoid loss of packets.