In a networked computer system, a host computer, often referred to as an "end station", is connected to a network via a network adapter. The host computer and network adapter exchange control information and data via a system bus, such as a PCI bus.
In the field of network adapter design it is desirable to maintain high performance in a cost-effective manner. One of the most critical factors in adapter cost is the amount of on-board memory required on the adapter. Performance is highest when large amounts of adapter-resident memory are provided: the host can then DMA large amounts of data at a time to adapter-resident memory to await transmission, minimizing the performance-degrading effects of system bus latency. This high performance, however, is traded off against the increased cost of the adapter due to the inclusion of expensive adapter-resident memory.
The performance/cost tradeoff is particularly problematic for Asynchronous Transfer Mode (ATM) adapters. ATM is a networking technology used in a variety of telecommunications and computing environments and is designed to support users having diverse quality of service (QOS) requirements. ATM therefore supports a number of different service types, including constant bit rate (CBR), variable bit rate (VBR), available bit rate (ABR), and unspecified bit rate (UBR). ATM is a cell-based technology; that is, unlike common packet technologies such as X.25 or frame relay, which transfer variable-length packets, ATM transfers short, fixed-length units of information called cells. Accordingly, larger packets or protocol data units (PDUs) from a source are broken up into these fixed-length cells for transmission and then reassembled at their destination.
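The segmentation and reassembly just described can be sketched as follows. The 48-byte payload size corresponds to the ATM cell payload, but the `segment` and `reassemble` helpers, the zero-padding convention for the final cell, and the omission of the 5-byte cell header are illustrative assumptions, not features of any particular adapter.

```python
CELL_PAYLOAD = 48  # ATM cell payload size in bytes (5-byte header not modeled)

def segment(pdu: bytes) -> list:
    """Break a PDU into fixed-length cells, zero-padding the last cell."""
    cells = []
    for i in range(0, len(pdu), CELL_PAYLOAD):
        chunk = pdu[i:i + CELL_PAYLOAD]
        cells.append(chunk.ljust(CELL_PAYLOAD, b"\x00"))
    return cells

def reassemble(cells: list, pdu_len: int) -> bytes:
    """Concatenate received cells and strip padding to recover the PDU."""
    return b"".join(cells)[:pdu_len]
```

A 100-byte PDU thus occupies three cells, the last of which is mostly padding; the destination needs the original PDU length (carried in practice by the adaptation layer) to strip that padding.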
ATM is a connection-oriented networking technology, meaning that all communications between two stations occur via a virtual circuit (VC) initially established between them. To control the transmission of cells from an ATM adapter over multiple VCs, it is known to employ a scheduling process that scans a schedule table having multiple entries. Each entry in the table represents a given VC for which buffered data is to be transmitted at a given time, in accordance with the QOS parameters for that VC. In an ATM adapter employing adapter-resident memory to buffer the data to be transmitted, the data is read from that memory and transmitted on the network as each entry is read.
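A minimal model of such a scheduling process might look like the following. The representation of the schedule table as a list of VC identifiers (with `None` marking empty slots) and the per-VC cell queues are hypothetical simplifications of the adapter-resident structures described above.

```python
def scan_schedule(table, queues):
    """Scan the schedule table once; at each entry, transmit one buffered
    cell for the VC named by that entry, if any data is queued for it."""
    transmitted = []
    for slot, vc in enumerate(table):
        if vc is not None and queues.get(vc):
            # Reading an entry triggers transmission of that VC's next cell.
            transmitted.append((slot, vc, queues[vc].pop(0)))
    return transmitted
```

Because each VC's appearances in the table are spaced according to its QOS parameters, the scan itself paces each VC's cells onto the link.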
An end station may have several separate VCs established between itself and another end station with which it is communicating; moreover, it may be communicating with several such end stations at once. A well-designed ATM network adapter should therefore be capable of supporting transmission and reception of data over large numbers of VCs, on the order of thousands, at once. A high-performance ATM adapter containing adapter-resident memory for buffering data to be transmitted and received must then contain sufficient memory to buffer data for every active VC. The cost of such an adapter becomes prohibitive, particularly where the adapter is designed for use in desktop or workstation host computers.
The alternative to such an adapter architecture involves buffering data to be transmitted and/or received in host memory, while only a small number of cells are buffered at any one time in adapter-resident memory. This greatly decreases the amount of memory required on the adapter and thereby substantially reduces its cost. System bus latency, however, now becomes a major factor in the maximum transmission delay: for each entry scanned in the schedule table, data must be transferred from the host to the adapter via the system bus, with all its attendant delays. When variable system bus latency must be factored into the maximum transmission delay, the data associated with a schedule table entry may not be available for transmission until well after that entry has been scanned, resulting in potential performance problems. Furthermore, although VCs are entered into the schedule table at locations chosen in accordance with their QOS parameters, cell data is DMA'd from the host into the adapter-resident transmit buffer such that cells reside back-to-back in the buffer. Thus, even if a VC is scheduled at every other location in the schedule table, so that it should use 50% of the available network link bandwidth, once its cells are DMA'd into the transmit buffer they are transmitted back-to-back, utilizing 100% of the link bandwidth for a period of time. It is therefore desirable to provide a high-performance, low-cost network adapter with a small amount of adapter-resident memory and a means for assuring that cells are transmitted from that memory onto the link in accordance with their intended QOS parameters.
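The bandwidth distortion described above can be illustrated with a small simulation. The FIFO drain and the schedule-paced drain below are hypothetical models of the transmit buffer, not a description of any particular adapter; `None` again marks unscheduled link slots.

```python
def drain_fifo(fifo, slots):
    """Back-to-back drain: one cell leaves per link slot for as long
    as the transmit buffer is non-empty, regardless of schedule."""
    out = []
    for slot in range(slots):
        if fifo:
            out.append((slot, fifo.pop(0)))
    return out

def drain_scheduled(fifo, table):
    """Paced drain: a cell leaves only in slots where the schedule
    table names the VC at the head of the buffer."""
    out = []
    for slot, vc in enumerate(table):
        if fifo and fifo[0] == vc:
            out.append((slot, fifo.pop(0)))
    return out
```

For four buffered cells of a VC scheduled at every other slot, the FIFO drain occupies four consecutive link slots (100% of the link for that interval), while the paced drain occupies every other slot, matching the VC's intended 50% share.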