1. Field of the Invention
The present invention relates to a high performance data transfer bus used to interconnect subsystems, and more particularly to a bus for use within network bridges, routers, switches, computer workstations, or personal computers.
2. Description of Related Art
Most computer-based systems include busses for transferring digital data. Such busses may be used for transfers between either closely or loosely coupled components. One example of a bus used to transfer digital data between closely coupled components is the bus used by some microprocessors to access instruction and data memory. This type of bus is commonly referred to as a processor bus. Conventional busses, such as VME, EISA, and Multibus II, are examples of busses used to transfer digital data between more loosely coupled components within a computer, such as disk controllers and local area network interfaces. Proprietary busses are also frequently used to transfer data between loosely coupled components, especially in networking devices.
In most cases, the devices connected to a bus perform dedicated functions and are somewhat autonomous from other devices, though functions performed by these devices usually involve the exchange of data with other devices on the bus. FIG. 1 shows two simple workstations 100a, 100b, each containing a processor device 110, local area network interface 120 and disk controller 130 connected with a bus 140. The two workstations are connected by the local area network cabling 150. An example of an exchange of data from workstation 100a to workstation 100b includes the following steps: (1) creating a data file using a first processor 110a; (2) transferring the data file from the first processor to a first LAN interface 120a over bus 140a; (3) preparing and transmitting the data file over a local area network 150; (4) receiving the data file in a second LAN interface 120b; (5) transferring the data file over the bus 140b to a second processor 110b; (6) reconstructing and processing the data file using the second processor 110b; (7) transferring the processed data file from the second processor 110b over the bus 140b to a disk controller 130b; and (8) storing the file.
In many systems, the transfer of data from the second LAN interface 120b to the second processor 110b is concurrent with the transfer of data from the second processor 110b to the disk controller 130b. However, since the bus 140b can accommodate only one transfer at a time, such transfers cannot occur simultaneously. That is, the second LAN interface 120b may transfer a portion of the file to the second processor 110b, then the second processor 110b transfers processed data to the disk controller 130b while the second LAN interface 120b is waiting for additional data to be received from the first workstation 100a.
It is likely that at times a transfer will be occurring between the second processor 110b and the disk controller 130b at a time when the second LAN interface 120b is ready to make a transfer to the second processor 110b. In that case, the second LAN interface 120b requires access to the bus 140b while the transfer between the second processor 110b and the disk controller 130b is taking place. Because of the possibility of such contention, an arbitration method is provided to determine when a device is allowed to transfer data over the bus 140. FIG. 2 shows the components of a conventional bus connected device, such as a LAN interface 120. The bus 140 can be divided into two portions, an arbitration bus 240 and a data bus 250. The bus arbitration method is supported by bus interface logic 200 within the LAN interface 120. A transfer out from the LAN interface 120 starts when device logic 230 places data into one or more data buffers 220 within the LAN interface 120, and then signals the bus interface logic 200 that a transmit operation is required. The bus interface logic 200 then requests use of the bus 140 over the arbitration bus 240. When permission to transmit is received, the bus interface logic 200 reads the data from the buffers 220, and places the information on the data bus 250 through high current bus transceivers 210. Bus transceivers 210 with high current drive capability are required because of the electrical characteristics of most busses. The drivers must be provided in the bus transceivers 210, rather than being integrated into the VLSI bus interface logic 200, because of the current drive limitations of VLSI implementations of the bus interface logic 200. That is, the VLSI chips on which the bus interface logic 200 is typically fabricated do not have the ability to supply the current required to drive the bus 140.
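The request/grant sequence described above can be sketched as a minimal software model. The class and method names below (BusArbiter, Device, produce, transmit) are illustrative assumptions, not terms from this description; the sketch only shows the handshake: device logic fills a buffer, the bus interface requests the bus, and on grant the buffer is drained onto the data bus.

```python
from collections import deque

class BusArbiter:
    """Grants the shared bus to one requesting device per cycle (FIFO order)."""
    def __init__(self):
        self.requests = deque()

    def request(self, device):
        self.requests.append(device)

    def run_cycle(self):
        # Only one transfer may occur on the bus per cycle.
        if self.requests:
            return self.requests.popleft().transmit()
        return None

class Device:
    """Device logic fills a data buffer, then signals its bus interface."""
    def __init__(self, name, arbiter):
        self.name = name
        self.buffer = []
        self.arbiter = arbiter

    def produce(self, data):
        self.buffer.append(data)    # device logic places data in the buffer
        self.arbiter.request(self)  # bus interface requests use of the bus

    def transmit(self):
        # On grant, the bus interface drains the buffer onto the data bus.
        burst, self.buffer = self.buffer, []
        return (self.name, burst)

arb = BusArbiter()
lan = Device("LAN", arb)
cpu = Device("CPU", arb)
lan.produce("pkt0")
cpu.produce("blk0")
print(arb.run_cycle())  # LAN's burst is granted first
print(arb.run_cycle())  # then the CPU's
```

The FIFO grant order here stands in for whatever priority scheme a real arbiter would use; the point is only that transfers serialize through a single grant per cycle.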
It is important that data be transferred over the bus efficiently. To maximize system performance, the transfer rate on the bus should be faster than the rate at which any one device on the bus can produce or consume data. Accordingly, the relative speed of the bus 140 with respect to the devices on the bus 140, and the maximum size of a transfer over the bus 140, require that the devices on the bus 140 include data buffers 220. This permits a transmitting bus device 120 to accumulate data in the data buffers 220 and burst the data across the bus 140 at the highest available bus rate. Likewise, a receiving bus device can receive the burst of data at the highest available bus rate in the data buffers 220 and access the data from the data buffers 220 at the slower device speed. Usually, a bus arbitration procedure includes having the destination device allocate the receive data buffer 220 needed for the transfer.
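The benefit of accumulating data and bursting it can be illustrated with simple arithmetic. The rates and block size below are invented for the example (not from this description): a device that fills its buffer at its own slower rate and then bursts at the full bus rate holds the bus for only a fraction of the time.

```python
def bus_occupancy(block_bytes, device_rate, bus_rate):
    """Fraction of time the bus is held when a device accumulates a block
    at its own (slow) rate and then bursts it at the full bus rate."""
    produce_time = block_bytes / device_rate  # time to fill the buffer
    burst_time = block_bytes / bus_rate       # time to empty it over the bus
    return burst_time / produce_time

# Hypothetical rates: 10 MB/s device, 100 MB/s bus, 4 KB block.
occ = bus_occupancy(4096, 10e6, 100e6)
print(f"bus held {occ:.0%} of the time")  # the remainder is free for other devices
```

With these assumed numbers the bus is occupied only 10% of the time, leaving the rest for other devices to interleave their own bursts.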
Systems that use a bus to transfer data between devices have been improved over time, resulting in faster data transfer capability. At some point, it becomes advantageous to increase the speed of the bus. Therefore, it is important to provide a degree of scalability within the bus (capability to adapt to different speeds), while keeping the interface 200 between the bus and the device 230 connected to the bus simple and easy to implement.
However, several problems are encountered with conventional busses. For example, the speed of a bus is limited by a number of factors, the most fundamental of which is the electrical drive requirements of the bus (i.e., the amount of current required to drive the bus high or pull the bus low in a fixed amount of time). A device that is transmitting information over a bus must have sufficient drive capability to drive the load presented by all of the receivers on the bus. Conventional busses that allow a relatively large number of devices to be interconnected over the bus require use of either high current drivers on each bus interface logic chip, or external transceivers 210 (as shown in FIG. 2), capable of driving the bus. Using high current drivers on a VLSI chip increases the cost and complexity and reduces the reliability of the VLSI chip. External transceivers increase both the cost of the system and the area (i.e., real estate on a printed circuit board, etc.) required to implement a device.
Also, the turn-off and turn-on time of the drivers must be properly accounted for to ensure that damage does not occur to a bus driver due to more than one driver being turned on at the same time. For example, if one driver is attempting to pull the bus down while another driver is attempting to drive the bus high, the current through the drivers may cause damage to one or both of the drivers. Accordingly, when a first device stops transmitting and a second device starts transmitting, time must be provided for the first device to turn off its drivers before the second device turns on its drivers. Otherwise, the reliability of the system is affected. The time required to ensure that such overlap does not occur does not scale with the speed of the bus. Thus, bus performance does not scale in proportion to the bus clock rate. Accordingly, driver turn-on and turn-off time is a significant barrier to higher performance in conventional busses. Furthermore, at each clock speed, the performance is also affected by chip delay, the number of devices on the bus, and similar factors.
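Why a fixed turnaround gap prevents performance from scaling can be shown with a back-of-the-envelope model. All of the numbers below (burst length, turnaround time, clock rates) are hypothetical: raising the clock shrinks the transfer time, but the driver turn-off/turn-on gap stays fixed, so it consumes a growing share of every bus tenure.

```python
def bus_efficiency(words_per_burst, clock_hz, turnaround_s):
    """Fraction of time spent moving data when each burst of one-clock word
    transfers is followed by a fixed driver-turnaround gap that does not
    scale with the clock."""
    transfer_s = words_per_burst / clock_hz
    return transfer_s / (transfer_s + turnaround_s)

# Same 16-word burst and same 50 ns turnaround gap, at two clock speeds.
for clk in (25e6, 100e6):
    print(f"{clk / 1e6:.0f} MHz: {bus_efficiency(16, clk, 50e-9):.1%} efficient")
```

With these assumed figures, quadrupling the clock drops efficiency from roughly 93% to roughly 76%: the dead time is unchanged while each burst gets shorter, which is the scaling barrier described above.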
Furthermore, conventional busses typically rely upon buffer management performed in software. Software-implemented buffer management is very flexible, but it limits device and system performance. For example, allocating a buffer to receive data in accordance with a conventional software buffer management scheme requires a query and possibly a response over the bus. Conventional techniques also require polling a semaphore bit or communicating with a centralized buffer manager to perform buffer allocation. This overhead slows the maximum transfer rate of a device and adds significant complexity.
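The cost of semaphore polling can be sketched with a toy model. This is not the protocol of any particular conventional bus; it simply assumes, for illustration, that each poll of the semaphore bit consumes one bus transaction and succeeds with some probability, so allocation overhead accumulates before a single payload byte moves.

```python
import random

def software_alloc_cycles(poll_success_prob, rng):
    """Bus transactions consumed allocating one receive buffer by polling a
    semaphore bit (hypothetical model: one poll = one bus transaction,
    plus one final transaction to claim the buffer)."""
    cycles = 0
    while True:
        cycles += 1  # one poll over the bus
        if rng.random() < poll_success_prob:
            return cycles + 1  # one more transaction to claim the buffer

rng = random.Random(0)
samples = [software_alloc_cycles(0.25, rng) for _ in range(10_000)]
print(f"avg overhead: {sum(samples) / len(samples):.1f} bus transactions per transfer")
```

Under these assumptions the expected overhead is about five bus transactions per transfer before any data moves, which is the kind of per-transfer cost that hardware buffer allocation folded into arbitration avoids.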
A further problem with conventional busses is that access latency (i.e., the amount of time a device may have to wait before being granted the bus) may become excessive. For example, in conventional busses, one way of locking out other devices from using a buffer is to transfer all the data that will be stored in the buffer without giving up the bus to another device. This creates high access latency for other devices. In many systems, it will also add to the latency of the end-to-end transfer of data through the system. This is especially true with devices that can begin processing received data before completion of the transfer.
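The latency cost of holding the bus for an entire buffer can be put in concrete numbers. The buffer size, burst size, and bus rate below are assumptions made for the example: a waiting device sees the whole transfer time as access latency under the lock-out scheme, whereas interleaving fixed-size bursts bounds the wait to one burst.

```python
def worst_wait_hogging(buffer_bytes, bus_rate):
    # Other devices must wait for the entire buffer transfer to complete.
    return buffer_bytes / bus_rate

def worst_wait_interleaved(burst_bytes, bus_rate):
    # Other devices wait at most one burst before winning arbitration.
    return burst_bytes / bus_rate

# Hypothetical: 64 KB buffer vs. 256-byte bursts on a 100 MB/s bus.
print(f"hogging:     {worst_wait_hogging(64_000, 100e6) * 1e6:.0f} us worst-case wait")
print(f"interleaved: {worst_wait_interleaved(256, 100e6) * 1e6:.2f} us worst-case wait")
```

With these assumed figures the worst-case access latency drops from hundreds of microseconds to a few, which also shortens end-to-end latency for devices that begin processing received data before the transfer completes.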
Therefore, it would be advantageous to provide a high performance data transfer bus capable of interconnecting up to hundreds of devices without the need for high current drivers at every interface. It would also be advantageous to provide a technique to minimize bus access time and simplify lock-out of other devices from a buffer at a receiver. Still further, it would be advantageous to remove driver turn-on and turn-off time as a significant barrier to high performance on a data transfer bus. Still further, it would be advantageous to provide a high performance data transfer bus which includes buffer management that is simple and easy to implement in hardware.