Today's network links carry vast amounts of information. High-bandwidth applications supported by these links include, for example, streaming video, streaming audio, and large aggregations of voice traffic, and network bandwidth demands will only increase in the future. Applications such as streaming audio and streaming video can generate a large amount of network traffic because a single transmission may be sent to many subscribers. To transport such large amounts of data, network routing and switching devices must be able to accept the data on a physical interface port and internally communicate it from a line card coupled to the network ports to a switching matrix.
In a data communication network, network routing and switching devices receive messages at one of a set of input interfaces and forward those messages on to one or more of a set of output interfaces. Users typically require that such routing and switching devices operate as quickly as possible in order to keep pace with a high rate of incoming messages. In a packet-routing network, where information messages are transmitted in discrete packets of data, each packet includes a header. A routing or switching device uses the header information for routing the packet to an output interface for subsequent forwarding to a destination device. A routing device can forward a packet to another routing device for further processing or forwarding.
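The header-driven forwarding described above can be sketched as a simple table lookup. This is an illustrative sketch only; the function and table names are hypothetical and not part of the device described, and a real router would perform longest-prefix matching rather than an exact-match lookup:

```python
def select_output_interface(packet_header: dict, forwarding_table: dict,
                            default_iface: int = 0) -> int:
    """Return the output interface for the packet's destination field.

    A real router does longest-prefix matching on the destination
    address; for illustration the header carries the matching key
    directly.
    """
    return forwarding_table.get(packet_header["dst"], default_iface)

# Hypothetical forwarding table: destination key -> output interface.
table = {"10.0.0.0/8": 1, "192.168.1.0/24": 2}
print(select_output_interface({"dst": "192.168.1.0/24"}, table))  # -> 2
```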
FIG. 1 is a simplified block diagram of a generic packet routing device 100. In FIG. 1, network device 100 is a router, but much of the description below can be applied to any network device utilized in the transfer of data in a network (e.g., a switch, a bridge, or a network storage processing device). Similarly, the concepts presented below in the detailed description of the invention section can be applied to any network data transfer device.
Network device 100 includes a number of line cards 105(1)-(M), each having similar circuitry and each coupled to a switch fabric 180. Herein, line card 105 refers to any of line cards 105(1)-(M), unless otherwise specified. Various hardware and software components associated with network device 100 are not shown in order to aid clarity.
In FIG. 1, line card 105 transmits and receives datastreams to and from clients (not shown) coupled to a local network 110. Incoming datastream packets from network 110 are received by ports 120(1)-(N) on line card 105. From ports 120(1)-(N), the packets are transferred to a receive port ASIC 130. The receive port ASIC 130 can transfer the packets to a receive forwarding ASIC 150 via point-to-point interfaces 140 and 145. From receive forwarding ASIC 150, packets can be transferred to receive processor module 160 and subsequently switch fabric interface 170. Such transfers can also be performed by point-to-point interfaces similar to 140 and 145. Generally, switch fabric interface 170 can convert a datastream from one format (e.g., packets) to another format (e.g., common switch interface cells). From switch fabric interface 170, converted packets are transferred to switch fabric 180. In a similar fashion, packets can be transferred from switch fabric 180 to client devices coupled to network 110 via a transmit path that includes the switch fabric interface 170, transmit processor module 165, transmit forwarding ASIC 190 and transmit port ASIC 135.
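The format conversion performed by switch fabric interface 170 (packets to common switch interface cells) amounts to segmenting a variable-length packet into fixed-size cells. The sketch below assumes a 64-byte cell payload purely for illustration; the actual cell format and size are not specified here:

```python
def packet_to_cells(packet: bytes, cell_payload: int = 64) -> list:
    """Segment a packet into fixed-size cells, zero-padding the final cell.

    The 64-byte payload size is an assumed figure for illustration.
    """
    cells = []
    for offset in range(0, len(packet), cell_payload):
        chunk = packet[offset:offset + cell_payload]
        cells.append(chunk.ljust(cell_payload, b"\x00"))
    return cells

# A 100-byte packet becomes two 64-byte cells on the fabric side.
cells = packet_to_cells(b"\xab" * 100)
print(len(cells))  # -> 2
```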
Using present network technology, ports 120(1)-(N) can receive data at rates in excess of 10 Gb/s. Since multiple ports can simultaneously supply datastreams to receive port ASIC 130, it is desirable that receive port ASIC 130, and the interface over which that ASIC transmits the datastreams, be configured to support such high rates of data transfer. If the point-to-point interface formed by interfaces 140 and 145 cannot support a transfer rate sufficient to handle the incoming data from ports 120(1)-(N), then that point-to-point interface can become a data bottleneck in the line card.
FIG. 2A is a simplified block diagram illustrating a port ASIC 210 (such as receive port ASIC 130) and a forwarding ASIC 220. Port ASIC 210 receives data packets on input interfaces 230(1)-(N), wherein each of the input interfaces can correspond to a physical port (e.g., ports 120(1)-(N)). Port ASIC 210 can in turn transmit the data packets via a point-to-point interface 240. Data packets transmitted from point-to-point interface 240 can be received by the forwarding ASIC 220 at interface 250 and then processed accordingly. An example of a point-to-point interface 240 is a System Packet Interface Level 4 Phase 2 (SPI-4.2) interface. In such a configuration as illustrated in FIG. 2A, the data-throughput bandwidth of a system including port ASIC 210 and forwarding ASIC 220 is limited by the bandwidth of interfaces 240 and 250. As an example, an SPI-4.2 interface can be 16 pairs of approximately 800 Mb/s signals resulting in 12.8 Gb/s of total bandwidth in current art.
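The SPI-4.2 aggregate-bandwidth figure above follows directly from the lane count and the per-lane rate; a quick check of that arithmetic:

```python
def aggregate_gbps(lane_pairs: int, per_lane_mbps: float) -> float:
    """Total interface bandwidth: number of signal pairs times per-pair rate."""
    return lane_pairs * per_lane_mbps / 1000.0

# 16 pairs at ~800 Mb/s each yields the 12.8 Gb/s cited above.
print(aggregate_gbps(16, 800))  # -> 12.8
```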
FIG. 2B is a simplified block diagram illustrating an alternative interface coupling between a port ASIC and a forwarding ASIC. In FIG. 2B, port ASIC 215 is coupled to forwarding ASIC 225 via a plurality of point-to-point interfaces 245(1)-(N) to interfaces 255(1)-(N), respectively. Each point-to-point interface 245(1)-(N) can correspond to a respective port interface 230(1)-(N). Therefore, each incoming port has its own channel from port ASIC 215 to forwarding ASIC 225. While the scheme illustrated in FIG. 2B provides a greater total bandwidth between port ASIC 215 and forwarding ASIC 225 than that illustrated in FIG. 2A, each individual data path can still become bandwidth limited when a single port receives a burst of high-bandwidth traffic that exceeds the bandwidth of its associated channel between port ASIC 215 and forwarding ASIC 225. Although there is a large total bandwidth between the port ASIC and the forwarding ASIC, there is no mechanism for sharing the bandwidth available on an idle channel with a channel that has become bandwidth limited.
Solutions that have traditionally been used to address the bandwidth-limitation problem in point-to-point interface connections can be difficult to implement or are of limited utility. One solution has been to drive the point-to-point interface, and therefore the ASIC, at a higher frequency (e.g., 2.4 gigahertz). Higher frequencies, however, are more difficult to implement with current ASIC technology. Another traditional solution has been to balance the load over the multiple channels using a hash function computed from traffic-flow characteristics. A hash result, however, does not necessarily provide full load balancing at any given point in time. Yet another scheme aligns two point-to-point interfaces in parallel to effectively provide a single, wider data conduit. Such a scheme can have significant skewing problems that require frequent realignment of the data through the use of alignment control words. As frequencies scale up, more alignment control words must be sent, and utilization of the available bandwidth of the parallel interfaces decreases.
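The shortcoming of hash-based load balancing noted above can be illustrated with a short simulation (the flow identifiers and traffic counts below are invented for illustration). Because every packet of a flow hashes to the same channel, a single heavy flow saturates one channel while the other channels remain largely idle:

```python
from collections import Counter

def hash_channel(flow_id: tuple, num_channels: int) -> int:
    """Map a flow (e.g., src, dst, ports) to a channel by hashing its identifiers."""
    return hash(flow_id) % num_channels

# One heavy flow sends 1000 packets; nine light flows send 10 packets each.
load = Counter()
for _ in range(1000):
    load[hash_channel((1, 2, 80, 5000), 4)] += 1
for f in range(9):
    for _ in range(10):
        load[hash_channel((3, 4, 80, 6000 + f), 4)] += 1

# The heavy flow's channel carries at least 1000 of the 1090 packets:
# per-flow hashing cannot spread a single high-bandwidth flow across channels.
print(max(load.values()))
```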
What is therefore desired is a mechanism for point-to-point communication that provides a higher usable data bandwidth, thereby avoiding a data bottleneck at the point-to-point communication interface. It is further desired that such a mechanism reduce wasted bandwidth through load balancing among all available point-to-point data paths.