In general, the functions of a packet-forwarding device fall into at least two types: data path functions and control functions. Data path functions include operations that are performed on every datagram that passes through the packet-forwarding device, such as a router, where a datagram is an independent, self-contained message sent over the network whose arrival, delivery time, and content are not guaranteed. During the typical path of a packet through an IP router or network switch, the data path functions include the forwarding decision, transit across the backplane, and output communication channel scheduling.
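The forwarding decision mentioned above is, at its core, a longest-prefix match of the destination address against the routing table. The sketch below illustrates that decision in Python; the routing table entries and interface names are hypothetical, chosen only for illustration.

```python
import ipaddress

# Hypothetical routing table mapping prefixes to outgoing interfaces.
# Entries and interface names are illustrative, not from any real device.
ROUTES = {
    ipaddress.ip_network("10.0.0.0/8"): "eth0",
    ipaddress.ip_network("10.1.0.0/16"): "eth1",
    ipaddress.ip_network("0.0.0.0/0"): "eth2",   # default route
}

def forward(dst: str) -> str:
    """Pick the outgoing interface by longest-prefix match,
    the central data path decision performed per datagram."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in ROUTES if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)  # most specific wins
    return ROUTES[best]
```

A destination of 10.1.2.3 matches both 10.0.0.0/8 and 10.1.0.0/16, and the more specific /16 entry wins; hardware routers perform the same logical lookup in TCAM or trie structures at line rate.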
In contrast, control functions typically include operations that are performed infrequently relative to the data path functions. As a result, many control functions are implemented in software and firmware. Exemplary control functions include the exchange of routing table information internally and with neighboring routers, as well as delivering quality of service information or other system configuration and management information. The occasional control function received from an external device, such as a remote terminal or server, adds to the coordination complexity, as control functions received on the data plane must be converted for transmission across the control plane.
Because of the irregular nature of many control functions, there is a tremendous difference in the time constraints associated with various control functions. In fact, the speed requirements of many control functions vary by several orders of magnitude. For example, the exchange of updated routing table information within the packet-forwarding device may occur at megahertz (MHz) or gigahertz (GHz) frequencies, while monitoring the operational parameters of the fans within the packet-forwarding device need occur only at kilohertz (kHz) frequencies. These irregularities create overhead that drains valuable resources from the processor unit.
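The scale of that mismatch can be made concrete with a small sketch. The task names and rates below are assumptions chosen to mirror the example in the text, not measurements from any real device; the point is only how many times each periodic task fires within the same one-second window.

```python
# Assumed task rates, for illustration only: a MHz-rate table-update task
# alongside a kHz-rate housekeeping task, as in the example above.
TASK_RATE_HZ = {
    "routing_table_update": 1_000_000,  # ~MHz-scale activity
    "fan_monitor": 1_000,               # ~kHz-scale activity
}

def firings_per_window(window_s: int = 1) -> dict:
    """Count how often each periodic task runs in a window of window_s seconds."""
    return {name: rate * window_s for name, rate in TASK_RATE_HZ.items()}
```

Over one second the table-update task fires a thousand times for every firing of the fan monitor, i.e., three orders of magnitude more often; a scheduler that services both over one shared channel pays coordination overhead for every one of those firings.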
Presently, most routers use shared buses or shared-memory backplanes for data path and control functions. Unfortunately, these shared buses, which divide a single communication channel among multiple functions, easily become congested under modern switching demands, especially if the bus bandwidth does not match the aggregate data rate of the ports and processor unit Input/Output (I/O), thus limiting the performance of the system. In the past, the computer industry has simply developed a faster shared bus as the need arose; thus the shared bus has evolved from Industry Standard Architecture (ISA) to Extended Industry Standard Architecture (EISA) to the modern Peripheral Component Interconnect (PCI).
Unfortunately, continuing this pattern of development with regard to shared backplanes is impractical for several reasons. One reason is that a shared bus reduces the overall reliability of the packet-forwarding device. Because all control functions must pass across the shared bus, it becomes a single point of failure that can shut down the entire packet-forwarding device. Even worse, a failing shared bus may introduce erratic, undetectable errors that corrupt data as it passes through the packet-forwarding device.
Another reason is the poor scalability of shared bus architectures. The scalability, or transfer capacity, of a shared bus is limited by several factors, including electrical loading, the number of connectors that a signal encounters, and reflections from the ends of unterminated lines. In addition, scalability of the shared bus is often limited by congestion on the bus itself. Specifically, the bandwidth of the bus is shared among all the attached devices, so any contention between attached devices leads to additional delay for control information being sent across the shared control bus. If the rate of control information exceeds the bus bandwidth for a sustained period, buffers overflow and data is lost.
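The overflow failure mode described above can be sketched with a simple discrete-time simulation. All rates and sizes below are hypothetical, and a real backplane arbitrates among devices individually, but the aggregate behavior is the same: when sustained arrivals exceed the bus bandwidth, the buffer fills and further data is dropped.

```python
from collections import deque

def simulate_shared_bus(arrivals_per_tick: int,
                        bus_capacity_per_tick: int,
                        buffer_size: int,
                        ticks: int) -> int:
    """Count units of data dropped when the sustained arrival rate
    exceeds the shared bus bandwidth (all parameters illustrative)."""
    buf = deque()
    dropped = 0
    for _ in range(ticks):
        # Attached devices offer traffic to the shared bus.
        for _ in range(arrivals_per_tick):
            if len(buf) < buffer_size:
                buf.append(1)
            else:
                dropped += 1            # buffer overflow: data is lost
        # The bus drains at its fixed bandwidth, regardless of demand.
        for _ in range(min(bus_capacity_per_tick, len(buf))):
            buf.popleft()
    return dropped
```

With arrivals matched to bus capacity, nothing is dropped; raise the sustained arrival rate even slightly above capacity and, once the buffer fills, every surplus unit per tick is lost, exactly the congestion failure that motivates moving away from a shared backplane.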