Link aggregation is a data networking term that refers to using a plurality of Ethernet network cables/ports in parallel in order to increase link speed beyond the capability of any single cable/port. Alternative terms for link aggregation include Ethernet trunking, Network Interface Card (NIC) teaming, port trunking, port teaming, and NIC bonding. Link aggregation groups (LAGs) are based on the IEEE 802.3ad standard. The various IEEE 802.3 standards are incorporated by reference herein.
LAGs provide a relatively inexpensive mechanism by which a high-speed backbone network is set up, the network transferring more data than a single port or device is capable of utilizing. This allows a plurality of devices to communicate simultaneously at their full single-port speed, while prohibiting any single device from monopolizing all available backbone capacity. Additionally, LAGs allow the backbone network to incrementally grow as network demand increases, without having to replace equipment.
LAGs allow for the grouping of a plurality of logical or physical links into a single logical link, thereby providing improved bandwidth flexibility and resource allocation. The IEEE 802.3ad standard allows for the aggregation of multiple Ethernet ports, such as 1 Gbps or 10 Gbps Ethernet ports, up to a maximum aggregated 80 Gbps bandwidth. For example, a plurality of GigE interfaces may be bundled together to present a single logical interface that controls the resources of all of the constituent links (e.g. two GigE interfaces may be bundled together to present a single 2 Gbps logical interface that controls the resources of the two GigE interfaces). Fiber optic backbones typically operate at 10 Gbps line rates. Although 10 GigE networking interfaces exist, they are relatively expensive and are typically used for specialized tasks. LAGs are useful for grouping single GigE networking interfaces for transmission on a 10 Gbps backbone.
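The bundling described above may be modeled in a minimal sketch (the class and attribute names below are hypothetical and for illustration only; the standard does not prescribe such a data structure):

```python
class LinkAggregationGroup:
    """Hypothetical model of a LAG: a set of constituent links presented
    as one logical interface whose capacity is the sum of its members."""

    def __init__(self, link_speeds_gbps):
        self.link_speeds_gbps = list(link_speeds_gbps)

    @property
    def bandwidth_gbps(self):
        # The logical interface controls the resources of all constituent links.
        return sum(self.link_speeds_gbps)

# Two GigE interfaces bundled into a single 2 Gbps logical interface:
lag = LinkAggregationGroup([1, 1])
print(lag.bandwidth_gbps)  # -> 2
```

The same model also illustrates the aggregation ceiling noted above: eight 10 Gbps links yield an 80 Gbps logical interface.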
Typically, LAGs have been used in connectionless, best-effort environments in which there is no standard relationship between one packet or frame and the next packet or frame (e.g. there is no ordering relationship between one packet or frame and the next packet or frame), nor is there any need to reserve resources for said data. For example, in standard Ethernet environments, there is no data ordering guarantee from one frame to the next frame, nor is there any data delivery guarantee. It is assumed that higher layers have the ability to handle both the reordering of data into an ordered stream and packet or frame loss. This is not an issue in a traditional data network as there is no frame ordering requirement. For example, a computer operating a browser that is capable of viewing hyper-text markup language (HTML) documents reassembles frames in the correct order. A user may experience a delay associated with this process, but it is not critical as this is viewed as a best-effort environment.
In general, attempts are made to preserve data ordering via sub-flow identification. Data associated with a given sub-flow is transmitted across the same constituent link in order to guarantee data ordering. Data is received in order because it is transmitted on the same constituent link and each frame is received in the order it was transmitted. However, there is no standard mechanism for identifying sub-flows, with sub-flow identification left to individual implementations.
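One common (implementation-specific, not standardized) way to identify a sub-flow is to hash selected frame header fields and take the result modulo the number of constituent links; because the hash is deterministic, every frame of the sub-flow selects the same link. A minimal sketch, assuming the sub-flow is keyed on the source/destination MAC pair:

```python
import zlib

def select_link(src_mac: str, dst_mac: str, num_links: int) -> int:
    """Map a sub-flow (keyed here on the MAC pair) to one constituent link.

    Hashing the same header fields always yields the same link index, so
    all frames of the sub-flow traverse one link and arrive in order.
    """
    key = f"{src_mac}-{dst_mac}".encode()
    return zlib.crc32(key) % num_links

# Every frame of this sub-flow maps to the same constituent link:
link = select_link("00:11:22:33:44:55", "66:77:88:99:aa:bb", 4)
assert link == select_link("00:11:22:33:44:55", "66:77:88:99:aa:bb", 4)
assert 0 <= link < 4
```

Other implementations key the hash on IP addresses or layer 4 ports; the principle of deterministic per-flow link selection is the same.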
IEEE 802.3ad defines a Marker protocol to guarantee data ordering. This Marker protocol involves a sending side notifying a receiving side that it is about to move a sub-flow from one link to another. However, it does not specify flow identification or resource reservation for a given flow. Resource reservation allows physical resources to be reserved in order to provide connection-oriented functions, such as rate enforcement and quality of service guarantees.
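The sender-side handshake can be sketched as follows (a simplified illustration, not the standard's PDU formats; `MarkerSender`, `start_move`, and the link labels are hypothetical names). Frames that arrive while the move is pending are buffered until the Marker Response confirms the old link has drained:

```python
from collections import deque

class MarkerSender:
    """Move one sub-flow from an 'old' link to a 'new' link without
    reordering: a marker is the last PDU sent on the old link, and the
    flow switches only after the receiver acknowledges the marker."""

    def __init__(self, flow_id):
        self.flow_id = flow_id
        self.link = "old"
        self.moving = False
        self.buffer = deque()

    def start_move(self, send):
        # Notify the receiving side: no more frames of this flow will
        # follow the marker on the old link.
        self.moving = True
        send("old", ("MARKER", self.flow_id))

    def transmit(self, frame, send):
        if self.moving:
            self.buffer.append(frame)  # hold frames until the move completes
        else:
            send(self.link, frame)

    def on_marker_response(self, send):
        # Marker Response received: the old link is drained for this flow,
        # so it is safe to switch and flush the buffered frames in order.
        self.link = "new"
        self.moving = False
        while self.buffer:
            send(self.link, self.buffer.popleft())

sent = []
s = MarkerSender(flow_id=7)
s.transmit("frame-1", lambda l, f: sent.append((l, f)))
s.start_move(lambda l, f: sent.append((l, f)))
s.transmit("frame-2", lambda l, f: sent.append((l, f)))   # buffered
s.on_marker_response(lambda l, f: sent.append((l, f)))    # flushed on new link
# sent == [("old", "frame-1"), ("old", ("MARKER", 7)), ("new", "frame-2")]
```

Note that, as stated above, the standard's Marker protocol addresses only the move itself; which frames constitute the sub-flow, and what resources are reserved for it, remain outside its scope.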
With the advent of sequence-sensitive, connection-oriented traffic, such as Pseudowire Emulation Edge-to-Edge (PWE3) traffic, and its delivery across LAGs, data ordering and delivery guarantees become much more important, as these are properties of layer 2 connection-oriented services. PWE3 is an emulation over Ethernet of native services, such as Asynchronous Transfer Mode (ATM), Frame Relay (FR), Time Division Multiplexed (TDM), and Synchronous Optical Network/Synchronous Digital Hierarchy (SONET/SDH) services. Due to the characteristics of these services, out-of-order packets or frames are not tolerated by higher layers and call admission control during call setup is used to reserve resources in order to guarantee quality of service parameters.
Thus, what is needed is an improved mechanism by which sequence-sensitive, connection-oriented data is allocated resources, delivered, and received across LAGs.