Communication networks are widely used today; the variety of networks includes the Internet, wide-area networks (WANs), local-area networks (LANs), telephony networks, and wireless networks. The importance of network monitoring and testing is growing, as are the requirements for the related methods and equipment.
Monitoring devices may be implemented within the network for monitoring communication over the network. Such monitoring devices are referred to as “eavesdropping devices” or “passive probes”; they are generally not a party to the communication but instead monitor it, e.g. for performance monitoring, testing, or other purposes. The elements that constitute the network may also act as eavesdropping devices, because they may take traffic traveling through the device and replicate it on another egress port for use by monitoring or testing devices.
A test device for analyzing traffic packets may be attached directly to a monitor port or passive network tap at a switch or element.
Conventionally, a device in a network requires an IP address for communication with it over an IP-routed network. A device without an IP address can be communicated with only on the local subnet, using MAC-level protocols. Some devices, such as intelligent network taps, passively tap a network to provide access to its packets; for management they therefore require an IP address and often a separate management network connection. Assigning IP addresses to large numbers of devices, and maintaining separate management networks, has disadvantages in cost and scalability. To minimize the total number of IP addresses required on a network, certain devices such as test devices may not be assigned a unique IP address. For communication with unaddressed test devices, a control device may rely on information about the network, its configuration, and its traffic flows. However, such communication may be complicated by dynamic load balancing within link aggregation groups (LAGs) in the network.
Link aggregation is a computer networking term describing various methods of combining (aggregating) multiple network connections in parallel to increase throughput beyond what a single connection could sustain, and to provide redundancy in case one of the links fails. Combining can occur either such that multiple interfaces share one logical address (i.e., an IP address) or one physical address (i.e., a MAC address), or such that each interface has its own address. The former requires that both ends of a link use the same aggregation method, but has performance advantages over the latter. By the mid-1990s, most network switch manufacturers had included aggregation capability as a proprietary extension to increase bandwidth between their switches, but each manufacturer developed its own method, which led to compatibility problems. The IEEE 802.3 group formed a study group to create an interoperable link-layer standard in November 1997. The group quickly agreed to include an automatic configuration feature, which would add redundancy as well. This became known as the “Link Aggregation Control Protocol”.
As of 2000, most gigabit channel-bonding schemes use the IEEE Link Aggregation standard, formerly clause 43 of the IEEE 802.3 standard, added in March 2000 by the IEEE 802.3ad task force. Nearly every network equipment manufacturer quickly adopted this joint standard over its proprietary standards.
David Law noted in 2006 that certain 802.1 layers (such as 802.1X security) were positioned in the protocol stack above Link Aggregation, which was defined as an 802.3 sublayer. This discrepancy was resolved with the formal transfer of the protocol to the 802.1 group with the publication of IEEE 802.1AX-2008 on 3 November 2008.
Within the IEEE specification the Link Aggregation Control Protocol (LACP) provides a method to control the bundling of several physical ports together to form a single logical channel. LACP allows a network device to negotiate an automatic bundling of links by sending LACP packets to the peer (directly connected device that also implements LACP).
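The bundling condition LACP negotiates can be illustrated with a simplified model. This is not the on-wire LACPDU format; the class names, fields, and the aggregation rule below are illustrative assumptions capturing only the basic idea that ports may form one logical channel when they all see the same partner system with the same operational key:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LacpInfo:
    """Identity a port learns from its peer's LACP packets (simplified)."""
    system_id: str   # peer system identifier (MAC-like string in practice)
    key: int         # operational key; matching keys may aggregate

def can_aggregate(local_ports, partner_info):
    """Ports may join a single bundle only if every port sees the same
    partner system and key; otherwise they remain individual links.
    partner_info maps local port name -> LacpInfo learned from the peer."""
    seen = {partner_info[p] for p in local_ports}
    return len(seen) == 1

# Two ports both connected to peer system "B" with key 7 can bundle;
# a port whose peer is system "C" cannot join that bundle.
partners = {
    "eth0": LacpInfo("B", 7),
    "eth1": LacpInfo("B", 7),
    "eth2": LacpInfo("C", 7),
}
print(can_aggregate(["eth0", "eth1"], partners))  # True
print(can_aggregate(["eth0", "eth2"], partners))  # False
```

In the real protocol this agreement emerges from the periodic exchange of LACP packets between the directly connected peers, rather than from a shared table as in this sketch.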
Client load rebalancing allows clients to optimize throughput between themselves and the resources accessed by the nodes. A network can dynamically rebalance itself to optimize throughput by migrating client I/O requests from overutilized pathways to underutilized pathways.
Client load rebalancing refers to the ability of a client, enabled with appropriate processes, to remap a path through a plurality of nodes to a resource. The remapping may take place in response to a redirection command emanating from an overloaded node, e.g. a server.
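The remapping behavior described above can be sketched as follows. The `Client` class, its path representation, and the redirect handling are hypothetical illustrations, assuming only that a client holds several candidate paths and switches away from a path containing an overloaded node when redirected:

```python
class Client:
    """Illustrative client that remaps its path to a resource when an
    overloaded node issues a redirection command."""

    def __init__(self, paths):
        self.paths = paths        # candidate node paths to the resource
        self.current = paths[0]   # start I/O on the first path

    def handle_redirect(self, overloaded_node):
        """Remap I/O to a path avoiding the overloaded node, if any exists."""
        for path in self.paths:
            if overloaded_node not in path:
                self.current = path
                return True
        return False              # no alternative path available

client = Client([("nodeA", "server1"), ("nodeB", "server2")])
client.handle_redirect("server1")  # server1 reports overload
print(client.current)              # ('nodeB', 'server2')
```

The effect is that I/O requests migrate off the overloaded pathway without any change on the server side beyond issuing the redirection command.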
The network may include LAG devices from a variety of vendors. In addition, different customers may configure their LAGs differently. By way of example, various parameters such as source/destination IP addresses or virtual local area network (VLAN) IDs may be used as hash keys for load balancing. Additionally, parts of the network may perform load rebalancing, which may further complicate communication between the central control device and the test devices. It would therefore be useful to provide a method of restoring communication with an unaddressed device in a network.