The ready ability of a business to store, process, and transmit data is a facet of operations that a business relies upon to conduct its day-to-day activities. For businesses that increasingly depend upon data for their operations, an inability to store, process, or transmit data can hurt a business' reputation and bottom line. Businesses are therefore taking measures to improve their ability to store, process, and transmit data, and to share more efficiently the resources that enable these operations.
The ever-increasing reliance on data and the computing systems that produce, process, distribute, and maintain data in its myriad forms continues to put great demands on techniques for data communication. As a business' data network grows, the business' reliance upon the functionality of that data network grows as well. Growth in a data network occurs as the business grows and as computing needs increase.
As a data network increases in size, both the amount and the types of traffic supported by the network increase. Different computing systems can use different kinds of network protocols to communicate with one another, and certain protocols are better suited to particular types of network interactions. As a data network increases in size, the network can be subdivided into smaller, more easily manageable sub-networks that communicate with one another via network communication nodes such as bridges, routers, network access servers, and network concentrators. A network's capacity to handle the amount of traffic it is expected to support is generally carefully monitored, because exceeding a network's bandwidth can slow the systems supported by the network and hinder a business' ability to conduct its operations.
Network protocols have been developed that use a series of requests and responses during the process of establishing a network connection between two or more nodes on the network. During this request-response phase of establishing a connection, configuration data can be exchanged between the network nodes, along with responses indicating whether a proposed configuration is acceptable to a network node. Protocols that exchange requests and responses also associate a timer with a particular request packet, so that, should a response not be received within a set time (e.g., due to network congestion or a lost packet), a new request packet will be sent to avoid the system stalling in the process of establishing a connection. Examples of network protocols that incorporate a request-response configuration phase are the Point-to-Point Protocol (PPP), the Remote Authentication Dial-In User Service (RADIUS) protocol, and Unix inter-process communication (IPC) protocols.
FIG. 1 is a state diagram of a PPP network connection and is presented as an aid to understanding the use of a request-response phase for configuration of a network link. A dead state 110 is the state in which a network link starts and ends in PPP. PPP is a peer-to-peer network protocol in which an originating node attempts to establish a connection with a remote peer node. Detection of a carrier signal at the remote peer node causes the link to transition to the next state (i.e., the link is "up").
The next state is link establishment 120, wherein a first exchange of request-response configuration packets occurs. In PPP, link establishment packets are Link Control Protocol (LCP) packets. An originating node transmits a configuration request (Configure-Request) packet to the remote node and then waits for a responsive packet from the remote node before proceeding with the connection.
FIG. 2 illustrates a typical PPP packet used in the link establishment state. PPP packet 200 includes (i) a code 210 identifying the packet (e.g., as a Configure-Request), (ii) an identifier field 220 to aid in matching requests and replies, (iii) a length field 230 indicating the length of the Configure-Request packet, and (iv) a list of configuration options 240 that the originating node desires to negotiate for the connection.
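The packet layout described above can be sketched in code. The following is a minimal illustration, assuming the standard LCP encoding of a 1-byte code, a 1-byte identifier, a 2-byte length covering the entire packet, and a variable-length options field; the function names and the sample option bytes are illustrative.

```python
import struct

CONFIGURE_REQUEST = 1  # example code value identifying the packet type

def build_lcp_packet(code, identifier, options):
    """Build an LCP-style packet: code (1 byte), identifier (1 byte),
    length (2 bytes, network order, covering the whole packet), options."""
    length = 4 + len(options)
    return struct.pack("!BBH", code, identifier, length) + options

def parse_lcp_packet(data):
    """Split a packet back into its four fields (210, 220, 230, 240)."""
    code, identifier, length = struct.unpack("!BBH", data[:4])
    return code, identifier, length, data[4:length]

# Example: a Configure-Request with identifier 1 and a 4-byte option list.
pkt = build_lcp_packet(CONFIGURE_REQUEST, 1, b"\x01\x04\x05\xdc")
code, ident, length, opts = parse_lcp_packet(pkt)
```

The identifier field is what allows a node to match a later response to the request that prompted it, a point that becomes important in the discussion of stale identifiers below.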
Once the remote peer node receives a Configure-Request packet, the remote peer node can then send a response to the request. The response can take the form of an acknowledgement (Configure-Ack), a non-acknowledgement (Configure-Nak), or a rejection (Configure-Rej). Identification of the type of responsive packet is provided in code field 210. Since the requesting node must wait for a response before proceeding with the configuration of the network connection, a timer is started upon transmission of the request. The timeout period is configurable, dependent upon the implementation in which the protocol is being used. When the timeout period expires, the originating node sends a new Configure-Request packet and the timer restarts. In some network protocol implementations, there can also be an associated maximum number of such retries that a node will attempt before giving up on a connection. In certain network protocol implementations, in order for a link to be successfully established, an acknowledgement packet must be received with an appropriate identifier and a reiteration of the configuration options as sent by the originating node in the request packet.
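The timer-and-retry behavior described above can be sketched as follows. This is an illustrative model, not any particular implementation: the timeout is represented by the `wait_for_ack` callback returning false, the retry limit and callback names are assumptions, and a fresh identifier is used for each retransmission.

```python
import itertools

MAX_RETRIES = 10  # assumed retry limit before giving up on the connection

def negotiate(send_request, wait_for_ack):
    """Send Configure-Requests until one is acknowledged or retries
    are exhausted.

    send_request(identifier) transmits a request packet;
    wait_for_ack(identifier) returns True if a matching Configure-Ack
    arrives before the (implementation-dependent) timeout expires.
    """
    for identifier in itertools.count(start=1):
        if identifier > MAX_RETRIES:
            return False          # retry limit reached; give up
        send_request(identifier)  # transmission (re)starts the timer
        if wait_for_ack(identifier):
            return True           # acknowledged; link can be established

# Example: the peer's acknowledgement arrives only on the third attempt.
attempts = []
ok = negotiate(attempts.append, lambda i: i == 3)
```

In this sketch, each timeout simply produces a new request with a new identifier, which is precisely the behavior that gives rise to the stale-identifier problem discussed later.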
Once a link is established, PPP can go to an authentication state 130. This is an optional state in which authentication information can be exchanged between the nodes, including, for example, an exchange of passwords.
Once a link is established, and optionally authenticated, the link enters a network layer protocol state 140. This is another state in which request-response configuration packets are exchanged. In this state, the configuration packets are Network Control Protocol (NCP) packets. Requests and responses are exchanged between the nodes to establish the network layer protocol that will be encapsulated within exchanged PPP packets. The exchange of request and response packets in this state is similar to that in link establishment state 120, and the format of the packets is similar to that presented in FIG. 2. The final state occurs when it is desired to terminate the link between the nodes. The link then enters a link termination state 150. An exchange of Terminate-Request and Terminate-Ack packets is made between the nodes and the link is taken down.
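The life cycle traced through FIG. 1 can be summarized as a small transition table. The following is a simplified sketch in which the state and event names are illustrative labels chosen to match the figure, not identifiers from any implementation.

```python
# Simplified PPP link life cycle: dead (110) -> link establishment (120)
# -> authentication (130, optional) -> network layer protocol (140)
# -> link termination (150) -> dead (110).
TRANSITIONS = {
    ("dead", "up"):               "establish",    # carrier detected
    ("establish", "opened"):      "authenticate", # LCP negotiation done
    ("establish", "failed"):      "dead",
    ("authenticate", "success"):  "network",      # optional phase passed
    ("authenticate", "failed"):   "terminate",
    ("network", "down"):          "terminate",    # NCP phase ends
    ("terminate", "closed"):      "dead",         # link taken down
}

def next_state(state, event):
    """Follow one transition; unknown events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

# Walk a full life cycle: the link starts and ends in the dead state.
state = "dead"
for event in ("up", "opened", "success", "down", "closed"):
    state = next_state(state, event)
```

Because the link both starts and ends in the dead state, the walk above returns to `"dead"` after the terminate exchange.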
Network communication nodes such as routers and network concentrators can handle communications between segments of a large network. Such a network communication node can be responsible for establishing and providing tens of thousands of network connections.
FIG. 3 is a block diagram illustrating such a network communication node. In this depiction, network communication node 300 includes a number of line cards (line cards 302(1)-(N)) that are communicatively coupled to a forwarding engine 310 and a processor 320 with a processor input queue 325 via a data bus 330 and a result bus 340. Line cards 302(1)-(N) include a number of port processors 350(1,1)-(N,N) that are controlled by port processor controllers 360(1)-(N). It will also be noted that forwarding engine 310 and processor 320 are not only coupled to one another via data bus 330 and result bus 340, but are also communicatively coupled to one another by a communications link 370.
When a packet is received, the packet is identified and analyzed by a network communication node such as network communication node 300 in the following manner, according to embodiments of the present invention. Upon receipt at a port, a packet (or some or all of its control information) is sent from one of the port processors 350(1,1)-(N,N) corresponding to the port at which the packet was received to one or more of those devices coupled to the data bus 330 (e.g., others of port processors 350(1,1)-(N,N), forwarding engine 310, or processor 320 via processor input queue 325). Packet processing according to the present invention can be performed, for example, by a process running on processor 320 on packets stored in processor input queue 325.
In addition, or alternatively, once a packet has been identified for processing according to the present invention, forwarding engine 310, processor 320, or the like, can be used to process the packet in some manner or add packet security information, in order to secure the packet. On a node sourcing such a packet, this processing can include, for example, encryption of some or all of the packet's information, the addition of a digital signature or some other information or processing capable of securing the packet. On a node receiving such a processed packet, the corresponding process is performed to recover or validate the packet's information that has been thusly protected.
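One way the sourcing-node and receiving-node steps described above could look is sketched below, using a keyed digest as the "packet security information." This is purely illustrative: the choice of HMAC-SHA-256, the pre-shared key, and all names are assumptions, not the mechanism of any particular embodiment.

```python
import hmac
import hashlib

KEY = b"shared-secret"  # assumed pre-shared key between the two nodes
TAG_LEN = 32            # HMAC-SHA-256 digest length in bytes

def secure_packet(payload):
    """Sourcing node: append a keyed digest to secure the payload."""
    tag = hmac.new(KEY, payload, hashlib.sha256).digest()
    return payload + tag

def validate_packet(packet):
    """Receiving node: recover the payload if the digest checks out,
    otherwise return None."""
    payload, tag = packet[:-TAG_LEN], packet[-TAG_LEN:]
    expected = hmac.new(KEY, payload, hashlib.sha256).digest()
    return payload if hmac.compare_digest(tag, expected) else None

secured = secure_packet(b"configuration data")
recovered = validate_packet(secured)
tampered = validate_packet(b"X" + secured[1:])
```

Encryption of some or all of the packet, as also mentioned above, would follow the same pattern with the digest step replaced or supplemented by a cipher.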
In a network using protocols that incorporate request-response packets, such as PPP, there can be many thousands of requests pending the exchange of responsive packets. Network communication nodes typically have one process designated to process all incoming packets (e.g., processor 320), which include packets responsive to pending requests. As incoming packets arrive at the network communication node, the incoming packets are queued for packet processing by the designated process (e.g., in processor input queue 325).
Due to a potentially large number of pending requests in a large-scale network, and therefore a correspondingly large number of responsive packets enqueued for packet processing, a response timer for a request can time out before an enqueued response packet can rise to the top of the processor input queue. When this happens, a subsequent request packet can be transmitted with a new identifier to the remote peer node. Thus, when the enqueued responsive packet is ultimately processed by the packet processor, the responsive packet will have a stale identifier and will be rejected. The remote peer node will now respond to the second request packet. But since the second response packet will be placed at the end of the input queue, the packet processor may not process the second response packet until after the response timer has timed out for this second request, and therefore a third request will be sent out. In such a scenario, it can be seen that under certain high-load conditions, an input queue may never be drained and will instead continue to grow. It is therefore desirable to provide a means to avoid retransmission of a currently pending request if a packet responsive to the request is already present in the input queue.