A packet-switched network is a data network containing intelligent switching nodes and has the following general characteristics: (1) prior to transmission, each data message is segmented into short blocks of specified maximum length, and each block is provided with a header containing addressing and sequencing information (each packet becomes the information field of a transmission at the link protocol level, which usually contains error control capabilities); (2) because of their small size, packets can be passed very quickly from switching node to switching node; and (3) the switching nodes do not archive the data; rather, messages are generally "forgotten" by the sending node as soon as the next node checks for errors (if required) and acknowledges receipt.
Communication circuits which may be shared in such packet networks include transmission lines, program-controlled processors, ports or links, and data or packet buffers. In large multinode networks, each node or packet switch accommodates many paths or links and terminates such paths, which may extend to user terminal equipment or to other nodes of the network. A node may include one or more processors for controlling the routing and processing of packets through the node. The node is customarily equipped with a large number of buffers for storing packets in anticipation of such routing or awaiting availability of an output link. Each link between nodes or extending to end users typically serves a plurality of concurrent connections or sessions between a plurality of calling parties or machine terminals.
Depending on the situation, packet switching can offer several advantages over other data communication techniques, including: (1) for data applications in which the amount of traffic between terminals cannot justify a dedicated circuit, packet switching may be more economical than transmission over private lines; (2) for applications in which data communication sessions are shorter than a minimal chargeable time unit for a telephone call, packet switching may be more economical than dialed data; (3) because destination information is inherently part of each packet, a large number of messages may be sent to many different destinations as fast as a source data terminal can issue them (depending on the type of packet service being used, there may not be any connection time delay between transmitting packets containing actual data); and (4) because of intelligence built into the network, dynamic routing of data is possible. Each packet travels over the route established by the network as the best available path for the packet at the time of the connection. This characteristic can be used to maximize efficiency and minimize congestion.
One problem in large packet communication or packet switching systems arises when many users attempt to utilize the network at the same time. This results in the formation of many paths or circuits for routing the data and, as a result, the communication facilities become congested and/or unavailable to a user or to the user's packet while it is being forwarded through the network. It has been found that congestion tends to spread through a network if left uncontrolled. Consequently, a number of flow control procedures, such as end-to-end windowing and link-by-link watermark flow controls, have been developed and commercially exploited.
A principal area of packet congestion is in buffers (or queues) of each switch node, particularly where the buffers become unavailable to store incoming packets. One solution to a buffer congestion problem is to halt all incoming traffic on all incoming lines to the affected node when the packet buffers become filled, or congested, and no buffer is available for storing additional incoming packets.
The simple end-to-end windowing scheme for flow control has advantageous properties when viewed strictly from the network periphery. Each machine can have many sessions simultaneously established between itself and various other machines. For each of these sessions (referred to as logical channels), a given machine is allowed to have `p` unacknowledged packets outstanding in the network, where `p` is some fixed integer chosen large enough to allow uninterrupted transmission when the network is lightly loaded. The greater the end-to-end network delay, the larger `p` must be. For example, a machine can initially transmit `p` packets into the network as fast as it desires; but it then can transmit no more packets (on that particular logical channel) until it has received an acknowledgement from the destination machine for at least one of those outstanding packets.
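The per-logical-channel window mechanism described above can be sketched as follows. This is a minimal illustration, not any particular protocol implementation; the class and method names are hypothetical, and a real protocol would choose `p` based on the end-to-end delay:

```python
from collections import deque

class WindowedSender:
    """Sketch of end-to-end window flow control on one logical channel."""

    def __init__(self, p):
        self.p = p                  # max unacknowledged packets allowed
        self.outstanding = deque()  # sequence numbers awaiting acknowledgement
        self.next_seq = 0

    def can_send(self):
        # Transmission is permitted only while fewer than p packets
        # remain unacknowledged on this logical channel.
        return len(self.outstanding) < self.p

    def send(self):
        if not self.can_send():
            raise RuntimeError("window closed: wait for an acknowledgement")
        seq = self.next_seq
        self.outstanding.append(seq)
        self.next_seq += 1
        return seq

    def acknowledge(self, seq):
        # Each acknowledgement releases one outstanding packet,
        # reopening the window by one slot.
        self.outstanding.remove(seq)

sender = WindowedSender(p=3)
for _ in range(3):
    sender.send()           # the first p packets go out immediately
print(sender.can_send())    # False: window is closed
sender.acknowledge(0)
print(sender.can_send())    # True: one acknowledgement reopens the window
```

Note how the source is self-throttling: no explicit "stop" message is needed, because the closed window itself halts transmission until an acknowledgement arrives.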
This scheme has several desirable properties. There is very little wasted bandwidth caused by the flow-controlling mechanism, because the number of bits in an acknowledgement can be made very small compared to the number of bits in the `p` packets to which it refers. There is also an automatic throttling that occurs under heavy load that divides network capacity fairly among all traffic sources. Finally, it provides automatic speed conversion between machines of different data rate because, for example, a destination can regulate the rate at which it acknowledges packets so that it will not be overwhelmed by too much data from an over-eager source.
A disadvantage of a pure windowing scheme is that it may frequently require an unacceptably large amount of buffer storage within a particular packet switch. To ensure no loss of data, it is necessary to provide, at each buffer, or queue, in the network `c×p` packets of storage either (1) for every source whose packets might be transmitted to that queue, or (2) for every destination whose packets might be fed by that queue, where `c` is the maximum number of sessions that a source or destination is allowed to have simultaneously in progress. Since some buffers, or queues, may be positioned in such a way that they are fed by a large number of sources, or that they feed a large number of destinations, the amount of queuing required can be impractically large, especially if the packets contain more than just a few bytes.
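The worst-case storage requirement follows directly by multiplication. Assuming, purely for illustration, 100 sources feeding one queue, `c` = 8 sessions per source, and a window of `p` = 7:

```python
# Worst-case buffer requirement at one queue under pure windowing:
# every one of the sources feeding the queue may have c sessions in
# progress, each with p unacknowledged packets in flight at once.
def worst_case_queue_packets(sources, c, p):
    return sources * c * p

# Hypothetical figures, chosen only to show the scale of the problem.
print(worst_case_queue_packets(100, 8, 7))  # 5600 packets of storage
```

Even modest per-session windows thus multiply into thousands of packets of storage at a well-connected queue, which is the impracticality the text describes.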
Flow control utilizing a link-by-link watermark principle enables each node to keep track of its own queue length and to send a "stop-sending" message upstream whenever the queue length exceeds some preestablished upper threshold. As soon as the queue length drops below a preestablished lower threshold, a "resume-sending" message is sent back upstream. The advantage of this scheme is that it is insensitive to the number and type of sources, and it results in the smallest possible queue requirements (because the delay between the sending of a "stop-sending" message and the actual cessation of transmission is minimal). However, each node must know how many links feed each of its queues, and must be able to generate and send the "stop-sending" and "resume-sending" messages out on the appropriate links. Deadlocking is also a potential problem.
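The watermark hysteresis can be sketched as follows; the threshold values are illustrative, as a real switch would derive them from its buffer size and the link round-trip time:

```python
class WatermarkQueue:
    """Sketch of link-by-link watermark flow control with hysteresis."""

    def __init__(self, high=8, low=3):
        self.high = high      # upper threshold: trigger "stop-sending"
        self.low = low        # lower threshold: trigger "resume-sending"
        self.queue = []
        self.stopped = False  # True after "stop-sending" has gone upstream

    def enqueue(self, packet):
        self.queue.append(packet)
        if not self.stopped and len(self.queue) > self.high:
            self.stopped = True
            return "stop-sending"    # message sent back upstream
        return None

    def dequeue(self):
        packet = self.queue.pop(0)
        if self.stopped and len(self.queue) < self.low:
            self.stopped = False
            return packet, "resume-sending"
        return packet, None

q = WatermarkQueue(high=4, low=2)
msgs = [q.enqueue(i) for i in range(6)]
print(msgs)  # "stop-sending" appears once, when the upper threshold is crossed
```

The gap between the two thresholds prevents the node from oscillating between "stop" and "resume" messages as the queue length hovers around a single threshold.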
Illustratively, suppose that the next packet in the queue of a given node A is destined for a downstream node B, and suppose that node B has sent node A a "stop-sending" message. Node A typically has links to many other nodes besides node B, and there may well be many packets in node A's queue destined for those other nodes. If node A's queue is implemented with a simple hardware FIFO, the blocked packet at the front of the queue will also block all subsequent packets in the queue, even though their respective outgoing links are available. In the extreme case where node B dies, node A can be indefinitely tied up; and the blockage can ripple upstream with the result that the failure of a single node can incapacitate a large portion of the network.
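The head-of-line blockage can be demonstrated with a small sketch; the link names, packet contents, and the per-output-link alternative shown for contrast are all illustrative:

```python
from collections import deque

# Head-of-line blocking with a single hardware-style FIFO: the packet
# for stopped link B also blocks packets for available links C and D.
fifo = deque([("B", "pkt1"), ("C", "pkt2"), ("D", "pkt3")])
stopped_links = {"B"}   # node B has sent a "stop-sending" message

forwarded = []
while fifo and fifo[0][0] not in stopped_links:
    forwarded.append(fifo.popleft())
print(forwarded)        # []: nothing is forwarded while B blocks the head

# With one queue per output link, the blockage is confined to link B,
# and the queues for C and D can drain normally.
per_link = {"B": deque(["pkt1"]), "C": deque(["pkt2"]), "D": deque(["pkt3"])}
drainable = [link for link, q in per_link.items()
             if link not in stopped_links and q]
print(sorted(drainable))  # ['C', 'D']
```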
An isochronous source, such as an audio or video source, generates data packets at a fixed or nearly fixed rate. An isochronous receiver usually expects to receive each data packet within certain timing constraints, i.e., it has to arrive at the destination within a timing window. Otherwise, a buffer overflow or underflow condition will occur at the playback buffer, resulting in the loss of audio or video signals. Therefore, an isochronous connection usually has to maintain certain timing characteristics to ensure the appropriate delivery of each data packet.
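The arrival-window check at an isochronous receiver can be sketched as follows; the packet period and the early/late slack values are hypothetical figures, not drawn from any particular standard:

```python
# Each packet must arrive within a window around its nominal playback
# deadline: too late and the playback buffer underflows (signal loss),
# too early and the buffer risks overflowing.
NOMINAL_PERIOD_MS = 20   # e.g. one audio packet every 20 ms
EARLY_SLACK_MS = 40      # buffer can absorb packets up to 40 ms early
LATE_SLACK_MS = 10       # beyond 10 ms late, the playback slot is missed

def classify_arrival(seq, arrival_ms, stream_start_ms=0):
    deadline = stream_start_ms + seq * NOMINAL_PERIOD_MS
    if arrival_ms < deadline - EARLY_SLACK_MS:
        return "overflow-risk"   # too early: playback buffer fills up
    if arrival_ms > deadline + LATE_SLACK_MS:
        return "underflow"       # too late: audible/visible signal loss
    return "on-time"

print(classify_arrival(5, 105))  # deadline 100 ms, 5 ms late -> on-time
print(classify_arrival(5, 130))  # 30 ms late -> underflow
print(classify_arrival(5, 40))   # 60 ms early -> overflow-risk
```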
Thus, a novel approach to avoiding buffer congestion is desired, particularly one that allows an enhanced number of isochronous connections to be established through a switching node having a fixed amount of buffer capacity.