Data and storage communication networks are in widespread use. In many data and storage communication networks, data packet switching is employed to route data packets or frames from point to point between source and destination, and network processors are employed to handle transmission of data into and out of data switches. An example of a network processor is disclosed in commonly-assigned patent application Ser. No. 10/102,343, filed Mar. 20, 2002. This commonly-assigned patent application is incorporated herein by reference in its entirety.
A network processor typically includes a scheduler circuit which determines an order in which frames are transmitted by the network processor.
FIG. 1 is a block diagram that shows a conventional scheduler circuit for a network processor, together with external memories utilized by the scheduler circuit. In FIG. 1, reference numeral 10 generally indicates the scheduler circuit. The scheduler circuit 10 includes an interface (I/F) block 12, a first-in-first-out (FIFO) buffer 14, a queue manager block 16, a calendars block 18, a “winner” block 20 and a memory manager block 22. Coupled to the memory manager block 22 are external memories 24, 26.
The interface block 12 handles the exchange of messages between the scheduler circuit 10 and a data flow circuit (not shown) to which the scheduler circuit 10 is coupled. As is familiar to those who are skilled in the art, the data flow circuit handles the actual data to be transmitted, whereas the scheduler circuit 10 works with frame pointers that indicate the location of the data in a data flow memory (not shown), and instructs the data flow circuit on which data to transmit.
The FIFO buffer 14 is provided to buffer incoming messages for the scheduler circuit 10, and is coupled to the interface block 12. The queue manager block 16 is coupled to the FIFO buffer 14 and takes appropriate action upon receipt of new frames to be transmitted. The calendars block 18 stores one or more schedules (also referred to as “time wheels”) which indicate an order in which flows are to be serviced. As is familiar to those who are skilled in the art, a “flow” is a logical connection between a source and a destination. Flows are sometimes referred to as virtual connections or virtual channels (VC's).
The winner block 20 is coupled to the calendars block 18 and selects flows to be serviced on the basis of information stored in the calendars block 18. The memory manager block 22 is coupled to the queue manager block 16 and the winner block 20, and handles storage and retrieval of data with respect to the external memories 24, 26. The external memories 24, 26 store data such as flow queues and flow queue control blocks (sometimes also referred to as flow control blocks or queue control blocks). Depending upon the number of flows to be handled by the scheduler circuit 10, the external memories 24, 26 may be dispensed with, and internal memory (not shown) associated with the memory manager block 22 may be used for storing flow queue control block information.
FIG. 2 is a schematic representation of a conventional flow queue. As is familiar to those who are skilled in the art, a flow queue includes a linked list (indicated at 28) which contains frame pointers indicative of data frames associated with a flow which have been received for transmission. The first frame pointer in the linked list 28 is referred to as the “head” of the queue and is indicated at 30. The last frame pointer in the linked list 28 is referred to as the “tail” of the queue as indicated at 32. Also associated with the flow queue is a header 34 that identifies the flow associated with the flow queue.
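The flow queue structure described above may be sketched in software as a singly linked list of frame pointers with head and tail references. The class and field names below are illustrative stand-ins, not taken from any actual implementation:

```python
class FrameNode:
    """One entry in a flow queue's linked list: a frame pointer plus a link."""
    def __init__(self, frame_ptr):
        self.frame_ptr = frame_ptr  # location of the frame in data flow memory
        self.next = None


class FlowQueue:
    """A flow queue: a header identifying the flow, plus head/tail of the list."""
    def __init__(self, flow_id):
        self.flow_id = flow_id  # the "header" identifying the flow
        self.head = None        # first frame pointer in the linked list
        self.tail = None        # last frame pointer in the linked list

    def enqueue(self, frame_ptr):
        """Add a newly arrived frame pointer at the tail of the queue."""
        node = FrameNode(frame_ptr)
        if self.tail is None:              # queue empty: node is head and tail
            self.head = self.tail = node
        else:
            self.tail.next = node
            self.tail = node

    def dequeue_head(self):
        """Remove and return the frame pointer at the head of the queue."""
        node = self.head
        if node is None:
            return None
        self.head = node.next
        if self.head is None:              # queue now empty
            self.tail = None
        return node.frame_ptr
```

Enqueuing into an empty queue makes the new node both head and tail, matching the behavior described later for newly attached flows.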
The flow queue control block, which is not shown, contains flow configuration information, such as the desired flow average rate or bandwidth, which may be based on a contracted Quality of Service (QoS) for the flow. The flow queue control block also contains flow run time information which is required by the scheduler circuit 10 to support the desired flow configuration.
FIG. 3 schematically represents a time wheel 35 of the type stored in the calendars block 18. The time wheel 35 is made up of a number of slots 36. Each slot corresponds to a present or future cycle in which a frame may be transmitted. A flow queue identifier (fqid) may be entered into a slot to indicate that the corresponding flow is to be serviced in the cycle represented by the slot. A “current time” (CT) pointer 38 is associated with the time wheel 35 and points to the slot of the time wheel 35 which represents the cycle that is currently to be serviced. A flow identified by a flow queue identifier in the slot pointed to by the CT pointer (in this particular example, slot number 2) is serviced by transmitting the head frame in the corresponding flow queue. The flow is then “reattached” to the time wheel 35 by entering the corresponding flow queue identifier in a later slot of the time wheel 35. The later slot is the one which corresponds to a “next service time” (NST). The NST is determined by adding to the current time (CT) a parameter known as a “sustained service distance” (SSD). The SSD is stored in the flow queue control block that corresponds to the flow in question, and reflects the QoS for the flow. In general, the higher the bandwidth or rate to which the flow is entitled, the shorter the SSD. In the particular example illustrated in FIG. 3, it is assumed that the SSD is “4”, so that the slot to which the flow is reattached is determined by adding the SSD (having a value of “4”) to the CT (having a value of “2”), indicating that slot 6 is the slot to which the flow is to be reattached.
Contention for a time wheel slot is handled by conventional practices, such as “chaining” or queuing of contending flows within a time wheel slot.
Referring again to FIG. 1, incoming messages for the scheduler 10 are indicated at 40, and outgoing messages from the scheduler 10 are indicated at 42. The incoming messages indicated at 44, namely “CabRead.request” and “CabWrite.request” and the outgoing messages indicated at 46, namely “CabRead.response” and “CabWrite.response”, are concerned with configuring flows. The “PortStatus.request” message 48 informs the calendars block 18 when servicing of flows must be suspended to implement a “back pressure” arrangement. The concept of back pressure is familiar to those who are skilled in the art and need not be further discussed.
The incoming message indicated at 50, namely “FlowEnqueue.request”, is a message indicating arrival of a new frame to be transmitted by the network processor, and to be scheduled by the scheduler circuit 10. The “FlowEnqueue.response” message indicated at 52 is an acknowledgment of the “FlowEnqueue.request” message 50 by the queue manager block 16.
The “PortEnqueue.request” message indicated at 54 is an instruction from the winner block 20 of the scheduler circuit 10 to the data flow circuit (not shown) to enqueue a particular data frame for transmission based on a flow queue identifier read from the CT (current time) slot of a time wheel in the calendars block 18.
In operation, FlowEnqueue.request messages 50 are received by the scheduler circuit 10 from time to time. Each FlowEnqueue.request message 50 points to a new frame that has arrived for a particular flow. In response to the FlowEnqueue.request message 50, the queue manager block 16 fetches the flow queue control block for the particular flow in question. The flow queue control block indicates the number of frames waiting in the flow queue. If the number is non-zero (i.e., the flow queue is not empty), then the newly arrived frame is simply added at the tail of the flow queue. If the flow queue is empty, a next service time (NST) parameter stored in the flow queue control block is compared with the current time (CT) for the time wheel in the calendars block 18 to determine whether the next service time for the flow in question has already occurred. If so, the newly arrived frame is immediately dispatched via a PortEnqueue.request message 54 issued by the winner block 20. If the next service time for the flow has not already occurred, the flow is attached to the time wheel at the indicated NST, and the frame is enqueued to the flow queue (thereby becoming both the head and the tail of the flow queue).
Servicing of flows by the scheduler 10, and in particular by the winner block 20, is as follows. The current time (CT) pointer advances to the next slot of the time wheel and a flow queue identifier is read from that slot. Then the flow queue control block for that flow is fetched. The winner block 20 then issues a PortEnqueue.request message 54 to cause the frame at the head of the flow queue for the flow in question to be enqueued for transmission by the data flow circuit (not shown). The winner block 20 also calculates a next service time (NST) as the sum of current time (CT) and the SSD parameter stored in the flow queue control block.
It is then determined whether the frame just enqueued for transmission was the last frame in the flow queue. If so, then the calculated value for NST is written to the flow queue control block. If not, the flow is reattached to the time wheel at the slot corresponding to the indicated NST.
It has been proposed to provide a “virtual path” feature in a network processor. A “virtual path” is a group of virtual channels that together share an assigned amount of bandwidth. According to a proposed manner of implementing a virtual path, a path control block is provided. The path control block points to a linked list of channel control blocks, each of which corresponds to an active virtual channel associated with the virtual path. A QoS parameter, such as an SSD, is stored in the path control block and reflects a bandwidth that is assigned to the virtual path and is to be shared by the active virtual channels associated with the virtual path.
No flow queue identifiers are attached to the time wheel for virtual channels associated with the virtual path. Instead, a path identifier which points to the path control block is attached to the time wheel. When the current time pointer points to the slot in which the path identifier is entered, the path control block is fetched. The path control block points to the first channel control block in a list of channel control blocks. The virtual channel which corresponds to the first control block in the list is serviced by enqueuing for transmission the first frame in the flow queue which corresponds to the virtual channel. The control block for that virtual channel is then placed at the end of the list, and the path control block is changed to point to the new head of the list of channel control blocks. The path identifier is reattached to the time wheel at a next service time (NST) that is calculated based on a QoS parameter (e.g., SSD) for the virtual path.
This proposed manner of implementing virtual paths has some disadvantages. For example, since the active virtual channels are serviced in a round robin fashion, all virtual channels are accorded an equal share of the path bandwidth, which precludes flexibility in assigning different bandwidths to the virtual channels associated with the virtual path.