FIGS. 1A and 1B are a block diagram illustration of a conventional network processing chip 10 in which the present invention may be applied. The network processing chip 10 operates in accordance with the well-known ATM (Asynchronous Transfer Mode) protocol. Components making up the network processing chip 10 will now be briefly described.
The network processing chip 10 includes an internal memory bus 12 and a PCI interface bus 14.
An interrupt status block 16 is provided to indicate interrupts to a host processor (not shown) which is external to the network processing chip 10. A control memory controller 18 controls an external RAM (not shown) that contains control structures utilized by the network processing chip 10. A virtual memory (VIMEM) block 20 maps virtual memory to physical memory. A packet memory controller 22 controls an external memory (not shown) which stores data received and to be transmitted by the network processing chip 10.
A physical bus interface (NPBUS) block 24 provides an interface to an EPROM (not shown) that may be used to initialize the network processing chip 10. A PCI bus interface (PCINT) block 26 provides an interface between the internal PCI interface bus 14 and an external PCI bus (not shown) which may connect the network processing chip 10 to an external host device. An internal memory arbiter (ARBIT) block 28 arbitrates access to and from the internal memory bus 12. A test/clock block 30 (CRSET, CBIST, SCLCK, CITAG) represents internal test circuitry for the network processing chip 10.
A processor core (PCORE) block 32 represents a processor built into the network processing chip 10. A memory pool control (POOLS) block 34 controls allocation of memory space and prevents any group of components of the network processing chip 10 from overutilizing memory.
A block 36 (DMA queues) represents direct memory access queues (DMAQS) used for transferring data between the internal memory bus 12 and the PCI interface bus 14.
A cell scheduler (CSKED) block 38 determines an order in which cells are transmitted by the network processing chip 10. (The present invention is concerned with a modification of the cell scheduler block 38. A further description of certain conventional functions of the cell scheduler block 38 is provided below.)
A general purpose DMA (GPDMA) block 40 transfers data between the internal memory bus 12 and the PCI interface bus 14. A cell segmentation (SEGBUG) block 42 formats data for transmission; that is, it builds ATM cells. A cell reassembly (REASM) block 44 receives ATM cells and packs (reassembles) frames from the received ATM cells. An asynchronous cell interface (LINKC) block 46 provides an interface to physical ports 48 (e.g., a physical interface having ports 0–3).
A bus cache (BCACHE) 50 caches data carried on the internal memory bus 12 to speed access to the data. A receive queue management (RXQUE) block 52 queues incoming frames for storage in memory. A checksum (CHKSM) block 54 performs checksum calculations with respect to incoming data frames. A SONET (synchronous optical network) framer (FRAMR) 56 provides an interface to an optical serial link (not shown).

Some details of the operation of the cell scheduler block 38 will now be described with reference to FIG. 2.
Data structures utilized in scheduling cells for transmission include a schedule data structure (also referred to as a “time wheel RAM”) 60, a logical path descriptor (LPD) 62 and logical channel descriptors (LCD's) 64-1 through 64-3. Each LCD corresponds to a respective virtual channel. As is understood by those who are skilled in the art, a “virtual channel” is an arrangement for transmitting data cells from a source to a destination. The LPD 62 corresponds to a “virtual path”. A virtual path is a group of virtual channels that together share an assigned amount of bandwidth. The virtual channels active in the virtual path represented by the LPD 62 are represented by a linked list 66 of LCD's 64-1 through 64-3. (Although three LCD's are shown in the linked list 66, corresponding to three virtual channels active in the virtual path represented by LPD 62, the number of active virtual channels, and, correspondingly, the number of LCD's in the linked list 66, may be more or fewer than three.) The schedule data structure 60 includes a plurality of entries (of which, for clarity, only one entry 68 is shown in the drawing). Each entry corresponds to a time at which a data cell is to be transmitted by the network processor 10. The entries are read in order as indicated by arrow 70. At a time when a data cell is to be transmitted, the entry of the schedule data structure 60 corresponding to that time is read. The entry in question is a pointer to either an LPD (as in the case of entry 68) or to an LCD (corresponding to a virtual channel that is not part of a virtual path).
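For illustration only, the relationship among these data structures may be sketched as follows. The class names, field names, and use of a Python list for the time wheel are illustrative assumptions, not the actual control-memory layout of the chip:

```python
from dataclasses import dataclass
from typing import List, Optional, Union

@dataclass
class LCD:
    """Logical channel descriptor: one per virtual channel."""
    channel_id: int
    next: Optional["LCD"] = None  # link to the next active LCD in the path

@dataclass
class LPD:
    """Logical path descriptor: one per virtual path."""
    scheduling_info: dict         # e.g., contracted-for QoS parameters
    head: Optional[LCD] = None    # head pointer to the first LCD in the list
    tail: Optional[LCD] = None    # tail pointer to the last LCD in the list

# The schedule data structure ("time wheel RAM"): each slot holds a pointer
# to an LPD (a virtual path) or to an LCD (a stand-alone virtual channel).
TimeWheel = List[Optional[Union[LPD, LCD]]]
```

For example, a path with two active channels would be built by linking two LCD objects and setting the LPD's head and tail pointers to the first and last of them.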
Let it be assumed that the time has come to read entry 68 in the schedule data structure 60. The pointer in the entry 68 points to LPD 62. The LPD 62 is then accessed from memory. The LPD 62 contains scheduling information 72 for the associated virtual path. Also contained in the LPD 62 are a head pointer 74 which points to the first LCD (LCD 64-1) of the linked list 66 and a tail pointer 76 which points to the last LCD (LCD 64-3) of the linked list 66. After the LPD 62 is accessed, the head pointer 74 is read, and the indicated LCD (LCD 64-1) at the head of the linked list 66 is accessed. The LCD 64-1 contains all the information required to transmit a data cell on the associated virtual channel. A data cell for the virtual channel associated with the LCD 64-1 is accordingly transmitted in the transmit cycle corresponding to the entry 68 of the schedule data structure 60.
The LCD 64-1 is then moved to the end of the linked list 66, by updating the head pointer 74 of the LPD 62 to point to the next LCD 64-2. The tail pointer 76 of the LPD 62 is also updated to point to the LCD 64-1. In addition, the erstwhile tail LCD 64-3 is updated to point to the new tail LCD 64-1. The pointer for the LPD 62 is then written into a new slot on the schedule data structure 60 in accordance with the scheduling information 72 for the associated virtual path. That is, the rescheduling of the next cell to be transmitted for the virtual path is determined based on the contracted-for Quality of Service (QoS) for the virtual path.
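For illustration only, the move-to-tail rotation and rescheduling just described may be sketched as follows. The simplified LCD and LPD objects are defined inline for self-containment, and the modulo arithmetic used to pick the new time-wheel slot is an illustrative assumption standing in for the QoS-based scheduling information 72:

```python
class LCD:
    def __init__(self, channel_id):
        self.channel_id = channel_id
        self.next = None          # link to the next active LCD in the path

class LPD:
    def __init__(self, interval):
        self.head = None          # head pointer (cf. head pointer 74)
        self.tail = None          # tail pointer (cf. tail pointer 76)
        self.interval = interval  # reschedule distance derived from QoS

def append_lcd(lpd, lcd):
    """Add a newly active virtual channel to the tail of the linked list."""
    if lpd.tail is None:
        lpd.head = lpd.tail = lcd
    else:
        lpd.tail.next = lcd
        lpd.tail = lcd

def service_path(lpd, wheel, now):
    """Transmit one cell for the path: pop the head LCD, rotate it to the
    tail, and write the LPD pointer into a later time-wheel slot."""
    lcd = lpd.head
    lpd.head = lcd.next           # head pointer now names the next LCD
    lcd.next = None
    if lpd.head is None:          # list held a single LCD; it stays put
        lpd.head = lpd.tail = lcd
    else:
        lpd.tail.next = lcd       # erstwhile tail points to the new tail
        lpd.tail = lcd
    wheel[(now + lpd.interval) % len(wheel)] = lpd  # reschedule per QoS
    return lcd.channel_id         # channel whose cell is transmitted
```

Servicing the path repeatedly thus cycles through the active channels in round-robin order while the LPD pointer migrates forward through the time wheel.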
The LPD 62 also includes a timestamp (not separately shown) which is indicative of the time at which the next cell for the virtual path is to be transmitted in accordance with the QoS guarantee for the virtual path. When a cell is delayed, such that the difference between the timestamp and the actual time becomes large enough, cells are transmitted for the virtual path at the peak cell rate until the difference is reduced. The difference is known as a QoS credit. Thus the timestamp also is written into the LPD upon transmission of a cell for the virtual path.
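For illustration only, the QoS credit behavior may be sketched as follows. The function name, parameter names, and the threshold comparison are illustrative assumptions; the actual hardware mechanism is not limited to this form:

```python
def next_transmit_time(timestamp, now, sustained_interval, peak_interval,
                       credit_threshold):
    """Decide when the path's next cell goes out.  If the path has fallen
    behind its timestamp by more than credit_threshold time units (the
    accumulated QoS credit), transmit at the peak cell rate until the
    difference is worked off; otherwise keep the normal QoS spacing."""
    credit = now - timestamp            # how far behind schedule the path is
    if credit > credit_threshold:
        return now + peak_interval      # burst at the peak cell rate
    return now + sustained_interval     # normal QoS-governed spacing
```

Each time a cell is transmitted for the path, the timestamp would be rewritten into the LPD so that the credit computed on the next service reflects the remaining backlog.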
By way of contrast, for transmission of a cell for a virtual channel that is not part of a virtual path, the entry corresponding to the virtual channel (i.e., a pointer to the corresponding LCD) is read from the schedule data structure 60 and the corresponding LCD is accessed from memory. A data cell for the associated virtual channel is transmitted based on the parameters in the LCD and the next cell to be transmitted for the associated virtual channel is scheduled by placing the pointer to the LCD in an appropriate slot in the schedule data structure 60 based on scheduling information contained in the LCD.
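For illustration only, the simpler stand-alone virtual channel case may be sketched as follows; note that a single descriptor fetch suffices. The table and field names are illustrative assumptions:

```python
def service_channel(lcd_table, wheel, slot, now):
    """Service a virtual channel that is not part of a virtual path:
    one control-structure access (the LCD itself), transmit based on its
    parameters, then reschedule the same pointer in a later slot."""
    lcd_id = wheel[slot]                 # the slot holds a pointer to an LCD
    lcd = lcd_table[lcd_id]              # single descriptor fetch from memory
    wheel[(now + lcd["interval"]) % len(wheel)] = lcd_id  # reschedule
    return lcd["channel"]                # channel whose cell is transmitted
```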
It is noted that transmission of a cell for a virtual channel included in a virtual path entails approximately twice the control bandwidth of transmission of a cell for a virtual channel that is not part of a virtual path. This is because transmitting a cell for a virtual channel that is part of a virtual path entails accessing two data structures, namely an LPD and an LCD, whereas, when the virtual channel is not part of a virtual path, only an LCD must be accessed. In practice, it has been found that the increased control bandwidth required for the virtual path feature may reduce the performance (throughput) of the network processor 10 when the virtual path feature is implemented.
It would be desirable to reduce the control overhead required for the virtual path feature so that the performance of the network processor 10 is improved during implementation of the virtual path feature.