1. Field of the Invention
The present invention relates to a network device in a packet-switched network and, more particularly, to a method of dynamically sharing a memory location across all of the ports associated with the network device without the total bandwidth requirement exceeding the capacity of the system clock.
2. Description of the Related Art
A packet-switched network may include one or more network devices, such as an Ethernet switching chip, each of which includes several modules that are used to process information transmitted through the device. Specifically, the device includes an ingress module, a Memory Management Unit (MMU) and an egress module. The ingress module includes switching functionality for determining to which destination port a packet should be directed. The MMU is used for storing packet information and performing resource checks. The egress module is used for performing packet modification and for transmitting the packet to at least one appropriate destination port. One of the ports on the device may be a CPU port that enables the device to send and receive information to and from external switching/routing control entities or CPUs.
As packets enter the device from multiple ports, they are forwarded to the ingress module, where switching and other processing are performed on the packets. Thereafter, the packets are transmitted to one or more destination ports through the MMU and the egress module. The MMU enables sharing of the packet buffer among different ports while providing resource guarantees for every ingress port, egress port and class of service queue. According to a current switching system architecture, eight class of service queues are associated with each port. To ensure bandwidth guarantees across the ports and queues, the device allocates a fixed portion of the port's memory to each queue. As such, a queue that is associated with a high priority class of service may be assigned a greater fixed portion than a queue that is associated with a lower priority class of service. This implementation is inflexible and does not account for dynamic requirements that may be associated with one or more queues.
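The inflexibility of a strictly fixed allocation can be sketched as follows. This is a minimal illustrative model, not the device's actual implementation; the function names and the 100-byte/weight figures are hypothetical, chosen only to show that a packet is dropped when its own queue's fixed portion is full, even while other queues have unused memory.

```python
# Hypothetical sketch of fixed per-queue allocation: each class of
# service (CoS) queue receives a static slice of the port's memory,
# and unused space in one queue cannot be borrowed by another.

def make_fixed_allocation(port_memory, weights):
    """Split port_memory among queues in proportion to priority weights."""
    total = sum(weights)
    return [port_memory * w // total for w in weights]

# Eight CoS queues; higher-priority queues get larger fixed portions.
alloc = make_fixed_allocation(100, [3, 3, 2, 2, 1, 1, 1, 1])
used = [0] * 8

def enqueue(queue, size):
    """Admit a packet only if its queue's fixed portion has room.

    No borrowing across queues: a full queue drops packets even
    when the rest of the port's memory sits idle.
    """
    if used[queue] + size > alloc[queue]:
        return False  # drop
    used[queue] += size
    return True
```

Here, once a low-priority queue exhausts its small slice, every further packet to that queue is dropped regardless of how empty the high-priority slices are, which is the rigidity the text describes.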
A more flexible approach defines a guaranteed fixed allocation of memory for each class of service queue by specifying how many buffer entries should be reserved for an associated queue. For example, if 100 bytes of memory are assigned to a port, the first four class of service queues initially may be assigned the value of 10 bytes each and the last four queues initially may be assigned the value of 5 bytes each. Even if a queue does not use up all of its initially reserved entries, the unused buffers may not be assigned to another queue. Nevertheless, the remaining unassigned 40 bytes of memory for the port may be shared among all of the class of service queues associated with the port. Limits on how much of the shared pool of memory may be consumed by a particular class of service queue are set by a limit threshold. As such, the limit threshold may be used to define the maximum number of buffers that can be used by one queue and to prevent one queue from using all of the available memory buffers. To ensure that the sum of the initially assigned memory values does not exceed the total memory available for the port, and to ensure that each class of service queue has access to its initially assigned quota of memory, the available pool of memory for each port is tracked using a port dynamic count register, wherein the dynamic count register keeps track of the amount of shared memory available for the port. The initial value of the dynamic count register is the total amount of memory associated with the port minus the sum of the initially assigned memory buffers. The dynamic count register is decremented when a class of service queue consumes available memory after the class of service queue has exceeded its initially assigned quota. Conversely, the dynamic count register is incremented when a class of service queue releases memory after the class of service queue has exceeded its initially assigned quota.
In a current device, a total of 56K entries of memory is shared among all ports and all class of service queues. In a worst-case scenario, all ports may multicast 64-byte packets to all other ports, including the sending port. Therefore, for each 1G port, the maximum ingress data packet rate is 1.4881 mega packets per second (Mpps), since 1 Gbps/((64 bytes+12 bytes+8 bytes)*8 bits/byte) is equal to 1.4881M ≈ 1.5M, wherein 12 bytes are used for an Inter-Packet Gap and 8 bytes are used for a preamble. As such, each port will receive 36.75 Mpps ≈ 36.8 Mpps. In a device with 14 ports, the aggregate bandwidth requirement is 36.75 × 14, or 514.4 MHz. This bandwidth requirement is more than three times faster than a typical system clock of 156 MHz. As such, the device will be unable to support such a high bandwidth demand.
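The arithmetic above can be checked directly: the line rate divided by the bits in one minimum-sized frame slot (64-byte packet plus 12-byte inter-packet gap plus 8-byte preamble) gives the per-port ingress rate. This is only a worked verification of the text's figures; small rounding differences account for the quoted 514.4 MHz.

```python
# Worked check of the worst-case bandwidth figures from the text.

LINE_RATE_BPS = 1_000_000_000      # 1 Gbps
SLOT_BYTES = 64 + 12 + 8           # packet + inter-packet gap + preamble = 84 bytes

# Maximum ingress packet rate per 1G port, in Mpps.
mpps_per_port = LINE_RATE_BPS / (SLOT_BYTES * 8) / 1e6
print(round(mpps_per_port, 4))     # ≈ 1.4881 Mpps, i.e. roughly 1.5M

# Aggregate demand for the 14-port worst case, using the text's
# per-port receive figure of 36.75 Mpps.
aggregate_mhz = 36.75 * 14
print(aggregate_mhz)               # ≈ 514.5; the text rounds this to 514.4 MHz
print(aggregate_mhz / 156)         # more than 3x a typical 156 MHz system clock
```

At roughly 3.3 times the 156 MHz system clock, one memory access per packet per clock cycle cannot keep up, which is the bottleneck motivating the invention.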