The function of a switch in a packet-switched network is to collect all the incoming packets of data and to switch them to the correct output ports of the switch. Since it is possible that at any particular moment packets arriving at two or more input ports are directed to the same output port, it is necessary to provide a buffer memory within each switch to temporarily store packets while the output ports to which they are sent are being used to transmit other packets. There are many ways known in the prior art of providing such memories. The most basic of these is to provide a separate first-in first-out (FIFO) memory at each output port. In this design, any packets directed to the output port are first placed in the FIFO queue, where they are held until all previously arriving packets have been output. Such a system is, however, very wasteful of memory space since it requires the provision of a fixed memory size at each output port. Thus the FIFO queue at one heavily used output port could easily become full and unable to store any more packets while queues at adjacent, less heavily used, output ports still have ample space available.
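The per-port arrangement described above can be sketched as follows. This is an illustrative model only, not taken from any cited design; the class and method names are hypothetical. It shows the drawback stated in the text: a packet is dropped when its own port's fixed-size queue is full, even though other queues have room.

```python
from collections import deque

class FixedFifoSwitch:
    """Illustrative model: a separate fixed-capacity FIFO at each output port."""

    def __init__(self, num_ports, capacity_per_port):
        self.queues = [deque() for _ in range(num_ports)]
        self.capacity = capacity_per_port

    def enqueue(self, port, packet):
        """Place a packet in the FIFO queue of its output port; drop it if full."""
        q = self.queues[port]
        if len(q) >= self.capacity:
            return False  # packet lost, even if other ports' queues have space
        q.append(packet)
        return True

    def dequeue(self, port):
        """Transmit (remove) the oldest packet held for this port, if any."""
        q = self.queues[port]
        return q.popleft() if q else None
```

Running the model with two ports of capacity two, a third packet directed to port 0 is lost while port 1 still accepts packets, which is the inefficiency the shared-memory schemes below set out to cure.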
One means of overcoming this inefficient use of memory space is to use a single shared random access memory (RAM). Instead of using fixed partitions to divide the memory into areas reserved for use by each output port, the partition boundaries are made flexible so that FIFO queues requiring extra memory space are allowed to make use of unused memory space throughout the RAM. While this improves the throughput of packets, as fewer packets are lost merely because the output queue at one heavily used output port is full, fairly sophisticated memory management techniques are required to reorganize fragmented memory after some period of operation.
In the IBM Technical Disclosure Bulletin, Vol. 32, No. 3B, August 1989, pp. 488-492, an article entitled "Algorithm for managing multiple First-in First-out queues from a single shared random access memory" describes an algorithm which avoids the "garbage-collection" operation of reorganizing the fragmented memory required by the technique described above. In this disclosure, a first RAM is used to store all the packets and a second RAM is used to store pointers which indicate the locations of packets in the output queues. The system works by having a register at every output port to indicate the address in the first RAM from which the next packet is to be read. This register is updated while the packet is being transmitted by reading, from the second RAM at the same address as the address at which the packet was stored in the first RAM, the address at which the next packet in the output queue is stored. While this memory arrangement offers advantages over previously known systems, the provision of the second memory requires extra hardware overhead. In addition, since the size of the first RAM will be fixed by design considerations, expansion of the system by adding new output ports and input ports will only be possible if one is prepared to accept the accompanying possible extra data loss, as the memory itself cannot be increased in size.
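The two-RAM linked-list arrangement described in the article can be sketched as follows. The article itself gives no code, so all names here are hypothetical and the free-list bookkeeping is one plausible choice; what the sketch preserves is the essential mechanism: `packet_ram[addr]` holds a packet, `next_ram[addr]` (the second RAM, at the same address) holds the address of the next packet in the same output queue, and a per-port head register holds the address from which the next packet is to be read.

```python
class LinkedListQueues:
    """Illustrative model of FIFO queues threaded through one shared packet RAM
    by a parallel pointer RAM, as in the cited TDB article."""

    def __init__(self, size):
        self.packet_ram = [None] * size
        self.next_ram = [None] * size        # second RAM: next-packet addresses
        self.free_list = list(range(size))   # addresses of unused cells
        self.head = {}                       # per-port register: next address to read
        self.tail = {}                       # per-port: address of last packet queued

    def enqueue(self, port, packet):
        if not self.free_list:
            return False                     # shared packet RAM exhausted
        addr = self.free_list.pop()
        self.packet_ram[addr] = packet
        self.next_ram[addr] = None
        if port in self.tail:
            # Link the previous tail cell to the new one via the pointer RAM.
            self.next_ram[self.tail[port]] = addr
        else:
            self.head[port] = addr           # queue was empty: load the port register
        self.tail[port] = addr
        return True

    def dequeue(self, port):
        addr = self.head.get(port)
        if addr is None:
            return None
        packet = self.packet_ram[addr]
        # While transmitting, update the port register from the second RAM,
        # read at the same address as the departing packet.
        nxt = self.next_ram[addr]
        if nxt is None:
            del self.head[port]
            del self.tail[port]
        else:
            self.head[port] = nxt
        self.packet_ram[addr] = None
        self.free_list.append(addr)
        return packet
```

Because each cell carries a pointer to its successor, freed cells can be reused immediately in any order, so no garbage-collection pass is needed; the cost is the second memory, which is the hardware overhead the paragraph identifies.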