A computer network is a collection of interconnected computing devices that can exchange data and share resources. Often, in highly populated areas, the computer network includes an optical fiber backbone to facilitate the transfer of large amounts of data between the computing devices. Optical fiber backbones are preferred because data can be exchanged over the optical fiber at higher speeds with reduced attenuation when compared to conventional wire or cable links. In some configurations, an optical fiber backbone may be laid in the shape of a ring because a ring offers generous geographical coverage and reasonable resiliency. When shaped in a ring, the optical fiber network is referred to as a “ring network.”
Certain network devices, referred to as “switches,” provide access to the ring network. Computing devices couple to the switches to gain access to the ring network and thereby interconnect with other computing devices coupled to the fiber ring network. One of the switches may provide access to a public network, such as the Internet, or another private network, and this device is typically referred to as a “hub.” Via the hub, the computing devices may utilize the ring network to access the public or adjacent network.
While providing high transfer speeds, generous geographical coverage, and reasonable resilience, fiber ring networks often fail to treat data fairly as the data traverses the ring, especially when the ring network comprises a packet-based ring that conveys information via packets instead of conventional multiplexed signals. In a packet-based ring network, each switch typically includes a single queue to store packets destined to traverse the ring. At a given switch, the queue therefore stores both packets originating from computing devices directly coupled to the switch and transit packets already traversing the ring, without taking into consideration the position of each switch around the ring. Transit packets destined for a given hub but injected into the ring by switches distant from the hub are successively queued by each intermediary switch on the way to the destination hub, which may result in significant delays. As a result, switches closer to the hub receive preferential access to the hub because packets injected by these switches experience fewer delays, as the packets traverse fewer intermediary switches on the way to the hub. The successive delays may result in violations of the quality of service agreed upon with end users of the ring.
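The positional bias described above can be illustrated with a minimal discrete-time simulation of the single-queue arrangement. The sketch below is hypothetical (switch count, injection rate, and service rate are assumed for illustration, not drawn from any standard): each switch funnels its locally injected traffic and all transit traffic through one shared FIFO, and packets injected farther from the hub therefore accumulate queuing delay at every intermediary switch.

```python
from collections import deque

# Illustrative single-queue ring: switch 0 is the hub; switches 1..7
# each inject one hub-bound packet per tick and forward one packet
# per tick toward the hub.  All values here are assumptions.
NUM_SWITCHES = 8
TICKS = 200

queues = [deque() for _ in range(NUM_SWITCHES)]    # one shared FIFO per switch
delays = {s: [] for s in range(1, NUM_SWITCHES)}   # delivery delay, keyed by origin

for tick in range(TICKS):
    # Every non-hub switch injects one packet destined for the hub.
    for s in range(1, NUM_SWITCHES):
        queues[s].append({"origin": s, "born": tick})
    # Each switch forwards at most one packet per tick toward the hub;
    # local and transit packets share the same FIFO, so neither is favored
    # based on how far it has already traveled.
    for s in range(NUM_SWITCHES - 1, 0, -1):
        if queues[s]:
            pkt = queues[s].popleft()
            if s - 1 == 0:
                delays[pkt["origin"]].append(tick - pkt["born"])
            else:
                queues[s - 1].append(pkt)

# Switches near the hub deliver far more packets, with far lower delay,
# than distant switches.
for s in (1, NUM_SWITCHES - 1):
    d = delays[s]
    print(f"switch {s}: {len(d)} delivered, avg delay {sum(d)/len(d):.1f} ticks")
```

Running the sketch shows the nearest switch delivering roughly half of all hub-bound traffic while the farthest switch's packets arrive rarely and late, mirroring the preferential access described above.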
Certain techniques, such as those employed by resilient packet ring (RPR) networks, have been proposed in an attempt to correct this preferential treatment. In RPR, each switch includes a first queue to store packets entering the ring network at that switch and a second queue to store transit packets already present on, or traversing, the ring. The additional queue enables switches to execute more sophisticated packet-scheduling algorithms that attempt to offset the preferential treatment given to packets entering the ring. Packets already traversing the ring, however, still receive unfair treatment because, again, the switches do not take into consideration the position of each switch around the ring. As a result, delays occur even when these proposed techniques are implemented and, as above, often lead to violations of quality-of-service agreements with end users.
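The residual unfairness can be sketched by extending the simulation with the dual-queue arrangement described above: a transit queue for packets already on the ring and a separate ingress queue for packets entering at that switch. The simple alternating scheduler below is an assumption for illustration only (it is not the RPR fairness algorithm); it shows that adding a second queue, without accounting for each switch's position on the ring, still leaves distant traffic at a disadvantage.

```python
from collections import deque

# Illustrative dual-queue ring: switch 0 is the hub.  Each switch keeps
# a transit queue and an ingress queue and alternates between them when
# both hold packets.  All parameters and the scheduling policy are
# assumptions, not the RPR specification.
NUM_SWITCHES = 8
TICKS = 200

transit = [deque() for _ in range(NUM_SWITCHES)]   # packets already on the ring
ingress = [deque() for _ in range(NUM_SWITCHES)]   # packets entering at this switch
serve_transit = [True] * NUM_SWITCHES              # per-switch round-robin pointer
delays = {s: [] for s in range(1, NUM_SWITCHES)}

for tick in range(TICKS):
    for s in range(1, NUM_SWITCHES):
        ingress[s].append({"origin": s, "born": tick})
    for s in range(NUM_SWITCHES - 1, 0, -1):
        # Alternate between the two queues when both hold packets;
        # neither rule considers how far a transit packet has traveled.
        if serve_transit[s] and transit[s]:
            pkt = transit[s].popleft()
        elif ingress[s]:
            pkt = ingress[s].popleft()
        elif transit[s]:
            pkt = transit[s].popleft()
        else:
            continue
        serve_transit[s] = not serve_transit[s]
        if s - 1 == 0:
            delays[pkt["origin"]].append(tick - pkt["born"])
        else:
            transit[s - 1].append(pkt)

# Transit packets from distant switches still accumulate more delay.
for s in (1, NUM_SWITCHES - 1):
    d = delays[s]
    print(f"switch {s}: {len(d)} delivered, avg delay {sum(d)/len(d):.1f} ticks")
```

Even with two queues per switch, the delivery counts and delays remain sharply skewed toward switches near the hub, consistent with the position-dependent unfairness described above.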