ATM is a switching and multiplexing technique designed for transmitting digital information, such as data, video, and voice, at high speed, with low delay, over a telecommunications network. In ATM networks, connections or “calls” must be established between one information device, such as a computer system or router, and another. This call or connection is sometimes referred to as a “virtual connection” (VC), particularly where a specified data pipe is artificially segmented, through software, into separate data pathways, each pathway servicing a particular VC. The ATM network includes a number of switching nodes coupled through communication links. Often a switch acts as an intermediary to direct one or more of these VCs through a particular network node. FIG. 1 is a block diagram of a portion of such a telecommunications network. The network 100 shown in FIG. 1 includes a switch 105. The switch contains line cards 110 that typically have thousands of VCs that transmit user data, though at a given time only a small fraction of the VCs may be transmitting data. The incoming lines for the user data typically may be T1 lines with a capacity of 1.544 Mbps. The switch also contains trunk cards 115 connected to outgoing trunk lines that typically may be OC3 lines with a capacity of 155 Mbps. The traffic from a number of incoming lines goes out over one trunk card (e.g., trunk card 120). The data is then directed to various nodes in the network 125a–125e.
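By way of illustration only, the relative capacities above imply how many fully loaded T1 lines a single OC3 trunk could in principle carry. The sketch below uses nominal line rates (1.544 Mbps for T1, 155.52 Mbps for OC3) and considers bandwidth alone, ignoring framing and overhead.

```python
# Illustrative arithmetic only: nominal line rates, bandwidth as the
# sole constraint (framing and overhead are ignored).
T1_MBPS = 1.544    # nominal T1 line rate
OC3_MBPS = 155.52  # nominal OC3 line rate

t1_per_oc3 = int(OC3_MBPS // T1_MBPS)
print(t1_per_oc3)  # 100 -> roughly one hundred fully loaded T1 lines per OC3 trunk
```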
A calendaring scheme is typically used to determine which of the hundreds or thousands of active VCs will have access to the available bandwidth and be able to transmit data. The scheme depends on such factors as the number of active VCs, the bandwidth a particular VC requires, and the quality of service a particular user is entitled to. FIG. 2 depicts a simplified calendaring scheme for five VCs, A through E. Typically each calendar time slot may have several VCs to be processed at that time. Each VC has a rate at which it is supposed to send data. The calendaring scheme shown in FIG. 2 includes a calendar 205 showing the calendaring of VCs A through E. As shown, VC A will be processed at time equal to 100 ms. This means that ASIC hardware will put a cell from VC A in the ready queue 210, where it will be transmitted in approximately a millisecond. In practice, several cells from several VCs are queued at each time slot, as noted above. At time equal to 100 ms, a cell from VC A1 is in trunk queue 215, while cells from VCs A2–A3 are in ready queue 210. The data in the ready queue 210 is forwarded to the trunk queue 215, which may be only one cell deep, and is then transmitted over the trunk. Depending on current usage and memory use, as well as quality of service, VC A (i.e., the VCs processed at time equal to 100 ms) will then be placed farther down the calendar for future processing. This placement is determined in parallel with the processing and is typically accomplished within nanoseconds. In the example shown in FIG. 2, VC A is scheduled for processing every 700 ms. This means that VC A will not be processed again until time equals 800 ms. VCs B through E are scheduled similarly. New VCs are placed in the calendaring scheme when they become active. Once they are opened, they are placed in the calendaring scheme at a certain time and are then processed at a promised frequency. This calendaring is done dynamically through use of an algorithm implemented by the ASIC hardware.
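A minimal software sketch of such a calendaring scheme follows. It is illustrative only, assuming the 100 ms slot granularity of the FIG. 2 example; the names used (Calendar, open_vc, process_slot) are hypothetical and do not reflect the actual ASIC implementation.

```python
from collections import defaultdict, deque

class Calendar:
    """Simplified software model of the hardware calendaring scheme.
    Slot times are in milliseconds, with 100 ms granularity as in FIG. 2."""

    def __init__(self):
        self.slots = defaultdict(list)   # time slot (ms) -> list of VC names
        self.interval = {}               # VC name -> reschedule interval (ms)
        self.ready_queue = deque()       # cells awaiting the trunk queue

    def open_vc(self, vc, interval_ms, start_ms):
        """A newly active VC is placed in the calendar at some start time
        and is thereafter processed at its promised frequency."""
        self.interval[vc] = interval_ms
        self.slots[start_ms].append(vc)

    def process_slot(self, now_ms):
        """Process one time slot: queue a cell for each VC scheduled at this
        time and, in parallel, reschedule that VC farther down the calendar."""
        for vc in self.slots.pop(now_ms, []):
            self.ready_queue.append((vc, now_ms))              # cell to ready queue
            self.slots[now_ms + self.interval[vc]].append(vc)  # reschedule

# Mirror the FIG. 2 example: VC A is processed at 100 ms and, with a
# 700 ms interval, is not processed again until 800 ms.
cal = Calendar()
cal.open_vc("A", interval_ms=700, start_ms=100)
cal.process_slot(100)
```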
The calendaring is typically accomplished within one clock cycle, which does not leave time for sorting or calculating. There is neither the ability nor the desire to fill every time period; the algorithm places the VCs so as to ensure fairness. The result is that there are sporadic empty time periods in which no data is being forwarded to the trunk queue for transmission. As shown in FIG. 2, calendar 205 does not have a VC scheduled for time periods 600 ms, 700 ms, 900 ms, 1200 ms, 1400 ms, 1600 ms, or 1800 ms. The system takes just as long to process an empty time period as a full one, and this time is therefore wasted.
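The wasted slots can be illustrated as follows. The occupied set below is simply the complement, over the 100 ms to 1800 ms horizon, of the empty time periods listed above; it is not intended to reproduce the full schedule of FIG. 2.

```python
# Illustrative count of idle calendar slots at 100 ms granularity.
SLOT_MS = 100
occupied = {100, 200, 300, 400, 500, 800, 1000, 1100, 1300, 1500, 1700}
horizon = range(100, 1900, SLOT_MS)

# Each empty slot still consumes a full processing period, so this
# fraction of the scheduler's time produces no cells for the trunk.
empty = [t for t in horizon if t not in occupied]
print(empty)  # [600, 700, 900, 1200, 1400, 1600, 1800]
```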