1. Field of the Invention
The present invention relates to a system for the regulation of information trains for a packet switch.
FIG. 1 shows the schematic diagram of a device for the regulation of information trains within a switch. A device of this kind has chiefly input modules, output modules and a routing module.
To simplify the description, only three input modules and three output modules, respectively referenced E1, E2, E3; S1, S2, S3 have been shown in this figure.
In general, these modules are each made in the form of an electronic card with a standard format.
Each input module receives an information train at its input port. This information train, which is at a high bit rate, may or may not be sporadic.
Each module, namely each input card or output card, has two ports, an input port and an output port. The routing module has at least as many input ports and output ports respectively as it has input cards and output cards respectively.
An input link of the switch forms an input port of an input card.
A port consists of a channel that carries several paths, forming an interlacing of information trains; at any given instant, there is only one cell on the channel. Information trains enter asynchronously and exit asynchronously.
After going through an input and output module (or card), the order of the packets in each information train must remain the same.
Reference may be made, for a clearer understanding of the description, to the diagram of FIG. 2, which shows cases of interlacing of information trains flowing through an input card or through an output card.
Indeed, this diagram illustrates the arrival of three information trains at the input port, with the bit rate DE of the card, and the exit of these three information trains at the output port, with the bit rate DS of the card.
The digits 1 and 2 mark the order of the cells within one and the same train. It can be seen that the order of the packets in each information train is the same at output as it is at input.
With an information train there is associated a bit rate (this is the number of cells per second) which is also the bit rate of the path on which it is moving.
With a port there is associated a bit rate (the borderline value of the number of cells per second independently of the information trains to which these cells belong). The input bit rate DE of an input card is generally equal to the output bit rate DS of this card. It may also be lower.
It will be recalled that the term "active path" for a card is understood to mean a path for which there are cells in this card (this notion of active path is therefore internal to a switch).
The cells are placed on each output link according to a particular rule by distinguishing the paths to which they belong. This particular rule could be for example:
either in proportion to the bit rates of each path, or by the equitable sharing of the link between the active paths.
Other rules may be envisaged.
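The two rules above can be sketched as link-sharing computations; the path names and bit rates below are purely illustrative and not taken from the source:

```python
# Sketch of two possible output-link sharing rules (hypothetical paths and rates).

def proportional_shares(path_rates):
    """Share the output link in proportion to the bit rate of each path."""
    total = sum(path_rates.values())
    return {path: rate / total for path, rate in path_rates.items()}

def equitable_shares(active_paths):
    """Share the output link equally between the active paths."""
    count = len(active_paths)
    return {path: 1.0 / count for path in active_paths}

# Three illustrative paths with bit rates in cells per second.
rates = {"T111": 30.0, "T211": 10.0, "T311": 60.0}
print(proportional_shares(rates))     # {'T111': 0.3, 'T211': 0.1, 'T311': 0.6}
print(equitable_shares(list(rates)))  # each active path gets 1/3 of the link
```

Either rule requires the output card to distinguish the paths its cells belong to, a point taken up again in the discussion of the prior-art approaches below.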
It will also be recalled that it is possible for a communication (for information other than data trains) between switches to be set up in order to adjust the bit rate of the paths between switches.
It is then possible for a switch to send an indication to its upline switch instructing it to reduce the bit rates of the paths coming from the output cards of this upline switch.
An output of the upline switch is then associated with an input card of the switch considered.
Reference may be made to the diagram of FIG. 3 which illustrates this example.
A clearer understanding of the problem posed can be obtained by looking inside a switch. Tijk denotes the information train flowing on the path coming from the input module i and going towards the output module j, this information train having an index k among the trains having the same input module and the same output module.
The routing module is capable of processing the sum of the incoming bit rates. Its function is to send a data train from a port i to a port j, following the flow. It is possible that information trains coming from different input ports will be routed to the same output module.
The constraints of processing time and of storage capacity of the input or output modules are such that the bit rate that a module can accept at input (DE) is limited.
The reference DEe will be applied to the input bit rate of an input module, DEs being that of an output module and DEa that of the routing device.
The reference DSe will be applied to the output bit rate of an input module, DSs to that of an output module and DSa to that of the routing device.
Hereinafter, a distinction shall be made between the case where the routing device is not limited in output bit rate and the case where its output ports and input ports have the same bit rate:
Case A
Even if the routing device were to bear the superimposition of the bit rates of each of the information trains that converge towards the same output port (by making the bit rate of each output port equal to the sum of the bit rates of its input ports: DSa = Σ DEa), the output card is limited by the bit rate DEs that it can accept, with DEs << Σ DEa; a bottleneck thus appears at the input of the output card.
Case B
When the routing card has input and output ports with the same bit rates (DSa=DEa), a buffer of limited size is associated with each output port with the aim of accepting the excess bit rate as compared with DSa, generated by the simultaneity of convergence of sporadic trains.
However, the risk remains that this limited memory capacity will be insufficient to collect the excess of bit rate as compared with DSa, whence the risk of uncontrollable losses in the buffer.
In this case, the problem is shifted from the input of the output card (case A) to the output of the routing device (case B). This is illustrated in the diagram of FIG. 5.
There is therefore a bottleneck. This bottleneck is set up by the routing device.
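A minimal numeric sketch of the two cases (all figures hypothetical): in case A the routing device carries DSa = Σ DEa towards an output port but the output card accepts only DEs, while in case B the routing output port itself is capped at DSa = DEa. The excess rate is the same; only its location changes.

```python
# Hypothetical figures: n input ports, each at d cells per second.
n, d = 8, 365_000
sum_dea = n * d                  # worst case: all inputs converge on one output

# Case A: the routing output carries the full sum, the output card accepts DEs.
des = d
excess_case_a = sum_dea - des    # piles up at the input of the output card

# Case B: the routing output port is itself limited to DSa = DEa.
dsa = d
excess_case_b = sum_dea - dsa    # piles up in the routing device's output buffer

print(excess_case_a, excess_case_b)  # identical excess, shifted in location
```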
2. Description of the Prior Art
Hereinafter, the approaches that have been provided to resolve this problem shall be enumerated.
1) The first approach consists in placing a buffer at output of the routing device. Practically, this amounts to placing a large memory at each output port. The memory size is a function of the number of inputs and of the bit rate so as to buffer the excess bit rate at output of the routing device.
In the case A (defined here above), the memories are placed between the output port of the routing device and the output card as shown in FIG. 6.
In the case B, these memories are placed in the routing card at the output port as shown in FIG. 5.
The size of the buffer needed to collect the excess bit rate increases linearly according to the relationship (n − 1) × D × T, with n as the number of inputs of the routing device, D as the bit rate of the input ports of the routing device and of the output port (case B) or of the input of the output card (case A), and T as the duration during which the simultaneity of convergence occurs.
For eight inputs at 155 Mbit/s and 424-bit cells (giving about 365 k cells per second) and information trains with a length of 200 cells (the duration of a train being 547 µs), a buffer size of 1,400 cells is needed to collect the excess bit rate over the duration of one train (547 µs); the required size increases linearly so long as this simultaneity of convergence continues (for example over several trains).
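The figures in this example can be reproduced from the relationship (n − 1) × D × T; a small sketch using the values quoted above:

```python
n = 8                        # inputs of the routing device
link_rate = 155e6            # bits per second on each input port
cell_bits = 424              # bits per cell
d = link_rate / cell_bits    # about 365,000 cells per second per port
train_cells = 200            # length of one information train
t = train_cells / d          # duration of one train, about 547 microseconds

buffer_cells = (n - 1) * d * t   # excess to absorb over one train's duration
print(round(d), round(t * 1e6), round(buffer_cells))  # 365566 547 1400
```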
Thus, the memory sizes needed very soon become substantial, and there is no certainty that the memory size chosen will be sufficient to collect the excess bit rate in every possible example.
Now, having uncontrollable losses is intolerable.
Furthermore, in the case B, when the routing card is taken from among those available in the market, there is no possibility of modifying the memory size within the routing device. This routing device is fixed by the manufacturer (for example the Fujitsu MB86680A matrix has buffers at output with a capacity of 75 cells).
2) The second approach consists in placing a buffer at the input of the routing device and using a resource reservation mechanism.
A buffer is placed at each input of the routing device. The input cards make use of a central unit which has knowledge of all the needs of the switch in terms of bit rate. This central unit adjusts the output bit rates of the buffers in such a way that there is no overflow at output of the routing device. This approach is illustrated in FIG. 7.
The maximum buffer size is then only D × T, with D as the bit rate from the input port of the input card and T as the duration during which the central unit does not allow this card to transmit (this is the worst case, the central unit having to see to it that the outflow from the buffers takes place as efficiently as possible).
To implement this approach it will be necessary however to add rhythm generators to the buffers in order to carry out the commands of the central unit.
Furthermore, the centralization of the decisions in a single unit is dangerous (if the central unit is defective, the overall operation of the switch is affected).
Furthermore, when the bit rate of a buffer is reduced because of paths leading towards output cards on which there is a simultaneity of convergence of trains, there is a risk that this system might penalize the paths borne by the same input cards that go towards output cards for which there is no simultaneity of convergence.
In order to resolve the latter problem, one approach consists in placing several buffers on each input of the routing device; as many buffers are positioned on each input as there are possible output directions.
A central unit adjusts the output bit rates of the buffers in such a way that there is no overflow at output of the routing device. This is shown in the drawing of FIG. 8.
The drawback of this approach is still the fact that it is necessary to add rhythm generators to carry out the commands of the central unit, as well as the fact that the decisions are centralized in a single unit.
This is a complicated approach to set up and control.
Furthermore, the central unit does not take account of the paths individually. If it is desired to place the cells on each output link according to a particular rule, distinguishing the paths to which they belong, it is necessary to have a device in each output card that arbitrates each of the paths borne by the card. To this arbitration mechanism there is then added the mechanism that regulates the output bit rates of the buffers at the input of the routing device.
3) A third and final approach consists in placing a buffer at output of the input cards and in using a credit mechanism.
Buffers are placed on each output of the input card. As many buffers are placed on each output as there are paths borne by the input card.
FIG. 9 illustrates the credit mechanism for each path, between output cards and input cards. This figure shows only the buffer and the credit register associated with a particular path in an input card (in fact there are as many buffers and credit registers as there are paths borne by an input card).
According to this approach, an input card may send out a packet (or cell) belonging to a connection when the credit for this path is not zero.
After the sending of a cell by the input card, the credit value of the path is decremented by 1. After the sending of a cell for this path by the output card, the credit value of the path is incremented by 1.
Thus, each hop through the matrix for a path is managed by a credit algorithm.
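The per-path credit exchange described above can be sketched as follows; the class and method names are illustrative, not from the source:

```python
from collections import deque

class PathCredit:
    """Per-path buffer and credit register held in an input card (illustrative)."""

    def __init__(self, initial_credit):
        self.credit = initial_credit   # credits granted for this path
        self.buffer = deque()          # cells waiting in the input card

    def enqueue(self, cell):
        self.buffer.append(cell)

    def try_send(self):
        """The input card may transmit only when the credit is non-zero."""
        if self.credit > 0 and self.buffer:
            self.credit -= 1           # decremented after the input card sends
            return self.buffer.popleft()
        return None                    # blocked until a credit comes back

    def credit_returned(self):
        """Called when the output card has sent a cell for this path."""
        self.credit += 1               # incremented after the output card sends

path = PathCredit(initial_credit=3)
for cell in ("c1", "c2", "c3", "c4"):
    path.enqueue(cell)
sent = [path.try_send() for _ in range(4)]
print(sent)               # ['c1', 'c2', 'c3', None]: the fourth send is blocked
path.credit_returned()
print(path.try_send())    # 'c4': allowed once the output card returns a credit
```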
One of the useful points here is that there are no additional rhythm generators in the input cards. These cards transmit only when credits are available.
Another useful feature is the independence of the functioning of the different cards. There is no centralized management.
This approach does not completely resolve the problem of jamming or blocking at the output of the routing device. Indeed, should many credits be accumulated in the input cards (a situation that may occur at start-up but also during normal operation) and should several cards decide simultaneously to use their credits for paths going towards one and the same output card, there will be an overflow of the buffers of the matrix in case B or saturation of the input of the output card in case A.
In this case, the excess bit rate which will then have to be collected in the output buffers of the routing device (cf. the first approach) is (n − 1) × D, with n as the number of inputs that transmit simultaneously and D as the bit rate of the input ports of the routing device and of the output port (case B) or of the input of the output card (case A).
If T is the duration during which the simultaneity of convergence on the output card occurs, then T = c × V × 1/D, with c as the number of credits and V as the number of paths borne by an input card and going to the same output card (taking the case where the credits within one and the same input card, for the paths going to one and the same output card, are used successively).
Thus, the memory size needed to collect the excess bit rate is: (n − 1) × D × c × V × 1/D = (n − 1) × c × V,
which represents, for c = 3, n = 8 and V = 300, a considerable size of 6,300 cells (which, especially in case B, is far greater than the 75 cells provided for in a commercially available routing device).
The size of 6,300 cells corresponds to the result obtained from the relationship defined in the first approach, for a period of time T needed to let out 300 trains of 3 cells each (3 being the credit value, rather than trains of 200 cells).
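The sizing above can be evaluated directly from the relationship, in which the bit rate D cancels out; the helper name is illustrative:

```python
def third_approach_excess_cells(n, c, v):
    """Excess to collect: (n - 1) * D * T with T = c * v / D, so D cancels out."""
    return (n - 1) * c * v

# Values from the example: 8 inputs, 3 credits, 300 paths per input card.
print(third_approach_excess_cells(n=8, c=3, v=300))
```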
In the prior art approaches that have been identified, it can be seen that the risks of blocking persist despite considerable memory size placed at output of the routing device (first approach) or smaller-sized buffers at input of the routing device associated with a credit mechanism (third approach).
When the problem of blocking is resolved in every case (second approach), there then arises the question of controlling a highly complex system, as well as that of the risk involved in centralizing the decisions. Furthermore, the latter approach requires an additional arbitration mechanism.