The integrated services digital network (ISDN) and its associated message oriented signaling protocols place new and demanding requirements on core switching technologies. In order to meet the ISDN specification, an efficient method of interconnecting a large number of high performance processors in a distributed processing system must be provided.
Previously developed systems suffered from a variety of inadequacies. First, the number of simultaneous connections between processing elements is limited, and the rate at which connections may be achieved is unsuitable, particularly for future applications. Preferably, up to thirty-two simultaneous connections would be supported, with a maximum connection rate approaching 2.5 million connections per second.
Further, present day systems do not offer flexibility in optimizing the cost of the system versus the failsafe mechanisms for overcoming partial system failures. Because the amount of system redundancy will vary depending on the application in which the switching network is used, it is beneficial that the switching network allow for selective redundancy for providing high availability in the most important subsystems.
Additionally, present day switching networks do not allow for cost effective growth capabilities, but rather, force the user to buy a new switching network to accommodate system expansion. Thus, present day switching networks do not offer both short and long term cost effectiveness for growing companies.
Another important aspect of switching networks is their ability to isolate and detect faults. In the telecommunications area, it is important to detect and isolate errors so that faulty data does not propagate through to other processing systems.
Thus, a need has arisen for a switching network capable of a large number of simultaneous connections and a fast connection rate, which offers high system availability and growth possibilities. Furthermore, the switching network system should have an effective means for detecting and isolating system faults and taking corrective measures.

SUMMARY OF THE INVENTION
In accordance with the present invention, a switching network method and apparatus is provided which substantially eliminates or reduces the disadvantages and problems associated with prior switching networks.
The switching network of the present invention selectively creates multiple paths between processing nodes, each of which has one or more processors attached. The switching network includes a gateway subsystem attached to each processing node. A gateway subsystem may be in either an "originator" or a "server" mode. A gateway is in originator mode when the processor to which it is attached is requesting service; it is in server mode when it is attached to a processor to which a connection has been requested by an originator gateway. An originator gateway forwards a request for a connection (request for service) from the processor to which it is attached to a transport group controller subsystem, which is connected to a predetermined group of gateway subsystems. The server gateway receives service requests from its transport group controller and initiates a path request upon receiving a service request. The server gateway also issues a release request in response to a release command from the associated processing node once a connection has been set up and the desired data transfer has terminated. Each gateway is responsible for buffering service requests from the processing node to which it is attached and from other originator gateways.
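The two gateway roles described above can be illustrated with a minimal sketch. The class, method, and queue names below are hypothetical, chosen only to illustrate the originator/server flow; the stub TGC interface is likewise an assumption, not part of the disclosed design.

```python
from collections import deque


class Gateway:
    """Illustrative sketch of a gateway subsystem (hypothetical names).

    In originator mode it forwards a service request toward the server
    side; in server mode it buffers incoming service requests and later
    issues a path request for each one.
    """

    def __init__(self, gw_id):
        self.gw_id = gw_id
        self.service_fifo = deque()  # buffered service requests (server role)

    # Originator role: forward a service request via the attached TGC.
    def request_service(self, tgc, server_id):
        tgc.submit_service_request(origin=self.gw_id, server=server_id)

    # Server role: a service request arrives from the TGC; buffer it.
    def receive_service_request(self, origin_id):
        self.service_fifo.append(origin_id)

    # Server role: pop the oldest buffered request and issue a path request.
    def issue_path_request(self, tgc):
        if self.service_fifo:
            origin_id = self.service_fifo.popleft()
            tgc.submit_path_request(origin=origin_id, server=self.gw_id)
```

This sketch keeps the buffering responsibility inside the gateway, as the summary describes, while leaving the forwarding fabric (the TGC) behind a narrow interface.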
The transport group controller (TGC) acts as an interface between the group of gateways to which it is attached and the other subsystems in the switching network. The TGC acts as a funnel for requests issued by its gateways, such that only one request from the associated gateway group is forwarded to each of three dedicated buses during a request cycle. Since service, release, and path requests each have a dedicated bus, the TGC may forward one request of each type from its associated gateways during a given cycle.
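The funneling behavior above can be sketched as one queue per request type, with at most one request per dedicated bus drained each cycle. All names here are illustrative assumptions for the sketch, not the actual bus or register names.

```python
from collections import deque


class TransportGroupController:
    """Sketch of TGC arbitration: per request cycle, at most one request
    of each type (service, release, path) is forwarded, each on its own
    dedicated bus. Hypothetical model, not the hardware interface."""

    BUS_TYPES = ("service", "release", "path")

    def __init__(self):
        # one queue per request type, fed by the attached gateway group
        self.queues = {t: deque() for t in self.BUS_TYPES}

    def submit(self, req_type, request):
        """A gateway in the group posts a request of the given type."""
        self.queues[req_type].append(request)

    def run_cycle(self):
        """Forward at most one request per dedicated bus this cycle."""
        forwarded = {}
        for bus in self.BUS_TYPES:
            if self.queues[bus]:
                forwarded[bus] = self.queues[bus].popleft()
        return forwarded
```

Because each request type has its own bus, a burst of service requests cannot starve release or path requests; they arbitrate independently.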
A service request distributor accepts service requests from any of the transport group controllers, reformats each request, and transmits it to the server TGC for transfer to the server gateway. A transport interchange supervisor subsystem receives path requests from the server gateway via its associated transport group controller subsystem. The transport interchange supervisor is responsible for maintaining a record of the status (busy or idle) of all the gateways. If either the originating gateway or the serving gateway is busy, the transport interchange supervisor subsystem issues a negative acknowledgment to the serving and originating gateways, and the serving gateway places the path request at the bottom of its path request FIFO for later execution. On the other hand, if both the originating and serving gateways are idle, the transport interchange supervisor subsystem updates the status of the originating and serving gateways to busy and sets up a two-way connection between the two gateways in a transport interchange subsystem. The transport interchange supervisor subsystem then issues acknowledge signals to both the originator and server gateways.
Once a connection between gateways through the transport interchange subsystem is established, the processors may communicate through the connection. When the communication is completed, the serving processor initiates a release request through the server gateway and server transport group controller to the transport interchange supervisor. In response, the transport interchange supervisor updates the status of the originator and server gateways to idle.
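The supervisor's busy/idle bookkeeping described in the preceding paragraphs can be sketched as follows. The class name, return tokens ("ACK"/"NACK"), and method signatures are assumptions made for illustration only.

```python
class TransportInterchangeSupervisor:
    """Sketch of the supervisor's connection bookkeeping: it tracks each
    gateway as busy or idle, negatively acknowledges a path request when
    either endpoint is busy (the server gateway then re-queues it), and
    otherwise marks both endpoints busy and records the two-way path.
    Hypothetical model of the behavior described above."""

    def __init__(self, num_gateways):
        self.busy = [False] * num_gateways
        self.connections = {}  # gateway id -> peer gateway id

    def path_request(self, origin, server):
        if self.busy[origin] or self.busy[server]:
            # Server gateway places the request at the bottom of its FIFO.
            return "NACK"
        self.busy[origin] = self.busy[server] = True
        self.connections[origin] = server
        self.connections[server] = origin
        return "ACK"  # two-way path set up in the transport interchange

    def release_request(self, origin, server):
        """Issued from the server side once the data transfer completes."""
        self.busy[origin] = self.busy[server] = False
        self.connections.pop(origin, None)
        self.connections.pop(server, None)
```

Note that the supervisor is the single point of truth for gateway status, which is what allows it to refuse a path request without consulting the gateways themselves.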
In another embodiment of the present invention, a plurality of low speed processors are attached to a transport node controller. The transport node controller communicates with a number of processors in order to allow a single processing node to support more than one processor. The transport node controller may provide a path between processors associated with it.
In yet another embodiment, the switching network includes a transport maintenance controller which oversees the integrity of the data communicated through the switching network. The transport maintenance controller operates independently of the paths used for creating connections, thereby maintaining the switching network without interfering with the speed at which connections are formed. Each subsystem contains maintenance buffers through which the necessary information is communicated.
In yet a further embodiment of the present invention, a system of "timing islands" is provided such that high speed data transfer can be reliably effectuated. The timing islands provide levels at which the clocks are synchronized with the data, in order to prevent skewing between the timing used by various subsystems.