1. Field of the Invention
The present invention relates to the field of computer networking, specifically to the field of data communications in private or public packet networks. More specifically, the present invention relates to a method and apparatus for transmitting cells through a switch from a source to a destination.
2. Description of Related Art
Asynchronous Transfer Mode (ATM) and other packet networks are characterized by high-speed switches that act to switch data cells of a fixed size and format through the network. Typically, ATM networks communicate using data cells that are relatively short, fixed length packets of data that carry voice, video, and computer data across networks at high speeds relative to the speeds of traditional data networks such as Ethernet, Token Ring, and Fiber Distributed Data Interface (FDDI) networks. A typical ATM cell is 53 bytes long, wherein 5 bytes are for header information and 48 bytes are for data. Oftentimes, however, variations of the same are provided wherein the cell length is modified for different reasons. For example, in one fixed length packet network, a 4-byte header is used and 76 bytes of data are used. This particular cell size is advantageous in that it
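The fixed cell layout described above can be sketched as follows. This is an illustrative model only; the field widths come directly from the text (a 53-byte cell comprising a 5-byte header and 48 bytes of data), while the function names are hypothetical:

```python
# Illustrative sketch of the standard ATM cell layout described above:
# a 53-byte cell with a 5-byte header and a 48-byte payload. The variant
# mentioned in the text (4-byte header, 76 bytes of data) would be
# modeled the same way with different sizes.

HEADER_LEN = 5
PAYLOAD_LEN = 48
CELL_LEN = HEADER_LEN + PAYLOAD_LEN  # 53 bytes total

def make_cell(header: bytes, payload: bytes) -> bytes:
    """Assemble a fixed-length cell, padding a short payload with zeros."""
    assert len(header) == HEADER_LEN
    assert len(payload) <= PAYLOAD_LEN
    return header + payload.ljust(PAYLOAD_LEN, b"\x00")

def split_cell(cell: bytes) -> tuple[bytes, bytes]:
    """Separate a received cell back into its header and payload."""
    assert len(cell) == CELL_LEN
    return cell[:HEADER_LEN], cell[HEADER_LEN:]
```

Because every cell is the same length, a switch can locate the header and payload by fixed offsets, which is part of what makes fixed length packet switching fast.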
Traditional local area networks (LANs) operate over shared media. All network devices on a particular network segment must share media with each other so that each device is provided with only a fraction of the total bandwidth of the media. Newer generation intelligent hubs support multiple segments of different types of LANs across their backplanes to permit LANs to be segmented so that each network device is provided with greater bandwidth. Such hubs provide for a dedicated LAN interface so that, for example, in the case of an Ethernet LAN, a single network device is provided with the full 10 Mb/s bandwidth of the LAN segment. Each port on the hub is connected internally within the hub, typically by either a high-speed bus or a cross-connect.
Such hubs may be known as switching hubs. Generally, a switching hub acts to concentrate wiring for a communications network in a central location such as a facility's telephone-wiring closet. The hub comprises a cabinet having multiple ports wherein each port supports one LAN segment. Each local area network may support multiple network devices, such as an end user system, that may communicate over the local area network.
Such hub architectures are limited in that they cannot scale to the high bandwidths required for integrated networks that transmit real time voice, video, and data. Fixed length packet networks, however, are capable of providing the bandwidth and throughput required for such applications. Such networks are capable of transmitting integrated voice, video, and data traffic because, as described above, they use small, fixed size cells. By transmitting small, fixed size cells, these networks overcome delays associated with transmitting relatively large, variable length packets as experienced in traditional data networks. Accordingly, fixed length packet networks greatly increase transmission efficiencies.
Standards have been adopted for ATM networks, for example, by the International Telegraph and Telephone Consultative Committee (CCITT). The ATM Forum, a group of telecommunications and data networking companies formed to ensure the interoperability of public and private ATM implementations, facilitates, clarifies, and adopts ATM standards.
The ATM standards are defined with respect to a user-to-network interface (UNI) and a network-to-network interface (NNI). Thus, UNI refers to an interface between a network device, such as an end user system, and an ATM switch. ATM switches transmit information in fixed size cells, which are formed of well-defined, size-limited header areas and user information areas as described before. ATM switches utilize a variety of switching architectures, including, for example, matrix switching architectures, backplane bus architectures, and other architectures. The two primary tasks that are generally accomplished by an ATM switch include the translation of path information and the transport of ATM cells from an input port to an output port.
A switch is typically constructed of a plurality of switching elements that act together to transport a cell from the input of the switch to the correct output. Various types of switching elements are well known, such as the matrix switching elements and the backplane bus switching elements mentioned before. Each is well known to those of ordinary skill in the art, and each carries out the two above-mentioned tasks.
In a traditional fixed length packet switch, a data packet is received from an external network, e.g., an Ethernet switch, where it is temporarily stored in a buffer. After a plurality of packets have been received, the packets are formed into cells and then transmitted from an input stage to an intermediate stage, where a plurality of cells are combined to create a stream of cells that is transmitted to the switching fabric itself. The switching fabric then receives the stream of cells, stores it in a temporary buffer, for example, an SRAM buffer, and then transmits it back out from the switching fabric to the intermediate stage, where the stream of cells is broken back into individual cells. The individual cells are transmitted to the output end, where they are converted back to packets and then transmitted to the appropriate output location. In a conventional system, a cell that is to be transmitted in a multicast format is transmitted once for each destination to which it is to be delivered. Accordingly, system efficiencies are lost because system resources are used to repeatedly transmit the same cell of data. Additionally, even absent the above described inefficiencies, the receive devices or systems on a high speed bus cannot process received data at the rate at which the data is being transmitted to them. Accordingly, the receive buffers are not emptied fast enough. When the receive buffers are not emptied at a fast enough rate, a type of blocking known as head of line blocking occurs.
Another problem that is encountered during switch fabric operations is that of congestion within the switch fabric. Traditionally, when congestion occurs and there is no room for additional data, groups of cells are discarded and are not transmitted to their destination. For example, if a group of cells includes data that causes congestion for only one destination, the whole group of cells may nonetheless be discarded. What is needed, therefore, is a system and apparatus that intelligently controls data transmissions in light of detected congestion conditions to minimize the occurrence of congestion and the quantity of data that is discarded due to congestion.
The present apparatus and method of use comprises a system that solves the aforementioned problems by efficiently transmitting data that is to be delivered to a plurality of destinations in a manner that also reduces head of line blocking and that reduces congestion, or the negative effects of congestion, including the discarding of data. According to an exemplary embodiment of the invention, a multicast cell is stored in a separate buffer that is reserved for buffering multicast cells. As multicast cells typically comprise about twenty percent of high speed data bus traffic, separating the multicast cells increases the amount of unicast data that can be transmitted to a particular device by twenty-five percent, thereby significantly reducing head of line blocking. In general, the method includes buffering the multicast messages that are to be delivered to a plurality of destinations in separate memory buffers at both the transmit and receive ends of the high speed data bus. A multiplex device receives a cell, determines that it is a multicast cell, and stores it in the corresponding buffer for multicast cells. The multiplex device determines that the cell is a multicast cell by examining the logical value of a bit within the header of the cell that defines whether the cell is a multicast cell or a unicast cell.
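The classification step described above can be sketched as follows. The text specifies only that a single header bit distinguishes multicast from unicast cells; the bit position used here (bit 0 of the first header byte) and all names are illustrative assumptions:

```python
from collections import deque

# Hypothetical sketch of the classification step described above: the
# multiplex device inspects one bit in the cell header and routes the
# cell to a unicast or multicast buffer accordingly. The bit position
# (bit 0 of the first header byte) is an assumption for illustration;
# the text does not specify where the bit resides.

MULTICAST_FLAG = 0x01  # assumed position of the multicast bit

unicast_buffer: deque = deque()
multicast_buffer: deque = deque()

def enqueue_cell(cell: bytes) -> str:
    """Store the cell in the buffer matching its multicast bit."""
    if cell[0] & MULTICAST_FLAG:
        multicast_buffer.append(cell)
        return "multicast"
    unicast_buffer.append(cell)
    return "unicast"
```

Because the decision is a single bit test on a fixed header offset, the multiplex device can classify cells at line rate without parsing the rest of the cell.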
If the cell is a multicast cell, the invention not only includes storing the cell in a specified buffer for multicast cells, but also includes examining the contents of an address field within the cell header. The address within the cell header is used as an index into a table stored in memory, wherein the index specifies a unique combination of destination devices that are to receive the cell, as well as their addresses. Thus, the invention includes a method and apparatus for storing unique combinations of destination addresses in relation to address indexes stored within the header portion of the multicast cells.
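The indexed lookup described above can be sketched as a simple mapping. The table contents and device identifiers below are illustrative only; the text specifies the mechanism (a header index resolving to a unique combination of destinations), not any particular entries:

```python
# Sketch of the multicast address table described above: the address
# field in the cell header is not itself a destination, but an index
# into a table whose entries name the unique combination of destination
# devices (and their addresses) for that cell. Entries are illustrative.

multicast_table: dict[int, tuple[int, ...]] = {
    0: (1, 2, 3),  # index 0 -> destination devices 1, 2, and 3
    1: (2, 4),     # index 1 -> destination devices 2 and 4
}

def destinations_for(index: int) -> tuple[int, ...]:
    """Resolve a header address index to its combination of destinations."""
    return multicast_table[index]
```

Carrying a small index rather than a full destination list keeps the cell header size-limited while still allowing the cell to be delivered to an arbitrary combination of devices.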
More specifically, the fabric access devices are formed to have three groups of buffers. The first group of buffers is a set of buffers for receiving data over a high speed data bus. A second set of buffers is for temporarily holding the so-called unicast cells that are to be transmitted to only one device. The third and final set of buffers is for holding the so-called multicast cells that are to be transmitted to a plurality of devices.
Similarly, a multiplex device that transmits and receives cells to and from the fabric access devices includes three sets of buffers as well. A first set of buffers is for holding cells that are to be transmitted to the receive buffers of the fabric access device, while the second and third sets of buffers are for receiving the unicast and multicast cells from the fabric access device. In the described embodiment of the invention, the unicast and multicast cells are transmitted over the same line or bus. Accordingly, the multiplex device includes a parsing unit that examines a field within the header portion of each of the received cells to determine whether the received cells are unicast or multicast cells. If a cell is a unicast cell, it is temporarily stored within the unicast receive buffer set. If the cell is a multicast cell, then it is temporarily stored in the multicast buffer set.
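The three-buffer arrangement shared by the fabric access devices and the multiplex device can be sketched as a simple data structure; the class and field names are illustrative assumptions:

```python
from collections import deque
from dataclasses import dataclass, field

# Sketch of the three buffer groups described above: one set for cells
# received over the high speed data bus, one for unicast cells bound for
# a single device, and one for multicast cells bound for a plurality of
# devices. Both the fabric access devices and the multiplex device are
# described as using this arrangement. Names are illustrative.

@dataclass
class CellBuffers:
    receive: deque = field(default_factory=deque)    # cells from the high speed bus
    unicast: deque = field(default_factory=deque)    # cells for exactly one device
    multicast: deque = field(default_factory=deque)  # cells for many devices
```

Keeping the multicast cells in their own buffer set is what allows unicast traffic to drain independently, which is the mechanism by which the design reduces head of line blocking.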
In addition to comprising a unique buffer structure in both the fabric access devices and the multiplex devices, the invention includes a memory whose contents include a table that is used for addressing the multicast cells. The table determines the unique set of destination devices, and their respective addresses, that are to receive the multicast cell. Accordingly, the multiplex device determines the addresses for the destination devices and distributes the multicast cell accordingly. Thus, the invention not only includes a novel buffer configuration, but also includes transmitting multicast cells with an index value of a mapped table that, in conjunction with the novel buffer configuration, enables a system to transmit a multicast message only once over a high speed data bus, thereby improving system efficiencies and increasing system throughput capacity. The inventive system also includes transmitting data packets or cells through the switch fabric of the above-described system in a manner that accounts for congestion. More specifically, a switch controller continuously monitors the memory that is used to temporarily hold data being transmitted through the switch fabric to determine the amount of memory being consumed by each of the potential destination devices that are coupled to transmit and receive data through the switch fabric. In the described embodiments of the invention, each device that is coupled to transmit and receive data through the switch fabric is allocated a specified amount of memory for temporarily holding data packets or cells that are being transmitted through the switch fabric. Accordingly, congestion occurs when the amount of memory allocated to a device cannot hold the data being received therefor. Thus, the invention includes a method and apparatus to minimize the occurrence of such congestion.
Accordingly, the switch processor continuously receives memory status for each of the memory areas that are allocated to each device that is coupled to transmit and receive data through the switch fabric. Thus, the switch processor examines the received status and assigns a congestion rating for each allocated memory area. The assigned congestion rating is then transmitted to each external device that is coupled to transmit and receive data packets or cells through the switch fabric. In the described embodiments of the invention, one of four different congestion ratings is assigned by the switch processor to each memory area that corresponds to each of the external devices that are coupled to transmit and receive data packets or cells through the switch fabric.
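The rating step described above can be sketched as a function of each memory area's occupancy. The text specifies only that one of four ratings is assigned per allocated memory area; the thresholds and rating names below are illustrative assumptions:

```python
# Hypothetical sketch of the congestion-rating step described above: the
# switch processor examines how full each device's allocated memory area
# is and assigns one of four congestion ratings. The thresholds and the
# rating names are assumptions for illustration; the text specifies only
# that four ratings exist.

RATINGS = ("uncongested", "mild", "heavy", "full")

def congestion_rating(used: int, allocated: int) -> str:
    """Map a memory area's occupancy to one of four congestion ratings."""
    fill = used / allocated
    if fill < 0.50:
        return RATINGS[0]
    if fill < 0.75:
        return RATINGS[1]
    if fill < 1.00:
        return RATINGS[2]
    return RATINGS[3]
```

Transmitting such a rating to each external device lets senders throttle traffic toward a congested destination before its memory area overflows and cells must be discarded.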