The present invention relates to the field of communications, and more particularly to integrated circuits that process communication packets.
Many communication systems transfer information in streams of packets. In general, each packet contains a header and a payload. The header contains control information, such as addressing or channel information, that indicates how the packet should be handled. The payload contains the information being transferred. Examples of the types of packets used in communication systems include Asynchronous Transfer Mode (ATM) cells, Internet Protocol (IP) packets, frame relay packets, Ethernet packets, and other packet-like information blocks. As used herein, the term "packet" is intended to include packet segments.
Integrated circuits termed "traffic stream processors" have been designed to apply robust functionality to high-speed packet streams. Robust functionality is critical with today's diverse but converging communication systems. Stream processors must handle multiple protocols and inter-work between streams of different protocols. Stream processors must also ensure that quality-of-service constraints, priority, and bandwidth requirements are met. This functionality must be applied differently to different streams, and there may be thousands of different streams.
Co-pending applications Ser. No. 09/639,966, 09/640,231, and 09/640,258, the contents of which are hereby incorporated herein by reference, describe an integrated circuit for processing communication packets. As described in the above applications, the integrated circuit includes a core processor. The processor handles a series of tasks, termed "events". Most events have an associated service address, "context" information, and "data". When an external resource initiates an event, the external resource supplies the core processor with a memory pointer to "context" information and also supplies the data to be associated with the event.
The context pointer is used to fetch the context from external memory and to store this "context" information in memory located on the chip. If the required context data has already been fetched onto the chip, the hardware recognizes this fact and sets the on-chip context pointer to point to the already pre-fetched context data. Only a small number of the system "contexts" are cached on the chip at any one time. The rest of the system "contexts" are stored in external memory. This context fetch mechanism is described in the above-referenced co-pending applications.
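The caching behavior described above can be sketched in software. The following C fragment is a hypothetical illustration only; the slot count, context size, and all identifiers (`ctx_slot`, `fetch_context`, `ext_mem`) are assumptions for the sketch and do not appear in the referenced applications, which implement this mechanism in hardware.

```c
#include <assert.h>
#include <string.h>

#define NUM_SLOTS 4   /* on-chip context slots (illustrative) */
#define CTX_WORDS 8   /* words per context (illustrative) */

typedef struct {
    unsigned ext_addr;          /* external-memory address of the context */
    int      valid;
    unsigned words[CTX_WORDS];  /* on-chip copy of the context */
} ctx_slot;

static ctx_slot cache[NUM_SLOTS];
static unsigned ext_mem[1024];  /* stand-in for external context memory */

/* Return an on-chip slot holding the context at ext_addr, fetching it
 * from external memory only when it is not already on the chip. */
ctx_slot *fetch_context(unsigned ext_addr)
{
    /* If the context was already fetched, point at the existing copy. */
    for (int i = 0; i < NUM_SLOTS; i++)
        if (cache[i].valid && cache[i].ext_addr == ext_addr)
            return &cache[i];

    /* Otherwise take a free slot (or slot 0) and copy the context in. */
    int victim = 0;
    for (int i = 0; i < NUM_SLOTS; i++)
        if (!cache[i].valid) { victim = i; break; }

    cache[victim].ext_addr = ext_addr;
    cache[victim].valid = 1;
    memcpy(cache[victim].words, &ext_mem[ext_addr],
           sizeof cache[victim].words);
    return &cache[victim];
}
```

Calling `fetch_context` twice with the same address returns the same slot without a second copy from external memory, mirroring the hardware's recognition of an already pre-fetched context.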
In order to process an event, the core processor needs the service address of the event as well as the "context" and "data" associated with the event. The service address is the starting address for the instructions used to service the event. The core processor branches to the service address in order to start servicing the event.
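In software terms, branching to a service address amounts to an indirect jump. The sketch below models this with a C function pointer; the types and the `dispatch`/`service_rx` names are assumptions made for illustration, not part of the described circuit.

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-ins for the event's context and data. */
typedef struct { unsigned words[8]; } context_t;
typedef struct { unsigned bytes[64]; } data_t;

/* The service address is modeled as a function taking the context
 * and data associated with the event. */
typedef void (*service_fn)(context_t *ctx, data_t *data);

static int serviced;  /* records that the service routine ran */

static void service_rx(context_t *ctx, data_t *data)
{
    (void)ctx; (void)data;
    serviced = 1;   /* placeholder for real event servicing */
}

/* "Branching to the service address" is an indirect call here. */
void dispatch(service_fn service_addr, context_t *ctx, data_t *data)
{
    service_addr(ctx, data);
}
```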
Typically, the core processor needs to access a portion of the "context" associated with the event, so the appropriate part of the "context" is read into the core processor's local registers. When this is done, the core processor can read, and if appropriate modify, the "context" values. However, when the core processor modifies a "context" value, the "context" values stored outside of the core processor registers must be updated to reflect this change. This can happen under direct programmer control or using the method described in the above-referenced patent (U.S. Pat. No. 5,748,630). The "data" associated with an event is handled in a manner similar to that described for the "context".
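The load/modify/write-back cycle for a context value can be sketched as follows. This is a minimal software model, assuming a dirty-flag scheme; the structure names and the per-register dirty bit are illustrative assumptions, not details taken from the referenced patent.

```c
#include <assert.h>

#define CTX_WORDS 8

typedef struct {
    unsigned ctx[CTX_WORDS];   /* on-chip copy of the context */
} chip_ctx;

typedef struct {
    unsigned value;            /* working copy held in the core */
    int      ctx_index;        /* which context word this register mirrors */
    int      dirty;            /* set when the core modifies the value */
} core_reg;

/* Read the appropriate part of the context into a local register. */
void reg_load(core_reg *r, const chip_ctx *c, int index)
{
    r->value = c->ctx[index];
    r->ctx_index = index;
    r->dirty = 0;
}

/* The core modifies the value; mark it as needing write-back. */
void reg_write(core_reg *r, unsigned v)
{
    r->value = v;
    r->dirty = 1;
}

/* Propagate the change so the context stored outside the core
 * registers reflects the modification. */
void reg_writeback(core_reg *r, chip_ctx *c)
{
    if (r->dirty) {
        c->ctx[r->ctx_index] = r->value;
        r->dirty = 0;
    }
}
```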
In the circuit described in the above-referenced co-pending applications, the processing core performed a register read that returned a pointer to the context, data, and service address associated with the next event. The processing core then needed to explicitly read the context and data into its internal register set.
The present invention frees the core processor from performing the explicit read operation required to read data into the internal register set. The present invention expands the processor's register set and provides a "shadow register" set. While the core processor is processing one event, the "context", "data", and some other associated information for the next event are loaded into the shadow register set. When the core processor finishes processing an event, the core processor switches to the shadow register set and can begin processing the next event immediately. With short service routines, there might not be time to fully pre-fetch the "context" and "data" associated with the next event before the current event ends. In this case, the core processor still starts processing the next event and the pre-fetch continues during the event processing. If the core processor accesses a register associated with a part of the context for which the pre-fetch is still in progress, the core processor will automatically stall or delay until the pre-fetch has completed reading the appropriate data. Logic has been provided to handle several special situations created by the use of the shadow registers, and to provide the programmer with control over the pre-fetching and service address selection process.
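The bank switch and stall-on-incomplete-pre-fetch behavior described above can be modeled in a few lines of C. The two-bank register file, per-register ready flags, and the pending-word queue below are assumptions made for this sketch; in the invention these are hardware mechanisms, and the stall is modeled here as simply draining the remaining pre-fetch.

```c
#include <assert.h>

#define NREGS 8

/* One register bank: values plus a per-register "ready" flag that the
 * pre-fetch engine sets as context/data words arrive. */
typedef struct {
    unsigned values[NREGS];
    int      ready[NREGS];
} regfile;

typedef struct {
    regfile  banks[2];
    int      active;            /* bank the core is executing from */
    int      fetch_bank;        /* bank the pre-fetch engine is filling */
    int      pend_reg[NREGS];   /* registers still in flight */
    unsigned pend_val[NREGS];
    int      npend;
} core;

/* Begin pre-fetching the next event's context/data into the shadow bank
 * while the current event is still being processed. */
void prefetch_start(core *c, const unsigned vals[NREGS])
{
    c->fetch_bank = c->active ^ 1;
    c->npend = 0;
    for (int i = 0; i < NREGS; i++) {
        c->banks[c->fetch_bank].ready[i] = 0;
        c->pend_reg[c->npend] = i;
        c->pend_val[c->npend] = vals[i];
        c->npend++;
    }
}

/* One word arrives from external memory into the bank being filled. */
void prefetch_step(core *c)
{
    if (c->npend == 0) return;
    int r = c->pend_reg[--c->npend];
    c->banks[c->fetch_bank].values[r] = c->pend_val[c->npend];
    c->banks[c->fetch_bank].ready[r] = 1;
}

/* End of event: switch to the shadow register set immediately, even if
 * its pre-fetch has not yet completed. */
void switch_event(core *c)
{
    c->active ^= 1;
}

/* Reading a register whose pre-fetch is still in progress stalls the
 * core; the stall is modeled as waiting for the pre-fetch to deliver. */
unsigned reg_read(core *c, int reg)
{
    while (!c->banks[c->active].ready[reg])
        prefetch_step(c);
    return c->banks[c->active].values[reg];
}
```

In this model, `switch_event` succeeds even with words still pending, and only a read of a not-yet-ready register pays the stall, which is the behavior the shadow register set is intended to provide.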