A multi-port internally cached DRAM, termed AMPIC DRAM, of said copending application, later reviewed in connection with hereinafter described FIG. 1, is designed for high system bandwidth use in a system having a master controller, such as a central processing unit (CPU), having parallel data ports and a dynamic random access memory each connected to and competing for access to a common system bus interface. It provides an improved DRAM architecture comprising the multi-port internally cached DRAM that, in turn, encompasses a plurality of independent serial data interfaces each connected between a separate external I/O resource and internal DRAM memory through corresponding buffers; a switching module interposed between the serial interfaces and the buffers; and a switching module logic control for connecting the serial interfaces to the buffers under dynamic configuration by the bus master controller, such as said CPU, for switching allocation as appropriate for the desired data routability. This technique provides for the transfer of blocks of data internal to the memory chip, orders of magnitude faster than traditional approaches, and eliminates current system bandwidth limitations and related problems, providing significantly enhanced system performance at reduced cost, and enabling substantially universal usage for many applications as a result of providing a unified memory architecture.
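The switching-module arrangement just described may be sketched in software for illustration. The following is a minimal model, not taken from the specification: the class and method names (`SwitchingModule`, `configure`, `write`, `read`) are illustrative assumptions, standing in for the logic control by which the bus master dynamically allocates a buffer to each serial data interface.

```python
class SwitchingModule:
    """Illustrative model: serial interfaces are connected to internal
    buffers under a mapping that the bus master may reconfigure."""

    def __init__(self, num_interfaces, num_buffers):
        self.num_interfaces = num_interfaces
        self.buffers = [bytearray() for _ in range(num_buffers)]
        self.mapping = {}  # serial interface index -> buffer index

    def configure(self, interface, buffer_index):
        """Bus master (e.g. the CPU) dynamically allocates a buffer
        to a serial interface for the desired data routability."""
        assert 0 <= interface < self.num_interfaces
        self.mapping[interface] = buffer_index

    def write(self, interface, data):
        """An external I/O resource writes through its serial interface
        into the buffer currently allocated to it."""
        self.buffers[self.mapping[interface]].extend(data)

    def read(self, interface):
        """Return the contents of the buffer allocated to an interface."""
        return bytes(self.buffers[self.mapping[interface]])


switch = SwitchingModule(num_interfaces=4, num_buffers=4)
switch.configure(0, 2)        # bus master routes interface 0 to buffer 2
switch.write(0, b"payload")
assert switch.read(0) == b"payload"
```

The essential point modeled here is that the interface-to-buffer mapping is not fixed in the hardware but is set at run time by the master controller.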
In said copending application, a large number of system I/O resources may be supported, each with a wide data bus, while still maintaining low pin counts in the AMPIC DRAM device, as by stacking several such devices, later illustrated in connection with hereinafter described FIG. 2, with the number of system I/O resources supported and the width of each system I/O resource bus limited only by technology constraints.
While such architectures, as previously stated and as described in said copending application, admirably provide a very large amount of bandwidth for each system I/O resource to access the DRAM, the system does not provide a mechanism by which one system I/O resource may send data to another system I/O resource--an improvement now provided by the present invention. As an example, if system I/O resource m has a multi-bit message that should be sent to system I/O resource n, then once the system I/O resource m has written the multi-bit message into the AMPIC DRAM stack or array, the invention now provides a mechanism for informing system I/O resource n of both the existence of such a message and the message location within the AMPIC DRAM array. In addition, upon the system I/O resource n being informed of the existence of the message and its location in the array, in accordance with the present invention, a technique is provided for allowing the system I/O resource n to extract the message from the array. Since the message data is thus distributed across the entire AMPIC DRAM array, moreover, with each element of the array holding only a portion of the data, the complete signaling information must be sent to each individual element of the AMPIC DRAM array.
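The messaging mechanism outlined above may be sketched as follows. This is a hedged illustration under assumed names (`AmpicArray`, `write_message`, `read_message`): the message data is striped across every element of the array, each element holding only a portion, while the complete signaling information (message location and length) is recorded for delivery to the destination resource.

```python
class AmpicArray:
    """Illustrative model of an AMPIC DRAM stack: message data is striped
    across all elements; signaling information covers the whole message."""

    def __init__(self, num_elements, element_size):
        self.elements = [bytearray(element_size) for _ in range(num_elements)]
        self.signals = {}  # destination resource -> (address, length)

    def write_message(self, dest, address, message):
        """Resource m stripes the message across the array elements, then
        signals resource n with the message's existence and location."""
        n = len(self.elements)
        for i, byte in enumerate(message):
            self.elements[i % n][address + i // n] = byte
        # The complete signaling information must reach every element;
        # a single table here stands in for that per-element delivery.
        self.signals[dest] = (address, len(message))

    def read_message(self, dest):
        """Resource n, informed of the message's existence and location,
        extracts it by gathering its portion from each element."""
        address, length = self.signals.pop(dest)
        n = len(self.elements)
        return bytes(self.elements[i % n][address + i // n]
                     for i in range(length))


array = AmpicArray(num_elements=4, element_size=64)
array.write_message(dest="n", address=0, message=b"hello from m")
assert array.read_message("n") == b"hello from m"
```

The striping in this sketch (byte i to element i mod n) is one possible distribution; the specification requires only that each element hold a portion of the data while the signaling information is complete at each element.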
The invention, in addition, provides the further improvement of a partitioning technique for allowing either several simultaneous small-size transfers or a single very wide transfer, using the wide internal system data bus more efficiently to accommodate both small and large units of data transfer.
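The partitioning idea may be illustrated by a simple scheduling sketch, with assumed names and an assumed bus width: several small transfers share the slices of one bus cycle, while a transfer wider than the bus takes the full width over several cycles.

```python
BUS_WIDTH = 64  # bytes moved per internal bus cycle (assumed width)

def schedule(transfers, bus_width=BUS_WIDTH):
    """Greedily pack transfer sizes (in bytes) into bus cycles: small
    transfers share one cycle; a large transfer uses the whole bus
    width and spans as many cycles as it needs."""
    cycles = []            # each cycle: list of (transfer_id, bytes_moved)
    current, used = [], 0
    for tid, size in enumerate(transfers):
        while size > 0:
            if used == bus_width:          # current cycle is full
                cycles.append(current)
                current, used = [], 0
            chunk = min(size, bus_width - used)
            current.append((tid, chunk))
            used += chunk
            size -= chunk
    if current:
        cycles.append(current)
    return cycles


# Four small transfers fit in one cycle of the partitioned bus...
assert len(schedule([8, 8, 8, 8])) == 1
# ...while one very wide transfer occupies the full bus for four cycles.
assert len(schedule([256])) == 4
```

This greedy packing is only one possible policy; the point it illustrates is that partitioning the wide bus lets small and large units of transfer both use the available width efficiently.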