In computer systems, it is common for one or more processors to access a memory, referred to as a "shared memory". Also, the shared memory may be a memory array that contains a number of memory modules. Access to the shared memory is generally over a shared memory bus. One such system is disclosed in copending application Ser. No. 07/546,547, entitled "HIGH SPEED BUS SYSTEM" filed on even date herewith.
Two separate uni-directional buses may connect a shared memory or memory array to a memory controller. One bus is for transmissions from the memory controller to the array and the second is for transmissions from the memory array to the memory controller.
The memory controller, in turn, may interface with a bus system that connects to memories of the system processors. This bus system may include a bidirectional bus or two uni-directional buses, with one for transmissions from the memory controller to the processor memories and the other for transmissions from the processor memories to the memory controller.
A characteristic of a system having a shared memory and a number of processors is that one of the processors can issue a read command followed by an address, which, when placed on the bus system, is supplied not only to the memory controller but also to all of the other processors connected to the bus. This increases the system's processing speed by avoiding the need for the issuing processor to separately notify each of the other processors.
After the memory controller receives the read command and address, it places them on the uni-directional bus between it and the memory array. The memory array responds by providing refill data on the other uni-directional bus between the memory controller and the memory array. The refill data is then placed on the bus system, which supplies the data to all of the processors including, of course, the processor that requested it.
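The read-and-refill flow described above can be sketched in software. This is a minimal illustrative model, not an implementation from the referenced application; all class and variable names (MemoryArray, MemoryController, Processor) are assumptions chosen for clarity.

```python
# Illustrative sketch of the read/refill flow: a read command reaches
# the memory controller, is forwarded to the array, and the resulting
# refill data is broadcast to every processor on the shared bus.

class MemoryArray:
    """Backing store reached over two uni-directional buses."""
    def __init__(self, contents):
        self.contents = contents

    def read(self, address):
        # Reply on the array-to-controller uni-directional bus.
        return self.contents[address]

class Processor:
    def __init__(self):
        self.refills = {}

    def receive_refill(self, address, data):
        # Every processor on the shared bus sees the refill data.
        self.refills[address] = data

class MemoryController:
    """Forwards read commands to the array and broadcasts refills."""
    def __init__(self, array, processors):
        self.array = array
        self.processors = processors

    def handle_read(self, address):
        refill = self.array.read(address)   # controller-to-array bus
        for p in self.processors:           # broadcast on the shared bus
            p.receive_refill(address, refill)

array = MemoryArray({0x10: "DATA"})
cpus = [Processor(), Processor(), Processor()]
ctrl = MemoryController(array, cpus)
ctrl.handle_read(0x10)   # refill data reaches all three processors
```

Note that the broadcast loop models the key point of the passage: the requesting processor receives its data on the same shared bus cycle that informs every other processor.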
A consideration in the design of shared memory systems involving the request for, and receipt of, data from memory is memory latency, i.e., the period between the placing of a command on a bus and the return of refill data from the memory on the bus. Unless memory latency is taken into account, this request-and-refill scheme will not operate effectively: the system may be unable to handle the number of requests made, resulting in collisions on the shared memory bus.
Most activities in any system, including shared memory systems, require one or more cycles to complete. Typically, activities take more than one cycle, and the required number of cycles varies with dynamic conditions. Consequently, simply placing commands and data on a common bus without controlling them is likely to result in collisions on that bus. Thus, shared memory systems must have a means to prevent such collisions.
Another compounding factor is that in a single- or multiprocessor system, a plurality of successive read commands, each comprising a command and an address, can be sent to memory. These read commands can be issued at a rate that exceeds the system's ability to supply the refill data because of factors such as memory latency.
Some shared memory systems include a number of state machines that control the placement of commands and addresses on the shared bus leading from the memory controller to the memory array. The number of state machines usually determines the number of commands and addresses a system is capable of handling at one time. Each command and address being handled by a separate state machine, however, must be directed to a different memory module of the memory array.
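The constraint above — a fixed pool of state machines, with concurrent commands required to target different memory modules — can be sketched as follows. The class name, the module-numbering scheme, and the pool size are all illustrative assumptions, not details from any particular system.

```python
# Sketch of a fixed pool of state machines, each tracking one
# outstanding command. A new command is accepted only if a machine
# is free AND no active command targets the same memory module.

class CommandStateMachines:
    def __init__(self, count):
        self.count = count      # size of the state-machine pool
        self.active = {}        # module number -> address being serviced

    def try_accept(self, module, address):
        """Accept a read command, or reject it (it must then wait)."""
        if len(self.active) >= self.count:
            return False        # all state machines busy
        if module in self.active:
            return False        # that module is already being accessed
        self.active[module] = address
        return True

    def complete(self, module):
        """Free the state machine when the refill data has returned."""
        self.active.pop(module, None)

sm = CommandStateMachines(count=2)
ok1 = sm.try_accept(module=0, address=0x100)   # accepted
ok2 = sm.try_accept(module=0, address=0x104)   # same module: rejected
ok3 = sm.try_accept(module=1, address=0x200)   # accepted
ok4 = sm.try_accept(module=2, address=0x300)   # pool exhausted: rejected
```

Commands rejected by `try_accept` are exactly the ones that, in the text, must wait at the memory controller for a queueing mechanism.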
If more commands and addresses are sent than the state machines can handle, the excess must wait for memory access at the memory controller, which therefore must have a queueing mechanism to handle them. Implementing such a mechanism at the memory controller, or dramatically increasing the number of state machines to expand system capability, adds to system complexity and cost. Hence, it is desirable to accomplish memory accesses without these costly implementations.
Another consideration is the desire to use a shared bus efficiently. Normally, unused or empty cycles occur between blocks of refill data on the shared bus; preferably there should be no dead space or time on the bus. In steady-state operation, after each predetermined number of cycles of refill data is placed on the bus, a predetermined number of cycles of read commands and addresses should follow, immediately followed by another predetermined number of cycles of refill data. The elimination of dead space, however, must be accomplished without causing collisions and without implementing a costly queueing mechanism.
Many systems also include a cache memory system, which further complicates the collision problem. In such systems, each time a memory command and address are sent out, a check must be made to see whether the requested information is contained in a cache memory of one of the system processors. If the data is found, that processor must be given access to the shared bus that links the processors to one another and to the memory controller. While this access is granted, no other read commands can be sent out on that bus. Thus, collision analysis must take cache memory reads ("snoopy reads") into account.
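The snoop check described above can be sketched as a lookup that runs before a read is forwarded to the memory array. The cache representation and function names here are assumptions for illustration only; real snooping hardware performs this check in parallel on the bus rather than sequentially.

```python
# Hedged sketch of the cache ("snoopy") check: before a read goes to
# the memory array, each processor's cache is checked. On a hit, the
# owning processor supplies the data over the shared bus instead,
# and while it drives the bus no new read command can be sent out.

def service_read(address, processor_caches, memory):
    """Return ((source, cpu_id_or_None), data) for a read request."""
    for cpu_id, cache in enumerate(processor_caches):
        if address in cache:
            # Snoop hit: this processor is granted the shared bus.
            return ("cache", cpu_id), cache[address]
    # No cache holds the line: the request goes to the memory array.
    return ("memory", None), memory[address]

caches = [{}, {0x40: "CACHED"}, {}]   # only CPU 1 holds address 0x40
memory = {0x40: "OLD", 0x80: "FROM_MEM"}
hit = service_read(0x40, caches, memory)
miss = service_read(0x80, caches, memory)
```

The cycles a snoop hit consumes on the shared bus are precisely why, as the text notes, cache reads must enter the collision analysis: they occupy bus time that a fixed command/refill schedule would otherwise assume is free.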
A further complicating factor is that systems typically contain not just one type of memory in a memory array but many. These different memories have different read access times and, therefore, different memory latencies, since read access time is a component of memory latency. It follows that any memory bus control scheme that depends upon a fixed timing schedule for placing commands and data on a shared bus will result in collisions on that bus. Hence, a method of memory bus control must take read access time into account when placing commands, addresses, and refill data on a shared memory bus.
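One way to picture latency-aware bus control is as interval scheduling: each command's refill data will occupy the return bus for a burst of cycles starting one latency period after the command is issued, and a command must be delayed until its refill slot is free. The latency values, burst length, and function name below are invented for illustration; this is a sketch of the general idea, not the scheme of any particular system.

```python
# Sketch of latency-aware command issue. Each memory type has its own
# read access time, so refill data returns at a different cycle per
# command; a command is held back until its refill burst would not
# overlap any burst already scheduled on the shared data bus.

def schedule_commands(commands, latency, burst=4):
    """commands: list of (name, memory_type).
    Returns (name, issue_cycle) pairs with non-overlapping refills."""
    busy = []            # (start, end) refill intervals, end exclusive
    issued = []
    t = 0
    for name, mtype in commands:
        while True:
            start = t + latency[mtype]        # cycle refill data returns
            end = start + burst               # refill burst duration
            if all(end <= s or start >= e for s, e in busy):
                busy.append((start, end))     # slot is free: issue now
                issued.append((name, t))
                t += 1                        # command takes one bus cycle
                break
            t += 1                            # would collide: delay
    return issued

latency = {"fast": 3, "slow": 10}             # invented access times
mixed = schedule_commands([("A", "slow"), ("B", "fast")], latency)
same = schedule_commands([("A", "fast"), ("B", "fast")], latency)
```

Note that with mixed memory types a later command to fast memory can be issued immediately, because its refill returns well before the earlier slow refill, whereas two commands to the same fast memory force a delay. A fixed timing schedule cannot express this, which is the collision hazard the passage identifies.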
There is a need for a memory bus control system that will overcome these problems.