This invention relates to a method and apparatus for controlling memory access in a system where at least a first and a second processor each share a common memory and wherein the first processor has a write buffer, in which it stores words prior to writing them in the common memory, and a cache for receiving words from the common memory.
In a computer system with two processors sharing a common memory, one processor can include some optimisations in its memory access path, which are designed to decouple the memory accesses generated by the processor from the physical memory. Such optimisations include a cache, which holds copies of words read recently, or expected to be read soon, and a write buffer, which accepts words from the processor and holds them until they can be written to the common memory. The second processor could have a similar arrangement, but can be assumed (for example) to be directly connected to the common memory. As the first and second processors share data structures in the memory, it is important that they both have the same view of the memory contents. The cache and write buffer make this more difficult, because they decouple the first processor from the true memory contents. However, they would normally allow the first processor to run faster than if it were connected directly to the memory.
The present invention seeks to solve this coherency problem.
The present invention provides a method of controlling memory access in a system where at least a first and a second processor each share a common memory and wherein the first processor has a write buffer, in which it stores words prior to writing them in the common memory, and a cache for receiving words from the common memory, the method including the steps of:
mapping the common memory into the address space of the first processor so that, in a first mapping, the first processor accesses the common memory directly and, in a second mapping, the cache is enabled;
accessing the common memory directly with the first processor in the first mapping and also accessing the common memory directly with the second processor when the first and second processors share data that is read from or written into the common memory;
accessing the cache with the first processor in the second mapping for reading and writing data local to the first processor;
tagging information written into the write buffer; and
flushing the tagged information into the common memory before the common memory can be accessed by the second processor.
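The tagging and flushing steps can be sketched as a minimal simulation in C. The buffer size, entry layout, and function names here are illustrative assumptions, not part of the invention: each buffered write carries a tag bit marking it as shared data, and only tagged entries need be forced out before the second processor accesses the common memory.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define MEM_WORDS 16
#define WB_SLOTS  4

/* Model of the common memory and the first processor's write buffer. */
static unsigned common_mem[MEM_WORDS];

struct wb_entry {
    size_t   addr;   /* word index in the common memory          */
    unsigned data;   /* word held until it can be written        */
    bool     tagged; /* set for writes to shared data            */
    bool     valid;
};
static struct wb_entry wb[WB_SLOTS];

/* The first processor writes through the buffer; writes of shared
 * data are tagged so they can be forced out before handoff. */
static void wb_write(size_t addr, unsigned data, bool shared)
{
    for (int i = 0; i < WB_SLOTS; i++) {
        if (!wb[i].valid) {
            wb[i] = (struct wb_entry){ addr, data, shared, true };
            return;
        }
    }
    /* Buffer full: drain the oldest entry to memory to make room. */
    common_mem[wb[0].addr] = wb[0].data;
    for (int i = 1; i < WB_SLOTS; i++) wb[i - 1] = wb[i];
    wb[WB_SLOTS - 1] = (struct wb_entry){ addr, data, shared, true };
}

/* Flush only the tagged (shared) entries; private writes may linger. */
static void wb_flush_tagged(void)
{
    for (int i = 0; i < WB_SLOTS; i++) {
        if (wb[i].valid && wb[i].tagged) {
            common_mem[wb[i].addr] = wb[i].data;
            wb[i].valid = false;
        }
    }
}
```

A word written with `wb_write` is not visible in `common_mem` until `wb_flush_tagged` runs, which is precisely why the flush must precede any access by the second processor.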
In effect, the memory is mapped twice into the address space of the first processor, one mapping accessing the memory directly, while the other has the cache enabled. The first processor uses the uncached mapping when reading or writing any data shared with the second processor, but uses the cached mapping for its private data (e.g. programme code, stacks, local variables). This preserves most of the benefit of having the cache, because the data shared with the other processor would normally be read or written only once.
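The double mapping can be illustrated by a short C sketch. The base addresses and the helper `map_addr` are hypothetical choices for illustration (the layout loosely resembles schemes found in real processors with cached and uncached segments); the essential property is that both virtual aliases refer to the same physical offset.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical address layout: the same physical common memory
 * appears twice in the first processor's virtual address space. */
#define CACHED_BASE   0x80000000u  /* second mapping: cache enabled  */
#define UNCACHED_BASE 0xA0000000u  /* first mapping: cache bypassed  */

/* Select the mapping by intent: shared data goes through the
 * uncached alias, private data through the cached one. */
static uint32_t map_addr(uint32_t phys_offset, int shared)
{
    return (shared ? UNCACHED_BASE : CACHED_BASE) + phys_offset;
}
```

Both aliases differ only in their base, so stripping the segment bits recovers the same physical offset in either mapping.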
Although the cache is not very effective for shared data, the same is not true of the write buffer: the first processor still benefits from not having to wait for memory writes to finish, even when writing shared data. The first processor thus obtains most of the advantage of having the write buffer, while the two processors retain a coherent view of the common memory.
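The ordering constraint this implies can be shown in a minimal C sketch, here reduced to a one-slot buffer with hypothetical function names: the first processor's writes complete immediately into the buffer, and the flush must happen before the ready flag that grants the second processor access is set.

```c
#include <assert.h>

/* Reduced model: one buffered write and a ready flag in common memory. */
static unsigned shared_mem[2];   /* [0] = data word, [1] = ready flag */
static unsigned buffered_data;   /* the write buffer's single slot    */
static int      buffer_full;

/* First processor: write completes into the buffer without stalling. */
static void cpu1_write(unsigned v) { buffered_data = v; buffer_full = 1; }

/* Drain the buffer into the common memory. */
static void cpu1_flush(void)
{
    if (buffer_full) { shared_mem[0] = buffered_data; buffer_full = 0; }
}

/* Handoff: flush first, then raise the flag the second processor polls. */
static void cpu1_publish(void) { cpu1_flush(); shared_mem[1] = 1; }

/* Second processor: reads the data only once the flag is raised. */
static unsigned cpu2_read(void) { return shared_mem[1] ? shared_mem[0] : 0; }
```

If the flag were raised before the flush, the second processor could read a stale word from the common memory; flushing first is what keeps the two views coherent.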
An embodiment of the invention will now be described with reference to the accompanying drawing, in which: