1. Technical Field of the Invention
The present invention relates to computer systems and, in particular, to a system and method for improving access to memory of a page type.
2. Description of Related Art
As is well known to those skilled in the art, the rapid increase in processor speed has greatly outpaced the gains in memory speed. Consequently, a chief bottleneck in the performance of current computers is the access time of the primary memory (also called main memory). Conventional techniques to overcome this performance hindrance place a small, fast memory called a cache memory between the processor and the primary memory. Information frequently read from the primary memory is copied to the cache memory so that future accesses to that information can be made from the fast cache memory instead of from the slower primary memory. For performance and cost reasons, several levels of cache memory are used in modern computers. The first level, which is also the smallest and fastest cache memory, is called the L1 cache and is placed closest to the processor. The next level of cache memory is consequently called the L2 cache and is placed between the L1 cache and the primary memory.
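The effect of such a cache hierarchy on access time can be illustrated with a simple average-access-time model. The sketch below is not part of the invention; the latencies and miss rates used are assumed, representative figures only.

```python
def amat(t_l1, miss_l1, t_l2, miss_l2, t_primary):
    """Average memory access time for a two-level cache hierarchy:
    each miss at one level falls through to the next, slower level."""
    return t_l1 + miss_l1 * (t_l2 + miss_l2 * t_primary)

# Assumed example latencies in processor cycles: L1 hit in 1 cycle,
# L2 hit in 10, primary memory in 100; 5% L1 misses, 10% L2 misses.
average = amat(t_l1=1, miss_l1=0.05, t_l2=10, miss_l2=0.10, t_primary=100)
print(round(average, 2))  # 2.0 -- far below the 100-cycle primary memory
```

As the model shows, the hierarchy pays off only while the miss rates stay low; with higher miss rates the average access time approaches that of the primary memory itself.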
For most systems the traditional use of cache memory works well, but in complex real-time systems, such as, for example, modern telecommunication systems, the amount of code executed and data handled is very large and context switching, that is, switching between different processes, is frequent. In these complex real-time systems the locality of the information (program code and data) stored in the primary memory is low. Low locality means either that the accessed information is spread out over a large part of the primary memory (low spatial locality) or that only a small part of the accessed information is referenced frequently (low temporal locality). With low locality the cache hit ratio, that is, how frequently information can be accessed from the cache memory, will also be low, as most information will be flushed out of the cache memory before it is needed again. Consequently, the normal use of cache memories, especially the L2 cache and above, will not be effective in complex real-time systems.
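The relationship between locality and hit ratio can be demonstrated with a minimal cache simulation. The sketch below assumes a fully associative cache with least-recently-used replacement; the cache size and address streams are illustrative only.

```python
from collections import OrderedDict

def hit_ratio(addresses, cache_lines):
    """Simulate a fully associative LRU cache; return the fraction
    of accesses served from the cache."""
    cache = OrderedDict()
    hits = 0
    for addr in addresses:
        if addr in cache:
            hits += 1
            cache.move_to_end(addr)     # mark as most recently used
        else:
            cache[addr] = True
            if len(cache) > cache_lines:
                cache.popitem(last=False)  # evict least recently used
    return hits / len(addresses)

# High temporal locality: the same few addresses are reused constantly.
local_stream = [a % 8 for a in range(1000)]
# Low locality: references spread over far more addresses than fit.
spread_stream = list(range(1000))

print(hit_ratio(local_stream, 64))   # 0.992
print(hit_ratio(spread_stream, 64))  # 0.0 -- evicted before any reuse
```

The low-locality stream yields no hits at all: every line is flushed out of the cache before it is referenced again, exactly the situation described above for complex real-time systems.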
It would therefore be advantageous if the use of cache memories could be more effective in complex real time systems.
In systems where the cache hit ratio is low, much effort has been put into selecting what information to write to the cache memory. This has resulted in advanced prediction algorithms, which take extra time from the normal execution and also delay the writing of information back to the cache memory.
It would therefore be advantageous if the selection of the information to store in the cache memory could be simplified.
In traditional systems, the writing of the information to be stored in the cache memory is done after the information is read from the primary memory, on a separate memory access cycle, which takes extra time and causes execution delays.
It would therefore be advantageous if the information to be stored in the cache memory could be written to the cache memory with less delay than in the prior art.
A typical conventional memory is built up of a large number of memory cells arranged in a number of rows and columns. The rows and columns of memory cells form a memory matrix. Most memory used today is of page type, e.g., FPM DRAM, EDO DRAM and SDRAM. A memory cell in a page-type memory cannot be accessed until the row containing that memory cell has been opened. Accessing a new row, often referred to as opening a new page, takes some extra time called the page setup time. Consequently, accessing information in a new, not yet opened, page normally takes longer (for SDRAM, often much longer) than accessing information from an open page in the primary memory. For systems where the cache hit ratio is low, the primary memory will be accessed frequently, and an extra delay will be encountered each time a new page is opened in the primary memory.
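The cost of opening new pages can be made concrete with a simple timing model of a page-type memory. The sketch below is illustrative only; the page setup and access times are assumed values in arbitrary time units, not figures for any particular memory device.

```python
def total_access_time(rows, t_setup, t_access):
    """Total time to service a sequence of accesses, identified by the
    row each access falls in. Opening a new row costs t_setup; accesses
    to the already-open row pay only t_access."""
    total = 0
    open_row = None
    for row in rows:
        if row != open_row:
            total += t_setup      # page setup time for the new page
            open_row = row
        total += t_access         # access within the open page
    return total

# Assumed timings: four accesses within one page versus four accesses
# that each touch a different page.
print(total_access_time([0, 0, 0, 0], t_setup=30, t_access=10))  # 70
print(total_access_time([0, 1, 2, 3], t_setup=30, t_access=10))  # 160
```

With low locality, the access pattern resembles the second case: almost every access opens a new page and pays the page setup time in full.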
It would therefore be advantageous if the execution delay when accessing a new page in primary memory could be reduced, especially in systems that normally have a low cache hit ratio.
In traditional systems, where the access time of the cache memory is typically much shorter than that of the primary memory, the primary memory is accessed only after the cache memory has been accessed and a cache miss has occurred. Waiting for a cache miss before accessing the primary memory thus causes an extra delay in the primary memory access.
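The delay added by this serial ordering can be sketched with simple arithmetic. The overlapped variant below is not the method of the invention; it merely illustrates, under assumed latencies, how much of the miss penalty stems from waiting for the cache lookup to complete before starting the primary memory access.

```python
def serial_miss_latency(t_cache, t_primary):
    """Traditional scheme: the primary memory access starts only
    after the cache lookup has missed, so the delays add up."""
    return t_cache + t_primary

def overlapped_miss_latency(t_cache, t_primary):
    """Hypothetical comparison: if the primary memory access were
    started in parallel with the cache lookup, a miss would cost
    only the longer of the two latencies."""
    return max(t_cache, t_primary)

# Assumed latencies in cycles: 5-cycle cache lookup, 100-cycle
# primary memory access.
print(serial_miss_latency(5, 100))      # 105
print(overlapped_miss_latency(5, 100))  # 100
```

In a system with a low hit ratio, most accesses take the miss path, so the extra cache-lookup latency is paid on nearly every primary memory access.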
It would therefore be advantageous to reduce the access time for the primary memory when a cache miss occurs.
It is, therefore, a first object of the present invention to provide a system and method for a more efficient use of cache memory, especially in systems where the cache hit ratio normally is low.
It is a second object of the present invention to simplify the selection of information to store in the cache memory.
It is a third object of the present invention to reduce the extra time needed to write information to the cache memory.
It is a fourth object of the present invention to reduce the execution delay normally encountered when a new page is accessed in primary memory.
It is a fifth object of the present invention to reduce the delay in accessing the primary memory after a cache miss.