1. Field of the Invention
The system of the present invention relates to computer memory management systems and more particularly to a method and apparatus for reducing latencies associated with table walks through translation look aside buffers in computer memory systems which utilize virtual memory addressing.
2. Art Background
A virtual memory system is one which allows addressing of very large amounts of memory, even though the main memory of the system encompasses a smaller address space. Virtual memory systems provide this capability by dividing memory into management units, in particular pages or segments, each of which has a virtual memory address and a corresponding physical memory address. A particular physical address may be in main memory or in slower alternate memory, such as disk space. If the physical address of the data is in main memory, the information is readily accessed and utilized. If the physical address indicates that the page is located in the alternate memory, the page is transferred or swapped into main memory, where the data can then be accessed. The transfer typically necessitates that other information be swapped out of main memory back to the alternate memory to make room for the new information. This is typically performed under the control of the memory management unit.
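The swapping behavior described above can be sketched as follows. This is an illustrative model only, not a description of the claimed apparatus; the class and attribute names (`PagedMemory`, `resident`, `backing_store`) are hypothetical, and the eviction choice is arbitrary rather than a real replacement policy.

```python
# Minimal sketch of demand paging: a fixed-size main memory backed by a
# larger, slower "alternate" store. All names here are illustrative.

class PagedMemory:
    def __init__(self, num_frames):
        self.num_frames = num_frames   # capacity of main memory, in pages
        self.resident = {}             # virtual page -> data held in main memory
        self.backing_store = {}        # virtual page -> data held in alternate memory

    def access(self, page):
        if page in self.resident:      # page already in main memory: use it directly
            return self.resident[page]
        # Page fault: make room by swapping a victim page out, if necessary.
        if len(self.resident) >= self.num_frames:
            victim, data = self.resident.popitem()
            self.backing_store[victim] = data
        # Swap the requested page in from the alternate memory.
        self.resident[page] = self.backing_store.pop(page, b"\x00")
        return self.resident[page]
```

A real memory management unit performs this transfer in hardware and operating-system software; the sketch only shows the swap-in/swap-out bookkeeping the paragraph describes.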
To increase the speed of virtual memory accesses, cache memories are also included to store recently used data and instructions. These caches are first accessed before accessing main memory for the information requested. These caches may be virtually addressed or physically addressed. However, cache memories accessed in accordance with the physical address necessitate the process of virtual to physical address translation prior to checking the cache as well as main memory.
The paging process, that is, the process of swapping pages, relies on a data structure that is indexed by the pages of memory. This data structure contains the physical address of the memory to be accessed according to the virtual address provided. The data structure containing the physical page addresses usually takes the form of a page table indexed by virtual page number, the size of the table being determined by the number of pages in the virtual address space. Page tables are usually so large that they are stored in main memory and are often paged themselves. This means that every memory access logically takes at least twice as long, as one memory access is needed to obtain the physical address and a second access is needed to obtain the data.
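The page table lookup described above can be sketched in a few lines. This is an illustrative model, not the invention itself; the 4 KiB page size is an assumed parameter, and `translate` and `page_table` are hypothetical names.

```python
# Sketch of the two-access cost described above: one memory access reads the
# page table entry, and a second access then reads the data itself.
PAGE_SHIFT = 12                                 # assumed 4 KiB pages
PAGE_SIZE = 1 << PAGE_SHIFT

def translate(virtual_address, page_table):
    vpn = virtual_address >> PAGE_SHIFT         # virtual page number indexes the table
    offset = virtual_address & (PAGE_SIZE - 1)  # offset within the page is unchanged
    frame = page_table[vpn]                     # first memory access: read the entry
    return (frame << PAGE_SHIFT) | offset       # physical address for the second access
```

The doubling of access time follows directly: `page_table[vpn]` is itself a read of main memory (or, if the table is paged out, something far slower) before the data access can even begin.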
One technique used to minimize the cost of access time is to save the last translation performed so that the mapping process is skipped if the current address refers to the same page as the last one. To save additional time, advantage is taken of the principle of locality that is utilized for caches: if the references have locality, then the address translations for those references must also have locality. By keeping these address translations in a special cache, a memory access rarely requires a second access to translate the address. This special address translation cache is referred to as a translation look aside buffer or "TLB". A TLB entry is like a cache entry wherein a tag portion holds portions of the virtual address and the data portion holds a physical page frame number, protection fields, use bits and a modified or dirty bit.
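The TLB arrangement described above can be sketched as a small cache consulted before the page table. This is a hedged illustration under assumed parameters: the direct-mapped organization, 64-entry size, and names (`TLB`, `lookup`, `fill`) are all hypothetical, and the status bits are reduced to a single dirty flag for brevity.

```python
# Sketch of a TLB in front of the page table: each entry holds a virtual-page
# tag plus a physical frame number and status bits, as described above.
PAGE_SHIFT = 12                                 # assumed 4 KiB pages

class TLB:
    def __init__(self, num_entries=64):
        self.num_entries = num_entries
        self.entries = {}                       # index -> (tag, frame, dirty bit)

    def lookup(self, vpn):
        index = vpn % self.num_entries          # direct-mapped placement (assumed)
        entry = self.entries.get(index)
        if entry and entry[0] == vpn:           # tag match: translation is cached
            return entry[1]
        return None                             # TLB miss: a table walk is needed

    def fill(self, vpn, frame):
        self.entries[vpn % self.num_entries] = (vpn, frame, False)

def translate(vaddr, tlb, page_table):
    vpn = vaddr >> PAGE_SHIFT
    frame = tlb.lookup(vpn)
    if frame is None:                           # miss: walk the page table once,
        frame = page_table[vpn]                 # then cache the translation
        tlb.fill(vpn, frame)
    return (frame << PAGE_SHIFT) | (vaddr & ((1 << PAGE_SHIFT) - 1))
```

With locality of reference, most accesses hit in `lookup` and the page table read is skipped, which is precisely the saving the TLB provides.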
A number of different methods and techniques are available for increasing the speed of accesses to virtual memory. In one method, a more heavily pipelined memory access is utilized, wherein the TLB access is performed one step ahead in the pipeline. Another alternative is to match virtual addresses directly. Such caches are termed virtual caches. This eliminates the TLB translation time from a cache hit access situation. However, one drawback is that the table walk process is quite time consuming and needs to be performed for each virtual address regardless of address locality.
Additional discussion on TLBs can be found in John L. Hennessy and David A. Patterson, Computer Architecture, A Quantitative Approach, (Morgan Kaufmann Publishers 1990), pages 432 to 461.