1. Field
This disclosure generally relates to the design of a unified cache structure. More specifically, this disclosure relates to accessing a translation table entry from a unified cache that can simultaneously store program instructions, program data, and translation table entries.
2. Related Art
Computer memory is typically divided into a set of fixed-length blocks called "pages." Using a virtual memory abstraction, an operating system can give a program that accesses such pages the impression that it is accessing a contiguous address space larger than the actual available physical memory. During operation, the operating system and the hardware of the computing device translate the virtual addresses accessed by the program into physical addresses in the physical memory.
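For illustration only, the translation described above can be sketched as splitting a virtual address into a virtual page number and a page offset, then substituting the physical frame for the page number. The page size, table representation, and names below are illustrative assumptions, not part of this disclosure:

```python
PAGE_SIZE = 4096  # 4 KB, one of the common page sizes discussed below

def translate(virtual_addr, page_table):
    """Translate a virtual address to a physical address using a toy
    page table that maps virtual page numbers to physical frames."""
    vpn = virtual_addr // PAGE_SIZE     # virtual page number
    offset = virtual_addr % PAGE_SIZE   # offset within the page
    frame = page_table[vpn]             # a missing entry models a "miss"
    return frame * PAGE_SIZE + offset

# Toy mapping: virtual page 2 resides in physical frame 7
page_table = {2: 7}
phys = translate(2 * PAGE_SIZE + 123, page_table)
# phys == 7 * 4096 + 123 == 28795
```

In real hardware this lookup is performed by dedicated translation structures rather than software, as described next.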
Accessing a virtual address typically involves specialized translation hardware that uses a translation table entry (TTE) to determine the corresponding physical memory address. Unfortunately, while the typical physical memory size of computing devices has grown significantly in recent years, the need to remain compatible with existing software has restricted page sizes to values chosen years ago. For instance, the common page sizes of 4 KB and 8 KB are very small in comparison to the size of a typical physical memory. The combination of small page sizes and large memory sizes results in a large number of TTEs, especially for high-end systems that support multiple terabytes of physical memory. Moreover, the specialized translation hardware typically cannot cache all of the TTEs in use, and the overhead of loading a required TTE into the cache can be high. Furthermore, the specialized hardware structures and associated software involved in handling TTE "misses" can be quite complex.
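The scale of the problem described above follows from simple arithmetic: one TTE is needed per mapped page, so the TTE count is the physical memory size divided by the page size. The sketch below is illustrative only and assumes a one-TTE-per-page mapping:

```python
TB = 2 ** 40  # bytes in one terabyte

def tte_count(phys_mem_bytes, page_size):
    """Number of TTEs needed to map all of physical memory,
    assuming one TTE per page of the given size."""
    return phys_mem_bytes // page_size

# With 4 KB pages, 1 TB of physical memory needs 2**28 TTEs
print(tte_count(1 * TB, 4 * 1024))   # 268435456 (~268 million)

# With 8 KB pages, 8 TB still needs 2**30 TTEs
print(tte_count(8 * TB, 8 * 1024))   # 1073741824 (~1 billion)
```

Hardware translation caches typically hold only hundreds to thousands of entries, so only a tiny fraction of these TTEs can be cached at once.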
Hence, what is needed are hardware structures and techniques for managing TTEs without the above-described problems of existing techniques.