1. Field of the Invention
The present invention relates generally to an improved data processing system and in particular to a computer implemented method and apparatus for managing data. Still more particularly, the present invention relates to a computer implemented method and apparatus for managing data in a multi-level cache system.
2. Description of the Related Art
A cache is a component in a data processing system used to speed up data transfer. A cache may be temporary or permanent. With respect to caches used by processors, these types of caches are typically used to allow instructions to be executed and data to be read and written at a higher speed than is possible using main memory. Instructions and data are transferred from main memory to the cache in blocks. This transfer is typically performed using a look-ahead algorithm. When the instructions in a routine are sequential, or the data being read or written is sequential, a greater chance is present that the next required item will already be in the cache. This situation results in better performance in the data processing system.
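The block transfer with look-ahead described above can be sketched as follows. This is a minimal illustration, not any particular system's implementation; the class and method names (`BlockCache`, `read`) are hypothetical, and the one-block-ahead prefetch is an assumed, simplified form of look-ahead.

```python
# Hypothetical sketch: a cache that loads fixed-size blocks from a
# backing store and, on a miss, also prefetches the next block
# (a simple one-block look-ahead).

class BlockCache:
    def __init__(self, backing, block_size=4):
        self.backing = backing          # stands in for main memory
        self.block_size = block_size
        self.blocks = {}                # block number -> list of items
        self.hits = 0
        self.misses = 0

    def _load(self, block_no):
        start = block_no * self.block_size
        self.blocks[block_no] = self.backing[start:start + self.block_size]

    def read(self, address):
        block_no, offset = divmod(address, self.block_size)
        if block_no in self.blocks:
            self.hits += 1
        else:
            self.misses += 1
            self._load(block_no)
            self._load(block_no + 1)    # look-ahead: prefetch the next block
        return self.blocks[block_no][offset]


memory = list(range(100))
cache = BlockCache(memory)
for addr in range(8):                   # a sequential access pattern
    cache.read(addr)
print(cache.hits, cache.misses)
```

Because the accesses are sequential, the single miss at address 0 loads blocks 0 and 1, and every subsequent read hits, which is the situation the text describes.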
Examples of caches include memory caches, hardware and software disk caches, and page caches. With respect to caches used by microprocessors for executing code, many systems provide a level one cache that is accessed at the speed of the processor. This level one cache also is referred to as an L1 cache. Additionally, most systems also include a level two cache, also referred to as an L2 cache. This L2 cache is often integrated with the processor. For example, a processor that is placed on a motherboard often really contains two chips. One chip contains the processor circuit with an L1 cache. The other chip contains an L2 cache. These types of systems also include a level three or L3 cache. This L3 cache is often designed as a special memory bank located on the motherboard, providing faster access than main memory, but slower access than an L1 or L2 cache on the processor itself.
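The multi-level arrangement described above can be sketched as a cascaded lookup: each miss falls through to the next, slower level, and a value found at a lower level (or in main memory) is copied back into the faster levels. This is an illustrative model only; the relative latencies and the names (`read_hierarchy`, `ACCESS_COST`) are assumptions, not measurements of any real processor.

```python
# Illustrative three-level cache lookup. Each level is modeled as a
# simple address->value map; costs are assumed relative latencies.

ACCESS_COST = [1, 10, 40]   # hypothetical costs for L1, L2, L3
MEMORY_COST = 200           # hypothetical cost of a main-memory access

def read_hierarchy(levels, memory, address):
    """Look up an address in L1, then L2, then L3, then main memory.

    Returns (value, total_cost) and refills the faster levels so a
    repeated access hits in L1.
    """
    cost = 0
    for i, level in enumerate(levels):
        cost += ACCESS_COST[i]
        if address in level:
            value = level[address]
            for upper in levels[:i]:        # refill the faster caches
                upper[address] = value
            return value, cost
    value = memory[address]                  # miss at every level
    cost += MEMORY_COST
    for level in levels:
        level[address] = value
    return value, cost


levels = [{}, {}, {}]                        # L1, L2, L3, all cold
memory = {0x10: "data"}
_, cold_cost = read_hierarchy(levels, memory, 0x10)   # goes to memory
_, warm_cost = read_hierarchy(levels, memory, 0x10)   # now hits in L1
print(cold_cost, warm_cost)
```

The cold access pays the cost of all three levels plus main memory; the warm access pays only the L1 cost, which is the performance benefit the hierarchy exists to provide.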
Entries in the L2 cache are the most current, and most entries in the L2 cache need to be reloaded when switching virtual partitions in a data processing system. Virtualization through shared processor logical partitions (SPLPAR) is a technology that allows multiple operating system instances to share a single processor. Prior to SPLPAR, an individual processor was the domain of a single operating system. To run multiple operating system instances on a single processor, the running partition is periodically stopped, its state is saved, and another partition is started on the same processor.
It is highly desirable for the L2 cache footprint for a virtual partition to be retrievable from the L3 cache. If the entries are not present in the L3 cache, the addresses are then fetched from another cache or from memory. This situation results in more time being needed to retrieve this information.
When a virtual partition is switched in a processor, most entries in the L2 cache will be cast out or removed in favor of the new virtual partition's L2 cache footprint. A cache footprint is the area within the cache that contains the entries relevant to the workload of the currently executing partition.
The L3 cache often functions as a victim cache. A victim cache is a cache that holds data or instructions that have been evicted or removed from a higher level cache. The L3 cache uses a least recently used (LRU) algorithm to cast out addresses from the L3 cache as new addresses are fetched from memory. The L2 entries from the previous partition, which were cast out into the L3 cache, become stale or age quickly and therefore are cast out of the L3 cache. This situation occurs because the new virtual partition does not use the entries in the L2 cache that were present because of the prior partition.
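The victim cache behavior described above can be sketched with an LRU structure: entries evicted from the L2 cache are inserted into the L3 cache, and when the L3 cache is full, the least recently used entry is cast out. This is a minimal sketch, assuming a small fixed capacity; the class and method names (`VictimCache`, `insert`, `lookup`) are hypothetical.

```python
from collections import OrderedDict

# Sketch of an L3 victim cache with LRU cast-out. Entries evicted
# from a higher level cache are inserted; once capacity is exceeded,
# the least recently used entry is removed.

class VictimCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()   # address -> value, oldest first

    def insert(self, address, value):
        """Called when the higher level (L2) cache evicts an entry."""
        if address in self.entries:
            self.entries.move_to_end(address)
        self.entries[address] = value
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)   # cast out the LRU entry

    def lookup(self, address):
        if address in self.entries:
            self.entries.move_to_end(address)  # refresh recency on a hit
            return self.entries[address]
        return None


l3 = VictimCache(capacity=2)
l3.insert(0xA, "prior partition entry")
l3.insert(0xB, "prior partition entry")
l3.insert(0xC, "new partition entry")   # oldest entry 0xA is cast out
print(l3.lookup(0xA))                   # the prior partition's entry aged out
```

Because the new partition's evictions keep arriving while the prior partition's entries are never referenced, the prior partition's entries age out exactly as the text describes.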
As a result, when a switch back to the prior virtual partition occurs, the L3 cache contains very little information, in terms of instructions or data, for that prior partition because of this aging process. The higher level caches, the L1 and L2 caches, contain information for the current partition. Consequently, when a switch is made back to the prior partition, very little information is present for that partition in the higher level caches, and much of the information for the partition has to be retrieved again from main memory or some other location. This situation results in a degradation in performance when a switch occurs between virtual partitions.