Caches are widely implemented in computer architectures to provide rapid access to frequently used data. Such architectures typically support cache operation instructions to manage the caches. For example, a single-processor architecture (i.e., a single-core system) may include an instruction that, when executed, causes all caches of the system to be flushed.
Existing cache operation instructions were generally developed for single-core systems and operate either on an individual cache line or over an entire caching structure as a whole. Despite the introduction of multi-level cache structures, existing cache operation instructions, such as those developed for x86-compatible processors, do not provide fine-grained control over the individual levels of a given cache hierarchy.
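The two granularities described above can be made concrete with a toy model. The sketch below is purely illustrative and does not model any real instruction set: it represents a flattened two-level hierarchy and shows that the available operations either evict one line from every level (roughly analogous to a per-line flush such as x86 CLFLUSH) or empty every level wholesale (roughly analogous to a whole-hierarchy flush such as x86 WBINVD). Nothing in between, such as flushing only one level, is expressible.

```python
# Toy model of a two-level cache hierarchy. All names are illustrative
# assumptions for this sketch, not drawn from any real architecture.

class Cache:
    def __init__(self, name):
        self.name = name
        self.lines = {}  # address -> cached data

    def fill(self, addr, data):
        self.lines[addr] = data

# A per-core L1 and a shared L2, flattened into one list for simplicity.
hierarchy = [Cache("L1"), Cache("L2")]

def flush_line(hierarchy, addr):
    """Line-granularity operation: evicts one address from *every*
    level -- it cannot target a single level of the hierarchy."""
    for cache in hierarchy:
        cache.lines.pop(addr, None)

def flush_all(hierarchy):
    """Whole-structure operation: empties every level at once."""
    for cache in hierarchy:
        cache.lines.clear()

# Note what is deliberately absent: a flush_level(hierarchy, "L1")
# operation. Existing instructions offer no per-level equivalent.
```

In this model, any attempt to clear just the L1 while preserving the shared L2 forces a choice between iterating `flush_line` over every resident address (evicting the L2 copies as well) or invoking `flush_all` (emptying everything), which mirrors the lack of per-level control noted above.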
Moreover, existing cache operations are generally unable to provide satisfactory performance in architectures using plural processors (i.e., dual-core or multi-core systems). These limitations are particularly troublesome in power-managed systems, where any combination of the cores and the caches may be powered up or down at any given time.
For example, a multi-core system may employ multiple caches configured in a multi-level cache hierarchy. Some of the caches may be used exclusively by individual processors, while others may be shared by several or all of the processors. During operation, it may be advantageous to power down one core while allowing a second core to continue processing. With existing cache operations, powering down the first core flushes the entire cache hierarchy, including the caches shared with the second core. As a result, the performance of the second core will likely suffer while the data contents of the shared caches are re-established.
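The performance penalty described above can be illustrated with a small simulation. This is a sketch under simplifying assumptions (a single shared cache, a fixed working set, made-up parameters), not a model of any real processor: core 1 warms the shared cache and then hits in it, core 0 is powered down with a flush-everything operation, and core 1's miss count doubles because its entire working set must be refetched.

```python
# Illustrative simulation of the shared-cache penalty when one core's
# power-down flushes the whole hierarchy. All parameters are invented.

class SharedCache:
    def __init__(self):
        self.lines = {}
        self.misses = 0

    def access(self, addr):
        """Return the data for addr, counting a miss (and simulating a
        fill from memory) when the line is absent."""
        if addr not in self.lines:
            self.misses += 1
            self.lines[addr] = f"mem[{addr}]"  # fetched from DRAM
        return self.lines[addr]

def power_down_with_full_flush(shared):
    """Coarse legacy behavior: powering any core down flushes the
    entire hierarchy, including caches still in use by other cores."""
    shared.lines.clear()

shared = SharedCache()
working_set = range(64)

# Core 1 warms the shared cache (compulsory misses only) ...
for addr in working_set:
    shared.access(addr)
warm_misses = shared.misses

# ... and subsequent accesses all hit.
for addr in working_set:
    shared.access(addr)
assert shared.misses == warm_misses  # no new misses

# Core 0 powers down; the whole hierarchy is flushed.
power_down_with_full_flush(shared)

# Core 1 must now re-establish its entire working set.
for addr in working_set:
    shared.access(addr)
assert shared.misses == 2 * warm_misses
```

A per-level or per-core flush, by contrast, would leave core 1's lines resident in the shared cache, and the third loop would incur no additional misses.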