Processors typically perform computational tasks in various applications, which may include embedded applications associated with portable or mobile electronic devices. The ever-expanding feature set and enhanced functionality associated with these electronic devices generally demands increasingly powerful processors. To meet this demand, most modern processors store recently executed instructions and recently used data in one or more cache memories that an instruction execution pipeline can readily access, thereby capitalizing on the spatial and temporal locality properties exhibited by most programs or applications. In this context, a cache generally refers to a high-speed (usually on-chip) memory structure comprising a random access memory (RAM) and/or a corresponding content addressable memory (CAM).
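The locality properties mentioned above can be illustrated with a minimal sketch. In the following example (illustrative only, not from the source), a stride-1 array traversal exhibits spatial locality, because consecutive elements share cache lines, and temporal locality, because the accumulator is reused on every iteration:

```c
#include <stddef.h>

/* Sums an array with a sequential (stride-1) access pattern.
 * Spatial locality: consecutive elements occupy the same cache line,
 * so one miss per line services several subsequent accesses.
 * Temporal locality: the accumulator `total` is reused every
 * iteration and stays resident in a register or the cache. */
long sum_sequential(const int *data, size_t n)
{
    long total = 0;
    for (size_t i = 0; i < n; i++)
        total += data[i]; /* stride-1 access: exploits spatial locality */
    return total;
}
```

A strided or random traversal of the same array would touch more cache lines per useful element, which is why caches reward the sequential pattern shown here.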
Beyond the last cache level, many processors have main memories that combine multiple memory types, which typically differ in throughput, power efficiency, capacity, and latency. For example, the memory types used in a main processor memory may include wide input/output (I/O) memory, serial memory, extreme data rate (XDR) or mobile XDR (M-XDR) memory, double data rate (DDR) memory, low power DDR (LPDDR) memory, stacked memory interface (SDI) architectures, and external bus interface (EBI) memory architectures, among others. Existing techniques that attempt to balance or otherwise manage the tradeoffs among the different throughputs, power efficiencies, capacities, latencies, and other characteristics of these memory types tend to assign fixed addresses to the different memory types. For example, one proposed solution to efficiently utilize processor memory is to statically configure, at design time, how software allocates the different memories. However, this approach suffers from various drawbacks, including that statically defining how the more efficient memory is assigned may leave that memory sitting idle in many use cases. Another proposed solution is to have a dynamic memory allocation routine handle the assignments. However, this approach also has drawbacks, including that actual memory utilization may not be known to the dynamic memory allocation routine at the time the memory is assigned.
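The dynamic-allocation approach described above can be sketched as follows. This is a hypothetical illustration, not an implementation from the source: the region names, sizes, memory-type labels, and the latency-sensitivity hint are all assumptions, and a simple bump allocator stands in for a full heap. Note that the allocator must decide where each request lands at allocation time, before actual utilization is known, which is the drawback identified above.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical two-region dynamic allocator. Requests flagged as
 * latency-critical prefer a small, fast memory region and fall back
 * to a larger, slower region when the fast region is exhausted.
 * Region sizes and memory-type labels are illustrative assumptions. */

#define FAST_REGION_SIZE  (4 * 1024)   /* e.g. wide-I/O: low latency, small capacity */
#define BULK_REGION_SIZE  (64 * 1024)  /* e.g. LPDDR: higher latency, larger capacity */

static uint8_t fast_region[FAST_REGION_SIZE];
static uint8_t bulk_region[BULK_REGION_SIZE];
static size_t  fast_used, bulk_used;

/* Allocate `size` bytes; `latency_critical` is a caller-supplied hint.
 * Returns NULL when neither region can satisfy the request. */
void *mem_alloc(size_t size, int latency_critical)
{
    if (latency_critical && fast_used + size <= FAST_REGION_SIZE) {
        void *p = &fast_region[fast_used];
        fast_used += size;
        return p;
    }
    if (bulk_used + size <= BULK_REGION_SIZE) {
        void *p = &bulk_region[bulk_used];
        bulk_used += size;
        return p;
    }
    return NULL; /* both regions exhausted */
}
```

Because the hint is supplied when the allocation is requested, the routine cannot reassign data later if the workload's actual access pattern turns out to differ, mirroring the utilization-visibility drawback noted in the text.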
Accordingly, processor optimizations that increase execution speed, reduce power consumption, and enhance memory utilization are desirable.