The term “Moore's Law” refers to the observation that, since modern integrated circuits (ICs) were first commercially produced, the number of transistors that can economically be placed on an IC has increased exponentially, doubling approximately every two years. So far, Moore's Law has proven reliably predictive of the capabilities of digital electronic devices, such as the processing speed of central processing units (CPUs). A related trend is seen in the storage capacity of random access memory (RAM), which has increased over time at about the same rate as processing power.
One side-effect of the huge growth in RAM capacity is that some computing architectures cannot natively access all of the RAM that may be available on modern hardware. At certain times during operation of a computer, the instruction processor(s) that execute software are limited in the address space they can directly access. For example, a processor may be limited to 16-bit or 32-bit addresses in certain modes, yet have a full address capability of 40 bits or greater during normal operation. In such a case, the processor would only be able to directly access memory locations up to FFFFh (16-bit) or FFFF_FFFFh (32-bit), even though memory locations up to FF_FFFF_FFFFh may exist.
Nonetheless, there are advantages to having large amounts of available RAM. As a result, computing-architecture adaptations have been made to allow processors, or processor modes, with limited address spaces to access data and instructions anywhere in available system memory. In some cases, system software has been used to ensure that memory accesses conform to the address limitations. While relatively easy to implement, such a software solution generally degrades performance.