1. Field of the Invention
The present invention relates to a method of managing memory power, and more particularly, to a method for reducing memory power consumption using a software technique.
2. Description of the Related Art
With the development of semiconductor process techniques, many technical limitations on manufacturing embedded systems have been mitigated. Advances in process miniaturization enable the production of small chips with performance higher than that of conventional ones. However, such embedded systems suffer from the problem that they can be used only for a short period of time. Higher device density increases power consumption even as it improves the integration of semiconductor devices. To increase the performance of a processor, that is, its processing speed, the rate of the clock signal supplied to the processor must be raised. Because the clock rate increases in proportion to the supply voltage, the power used in the processor is proportional to the square of the voltage supplied to it. Accordingly, the battery capacity of an embedded system has reached a level at which it can barely sustain the power consumption of the system, and the resulting short operating time inconveniences users.
A memory used in a computer is a volatile storage device generally composed of RAM (Random Access Memory). RAM is classified into static RAM (SRAM) and dynamic RAM (DRAM). DRAM is mainly used for a main memory requiring large capacity because it has a simple structure, low power consumption, and low cost. SRAM is used for a cache memory because, although it offers a high access speed, it provides small storage capacity for its cost.
Recently, the double data rate DRAM (DDR-DRAM) and the Rambus DRAM (RDRAM), both developed from the DRAM, have been proposed. The RDRAM uses a memory address allocation method largely different from that of conventional systems because of its transfer rate and the improved buses it uses. Specifically, in the RDRAM, the memory bus width is identical to the bandwidth of each chip, and memory addresses are allocated contiguously within one chip to reduce the number of activated chips. For this reason, the RDRAM is adopted in most systems in conventional studies.
Conventional techniques for reducing memory power consumption focus on exclusively assigning memory regions to operating processes. A computer main memory system consists of banks, and the operating state of a memory chip can be controlled bank by bank.
FIG. 1 illustrates a state transition diagram of a conventional RDRAM. FIG. 1 shows the magnitude of the power consumed in each state of the RDRAM and the time required for each state change. An RDRAM module can take four states, and different modules can be in different states. No data is lost in any state. A state change in the module is made through a memory controller device, and a state transition can be accomplished by writing to a specific register of the controller through a PCI bus.
A memory waits in a stand-by state. When the memory receives a request for a read/write operation, the state of the memory is changed to an attention state to carry out the requested operation. After a predetermined lapse of time, the attention state returns to the stand-by state. That is, only the stand-by and attention states are used in the conventional technique. However, because devices that are not substantially in use can be placed in a nap state or a power-down state requiring lower power, power consumption can be reduced by using these low-power states.
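The four-state behavior described above can be sketched as a simple simulation. This is an illustrative model only, not the actual controller interface; the class and method names are hypothetical, and the relative power values are placeholders standing in for the datasheet figures of FIG. 1.

```python
from enum import Enum

class BankState(Enum):
    ATTENTION = "attention"    # servicing a read/write request
    STANDBY = "standby"        # ready and waiting for requests
    NAP = "nap"                # low power, relatively fast wake-up
    POWERDOWN = "powerdown"    # lowest power, slowest wake-up

# Hypothetical relative power costs per state (real values would come
# from the RDRAM datasheet, as depicted in FIG. 1).
RELATIVE_POWER = {
    BankState.ATTENTION: 1.0,
    BankState.STANDBY: 0.6,
    BankState.NAP: 0.1,
    BankState.POWERDOWN: 0.01,
}

class Bank:
    """One memory module; no data is lost in any state."""

    def __init__(self):
        self.state = BankState.STANDBY  # a memory waits in stand-by

    def access(self):
        # A read/write request moves the bank to the attention state.
        self.state = BankState.ATTENTION

    def finish(self):
        # After the operation completes, return to stand-by.
        self.state = BankState.STANDBY

    def idle(self, deep=False):
        # A bank not substantially in use can be dropped to a
        # low-power state: nap, or power-down for deeper savings.
        self.state = BankState.POWERDOWN if deep else BankState.NAP
```

The conventional technique cycles only between `STANDBY` and `ATTENTION`; the power savings discussed here come from additionally using `idle()`.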
FIG. 2 illustrates an example of memory allocation of a PADRAM (Power Aware Page Allocation DRAM). The method shown in FIG. 2 proposes a contiguous memory allocation technique, which, instead of selecting memory banks at random, keeps allocating memory from a selected bank until all the memory of that bank is allocated. That is, once a memory bank starts being used, it is allocated contiguously irrespective of the process. This method reduces the number of memory banks used by a process, and the remaining banks can be converted into a low-power state while the process is executed, resulting in a decrease in power consumption. Furthermore, the aforementioned method proposes additional hardware capable of converting the memory power state, and it changes the power state to reduce power consumption. However, this method has the shortcoming that many processes share one bank, because it focuses on minimizing the number of banks used without regard to a multi-process environment.
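The effect of contiguous allocation on the number of active banks can be illustrated with a short sketch. The function below is hypothetical and uses an arbitrary bank size; it merely shows why packed page indices touch fewer banks than scattered ones.

```python
def banks_used(page_indices, pages_per_bank):
    """Map each allocated page to its bank and return the set of
    banks that must stay active (a page at index i lives in bank
    i // pages_per_bank)."""
    return {i // pages_per_bank for i in page_indices}

# With 4 pages per bank (an illustrative size), contiguous
# PADRAM-style allocation packs 6 pages into just 2 banks:
packed = banks_used(range(6), 4)

# Random placement of the same 6 pages can activate 6 banks,
# leaving none eligible for a low-power state:
scattered = banks_used([0, 5, 9, 13, 17, 21], 4)
```

Every bank outside `packed` can be napped or powered down while the process runs, which is the source of the power saving.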
There is another conventional method that uses the scheduler of an operating system. This method allocates banks randomly, so the memory banks that are currently active are those used by the currently running process. At each context switch between processes, which occurs as the processes execute, the operating system changes the memory state: it activates the memory banks required by the process about to run and converts the other banks into a low-power state, thereby reducing a large amount of power.
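The context-switch step described above can be sketched as follows. The function name and data structures are hypothetical; in a real system the scheduler would program the memory controller's registers (e.g., over the PCI bus mentioned in connection with FIG. 1) rather than return Python sets.

```python
def on_context_switch(all_banks, next_process_banks):
    """At a context switch, keep active only the banks the incoming
    process needs; every other bank goes to a low-power state."""
    active = set()
    low_power = set()
    for bank in all_banks:
        if bank in next_process_banks:
            active.add(bank)      # stand-by, ready for the next process
        else:
            low_power.add(bank)   # nap until some process needs it
    return active, low_power
```

For example, if the incoming process owns banks 0 and 3 of an 8-bank system, six banks can be napped for the whole time slice.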
FIG. 3 illustrates an example of memory allocation of a PAVM (Power Aware Virtual Memory). Referring to FIG. 3, the memory banks used by processes are arranged exclusively using only a software technique, and the states of the memory banks are changed through the scheduler. The memory banks are classified into shared memory banks and general memory banks. Shared memory is allocated to a shared memory bank, and general memory is allocated based on the bank information that each process maintains. General memory banks are arranged so that they do not overlap between processes, which isolates the region of memory actively in use at any instant from the rest of the memory.
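The bookkeeping behind this arrangement can be sketched with a small allocator. This is an illustrative model under assumed simplifications (one general bank per process, a single shared bank, hypothetical class and method names), not the actual PAVM implementation.

```python
class BankAllocator:
    """Sketch of PAVM-style bank bookkeeping: shared memory always
    goes to a dedicated shared bank, and each process draws general
    memory only from banks it owns, so general banks never overlap
    between processes."""

    def __init__(self, num_banks, shared_bank=0):
        self.shared_bank = shared_bank
        # Banks not reserved for shared memory are free general banks.
        self.free_banks = [b for b in range(num_banks) if b != shared_bank]
        self.process_banks = {}  # pid -> set of general banks owned

    def allocate(self, pid, shared=False):
        """Return the bank from which this allocation is served."""
        if shared:
            return self.shared_bank
        banks = self.process_banks.setdefault(pid, set())
        if not banks:
            # First general allocation: give the process its own bank,
            # keeping per-process bank sets disjoint.
            banks.add(self.free_banks.pop(0))
        return next(iter(banks))
```

Because each process's bank set is disjoint from every other's, the scheduler can activate exactly one process's banks at a context switch and nap all the rest.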