1. Field of the Invention
Methods and apparatuses consistent with the present invention relate to a virtual memory system, and more particularly, to reducing a page fault rate in a virtual memory system.
2. Description of the Related Art
Due to the increase in the size of software, a system such as a desktop computer or a laptop computer commonly employs a virtual memory system in which only a portion of the software, rather than the entire software, is loaded into memory.
Further, because an embedded system included in a cellular phone, a smart phone, or a PDA (personal digital assistant) provides many functions and is complicated, the type and size of software included in the embedded system are increasing. Therefore, such systems increasingly employ virtual memory technology. For example, in the case of a shadowing technique in which software code is stored in a NAND (not and) flash memory and batch-loaded into a main memory when the system boots, the capacity of the main memory must be increased in proportion to the size of the software. Therefore, there is a need for an effective alternative that can execute large software at a reasonable hardware cost. Such an alternative is a virtual memory system that can execute large software while utilizing a minimum capacity of the main memory.
The virtual memory system is used to solve the problem of the main memory having a capacity much smaller than the size of the actual software. Specifically, in the virtual memory system, the address spaces of all tasks are not loaded into the main memory; only the address-space regions required to execute the current tasks are loaded into the main memory. The address spaces which are not stored in the main memory are stored in an auxiliary memory such as a NAND flash memory or a hard disk. Accordingly, it is possible to resolve the mismatch between the size of the software and the capacity of the main memory.
An address-space region which is necessary to execute a task may exist in the auxiliary memory. Therefore, the virtual memory system has the problem of time overhead for loading a page from the auxiliary memory into the main memory. Since this time overhead is large compared with the access time for a page already in the main memory, minimizing the frequency of page loads from the auxiliary memory is very important for system performance.
In order to minimize the page loading frequency from the auxiliary memory, a page more likely to be referenced should be loaded into the main memory and a page less likely to be referenced should be stored in the auxiliary memory. That is, when a new page is loaded into the main memory, if the main memory does not have enough empty space, the page least likely to be referenced in the immediate future should be replaced among the pages already loaded in the main memory.
That is, in order to improve the system performance, it is very important to estimate the reference probability of each page.
As shown in FIG. 1, a virtual memory system includes a CPU (central processing unit) 10, a cache memory 20, a TLB (translation lookaside buffer) 30, a main memory 40, an auxiliary memory 50, and a page table 45. A page necessary to execute a task is loaded into the main memory 40 so as to be executed, and the cache memory 20 functions as a cache with respect to the main memory 40. The TLB 30 and the page table 45 serve to convert a virtual address into a physical address in the main memory 40. The page table 45 resides in the main memory 40, and the TLB 30 functions as a cache of the page table 45.
In a related art virtual memory system, the CPU 10 accesses an arbitrary instruction or data in order to execute a program as follows.
(1) The CPU refers to a virtual address and indexes the cache memory using the virtual address so as to determine whether or not the desired data exists in the cache memory. If the desired data exists in the cache memory, the CPU fetches the data.
(2) If the corresponding data does not exist in the cache memory, the CPU indexes the TLB so as to detect the physical address of the page in which the desired data exists (2-1). If the physical address is found in the TLB, the CPU uses this information to access the page in the main memory and read the desired data (2-2).
(3) If the physical address of the data to be read is not found in the TLB, the CPU indexes the page table in the main memory so as to obtain the physical address of the data (3-1). At this moment, the data may exist in the main memory or in the auxiliary memory. If the data exists in the main memory, the CPU accesses the corresponding page and reads the data (3-2).
(4) If the data does not exist in the main memory, a page fault occurs. When a page fault occurs, a page fault handler is executed such that the corresponding page is loaded into the main memory from the auxiliary memory by using the virtual address of the page in which the page fault occurred. At this moment, if the main memory does not have enough empty space to store the new page, the page having the lowest reference probability among the existing pages is replaced so as to store the new page in its place.
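Procedures (1) to (4) above can be sketched as follows. This is a minimal illustrative sketch, not an actual hardware implementation: the dictionary-based cache, TLB, page table, and the simple victim-selection helper are all hypothetical simplifications.

```python
def allocate_frame(main_memory, page_table):
    """Find a free frame, or evict the first page whose access bit is 0."""
    for frame in range(len(main_memory)):
        if main_memory[frame] is None:
            return frame
    for page, entry in page_table.items():
        if entry["present"] and entry["access_bit"] == 0:
            entry["present"] = False           # replace this page
            return entry["frame"]
    raise MemoryError("no replaceable frame")

def access(vaddr, cache, tlb, page_table, main_memory, aux_memory):
    page, offset = vaddr
    if vaddr in cache:                         # (1) cache hit
        return cache[vaddr]
    if page in tlb:                            # (2-1) TLB hit
        return main_memory[tlb[page]][offset]  # (2-2) read via TLB
    entry = page_table[page]                   # (3-1) page-table walk
    if not entry["present"]:                   # (4) page fault
        frame = allocate_frame(main_memory, page_table)
        main_memory[frame] = list(aux_memory[page])
        entry["frame"] = frame
        entry["present"] = True
    entry["access_bit"] = 1                    # hardware sets the access bit
    return main_memory[entry["frame"]][offset] # (3-2) read from main memory
```

Note that the access bit is set only on the page-table walk of procedure (3); a TLB hit in procedure (2) bypasses the page table entirely, which is the root of the problem described below.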
In a general system, hardware processes procedures (1) to (3), whereas procedure (4), in which the page fault occurs, is processed by software. That is, generally, procedures (1) to (3) are not performed by software. Therefore, software cannot obtain information indicating which pages are accessed by the CPU in procedures (1) to (3); it can only obtain information indicating the page in which a page fault occurs through procedure (4). Accordingly, when evaluating the reference probability of each page, it is difficult to realize an LRU (least recently used) page replacement policy, which requires complete page access information.
Since the LRU policy cannot be used as a page replacement policy in the virtual memory system, an NUR (not used recently) policy, such as a clock policy, is used instead. In order to use the NUR policy, an access bit is added to each page table entry, as indicated by reference number 46 shown in FIG. 1. When an arbitrary page is accessed, hardware automatically sets the access bit of the corresponding page table entry to 1. By using this access bit, it can be known whether or not the page has recently been accessed.
There are various NUR page replacement policies in which the access bit is utilized. For example, Mach operating system version 2.5 realizes the NUR policy by using two linked lists that hold pages according to whether their access bit is 1 or 0. Further, a clock page replacement policy realizes the NUR policy by using one linked list and two pointers.
FIG. 2 is a view showing a related art clock policy. In the clock policy, all of the pages in the main memory are managed as one circular list and there are two arms. A back arm 61 is used to replace a page and a front arm 62 is used to reset the access bit of a page. That is, when a page fault occurs, the back arm 61 scans the pages stored in the main memory in a round-robin manner so as to replace the first page whose access bit is 0. At the same time, the front arm 62 also visits the pages in a round-robin manner and resets the access bit of each visited page to 0. A predetermined interval between the front arm 62 and the back arm 61 is always maintained.
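The two-arm scan can be sketched as follows. This is an illustrative sketch only: the class name, the fixed arm interval, and the frame-indexed lists are assumptions for the example, not the structure of FIG. 2.

```python
class Clock:
    """Sketch of a clock replacement policy with a back arm (victim
    selection) and a front arm (access-bit reset) kept a fixed
    interval apart on a circular list of frames."""

    def __init__(self, num_frames, interval=2):
        self.pages = [None] * num_frames   # circular list of resident pages
        self.access = [0] * num_frames     # access bit per frame
        self.back = 0                      # back arm position
        self.interval = interval           # fixed front-arm lead over back arm

    def touch(self, frame):
        """Hardware sets the access bit when the page is referenced."""
        self.access[frame] = 1

    def replace(self, new_page):
        """On a page fault, advance both arms round-robin until the back
        arm finds a frame whose access bit is 0; place new_page there."""
        n = len(self.pages)
        while True:
            front = (self.back + self.interval) % n
            self.access[front] = 0         # front arm resets the bit ahead
            if self.access[self.back] == 0:
                victim, frame = self.pages[self.back], self.back
                self.pages[frame] = new_page
                self.back = (self.back + 1) % n
                return victim, frame
            self.back = (self.back + 1) % n
```

Because the front arm clears bits a fixed distance ahead of the back arm, a page referenced after the front arm passes (its bit set back to 1 by `touch`) survives the back arm's next visit, which is what gives the policy its LRU-like behavior.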
The clock replacement policy guarantees that a recently referenced page, that is, a page having an access bit of 1, is not replaced in the main memory during a predetermined period of time, so that the clock replacement policy behaves similarly to LRU. On the other hand, hardware which does not supply the access bit may emulate the access bit of the page table entry in software so as to realize an NUR page replacement policy, such as the clock replacement policy.
A disadvantage of the related art page replacement policies that utilize the access bit, whether supplied by hardware or emulated in software, is that the reference information of a page whose access bit has been reset to 0 can be lost.
For example, in the case that the clock page replacement policy is used, if the access bit of an arbitrary page is reset to 0 by the front arm, no modification is made to the TLB entry of the corresponding page. Therefore, in procedure (2-1) shown in FIG. 1, that is, when the physical address of the page to be accessed is found in the TLB, the access bit of the corresponding page is not set to 1. Because the entry is found in the TLB, the CPU does not access the page table but accesses the main memory through procedure (2-2), so no modification is made to the page table.
FIG. 3 is a view showing an operation of the page replacement according to the related art.
In block 71, while a predetermined page is replaced according to the page replacement policy, the access bit of a page K in the page table 47 is reset to 0 (S1).
In block 72, when the CPU 11 attempts to read the page K, the CPU 11 finds its entry in the TLB 31 (S2). Therefore, the CPU 11 directly accesses the page K without modifying the access bit in the page table 47 (S3). As a result, even though the page K is accessed by the CPU 11, its access bit in the page table 47 remains 0.
In block 73, the page K is replaced according to the page replacement policy. Since the access bit of the page K in the page table 47 is still 0, the page K is removed by the back arm. However, since the page K, which was referenced in block 72, has been removed, the CPU 11 must read the page K from the auxiliary memory again when the page K is next accessed.
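The scenario of blocks 71 to 73 can be sketched as follows. The page names, the TLB contents, and the `read` helper are hypothetical choices for illustration; the point is only that a TLB hit bypasses the page table, so the reference to page K leaves its access bit at 0 and the back arm still selects it as the victim.

```python
# S1 (block 71): the front arm has reset the access bit of page K to 0,
# but page K still has a valid TLB entry.
page_table = {"K": {"access_bit": 0}, "L": {"access_bit": 1}}
tlb = {"K": 7}                          # frame number 7 is illustrative

def read(page):
    """Return True if this reference updated the page-table access bit."""
    if page in tlb:                     # S2/S3 (block 72): TLB hit,
        return False                    # page table is never touched
    page_table[page]["access_bit"] = 1  # only a page-table walk sets the bit
    return True

read("K")                               # the CPU references page K again

# Block 73: the back arm picks the first page whose access bit is 0,
# which is page K, despite its recent reference.
victim = next(p for p, e in page_table.items() if e["access_bit"] == 0)
```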
Consequently, there is a problem that the reference information of a recently referenced page is lost, so that the recently referenced page is replaced in the main memory. As a result, the frequency of page loads from the auxiliary memory increases, and the performance of the entire system may be degraded.