A Non-Uniform Memory Access architecture (hereinafter referred to as NUMA) satisfies demands for high-performance computing by virtue of its scalability, and is increasingly widely deployed on mid-range and high-end servers. However, remote-node access latency in NUMA is a bottleneck for system performance; in particular, in multi-core and many-core configurations the number of nodes is large, and the remote access latency grows as the number of nodes increases. For data frequently accessed by the system, such as kernel code and kernel read-only data, normally only one copy is stored in the system, so when a process running on a remote node needs to switch into the kernel, the remote access latency becomes one of the key factors limiting performance.
In order to solve the aforementioned problem of remote access latency, in the prior art a copy of the kernel code is saved on each of the nodes so that accesses to the kernel code become accesses to local memory. A paging mechanism is employed to map the same linear address to different physical addresses on the respective nodes, and the mapping relation is recorded in each node's kernel page table. Once a process runs on a node, the kernel page-table portion of that node is synchronized into the page table of the process, thereby enabling the process to access the local copies of the kernel code and read-only data.
However, in making the present application, the inventors found that the prior art has at least the following drawback: since the process must be synchronized with the kernel page tables of the respective nodes, the contents of its page table need to be modified frequently when the process migrates among the nodes, which degrades system performance.
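The prior-art scheme and its migration cost can be illustrated with a toy model. This is a sketch only: the address constants, the dict-based "page tables", and the helper names below are assumptions made for illustration, not real kernel data structures.

```python
# Toy model of per-node kernel code replication via paging.
# Illustrative assumptions: addresses and dict-based "page tables" are
# stand-ins for real page-table structures.

KERNEL_LINEAR_ADDR = 0xFFFF0000      # the same linear address on every node

# Each node holds its own physical copy of the kernel code / read-only data,
# so each node's kernel page table maps the shared linear address to a
# different, node-local physical address.
node_kernel_page_tables = {
    0: {KERNEL_LINEAR_ADDR: 0x00100000},   # node 0's local copy
    1: {KERNEL_LINEAR_ADDR: 0x40100000},   # node 1's local copy
}

def sync_kernel_entries(process_page_table, node_id):
    """Copy the node's kernel page-table portion into the process page
    table, returning how many entries had to be (re)written."""
    src = node_kernel_page_tables[node_id]
    changed = sum(1 for la, pa in src.items()
                  if process_page_table.get(la) != pa)
    process_page_table.update(src)
    return changed

def translate(process_page_table, linear_addr):
    """Resolve a linear address through the process page table."""
    return process_page_table[linear_addr]

# A process scheduled on node 0 resolves the kernel address to node 0's
# local copy, so the kernel access stays in local memory.
pt = {}
sync_kernel_entries(pt, 0)
assert translate(pt, KERNEL_LINEAR_ADDR) == 0x00100000

# The drawback described above: migrating the process to node 1 forces its
# kernel page-table entries to be rewritten to point at node 1's copy.
rewritten = sync_kernel_entries(pt, 1)
assert rewritten == 1
assert translate(pt, KERNEL_LINEAR_ADDR) == 0x40100000
```

In this model, every migration rewrites the kernel portion of the process page table; with many kernel pages and frequent migrations, that synchronization overhead is the performance cost the inventors identify.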