One feature—and undeniable advantage—of a general-purpose computer is its ability to perform a limitless array of functions. A computer has a set of instructions that it can carry out. A programmer can enable a computer to perform any task within its physical capabilities—e.g., mathematical computation, storage/retrieval of data, input/output, encryption/decryption, etc.—simply by providing the computer with the instructions (i.e., a program) to perform such a task. While the boundless versatility of the computer has been a boon to nearly every field of human endeavor, this same versatility also has a downside: since a computer can perform nearly any function, it can be instructed to do bad as well as good. The same computer that has been programmed to perform banking transactions, or restrict access to corporate secrets, or enforce licensing terms for copyrighted information, could also be programmed to raid customer bank accounts, divulge corporate secrets, or make illegal copies of copyrighted content. Any function that has been entrusted to a computer can be sabotaged by a malevolent programmer with unfettered access to the computer's capabilities. Thus, the task of building a computer that is resistant to such sabotage often comes down to limiting access to some of the computer's resources, so that those resources can only be used under appropriate circumstances.
One important set of resources to which access can be limited is the set of resources that store data—e.g., the computer's memory, registers, etc. These data storage resources may store valuable or sensitive data, such as cryptographic keys that protect commercially significant information, or passwords that protect access to bank accounts. The existence of this type of data presents a dilemma with regard to its use in a computer. For example, a computer that uses cryptography to protect information must know the cryptographic key that decrypts the information (or at least some representation of that key) and must be able to use this key to decrypt the information under the right circumstances. However, the computer cannot give unfettered access to this key, or else a dishonest person could simply distribute copies of the key to everyone in the world, which would destroy the protection scheme. The same can be said of various types of information: passwords, corporate secrets, and even the code that protects keys, passwords, and secrets. The computer needs this information to be in memory so that it can be used legitimately, but the computer must protect this information from being used illegitimately or maliciously. In view of these examples, it can be seen that much computer security can be achieved if some of the computer's memory (and other data storage resources) can be cordoned off so that access is granted when the attendant circumstances are right, and denied when they are not. Resources that have been cordoned off in this manner are sometimes called “curtained memory.”
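The dilemma above—a key that must be usable but never freely readable—can be illustrated with a minimal sketch. Everything here is hypothetical: the XOR "cipher" is a stand-in for a real cipher, the `trusted` flag is a stand-in for a genuine check of the attendant circumstances, and Python offers no real enforcement (the point is only the policy, not the mechanism).

```python
# Hypothetical sketch: a secret that can be *used* through a guarded
# interface, but is never handed out directly. The XOR "cipher" and the
# trusted flag are illustrative stand-ins, not a real protection scheme.

class CurtainedKey:
    def __init__(self, key: int):
        self._key = key  # held internally; no method ever returns it

    def decrypt(self, ciphertext: int, trusted: bool) -> int:
        # The "attendant circumstances" check: deny use when they are wrong.
        if not trusted:
            raise PermissionError("access to curtained key denied")
        return ciphertext ^ self._key

vault = CurtainedKey(0x5A)
ciphertext = ord("A") ^ 0x5A  # 0x1B; produced with the same toy cipher
```

A trusted caller can ask the vault to decrypt on its behalf (`vault.decrypt(ciphertext, trusted=True)` recovers `ord("A")`), while an untrusted caller is refused—and in neither case does the key byte itself cross the interface.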
Various systems exist in which access to data storage is at least somewhat limited. For example, most modern operating systems implement the concept of an “address space,” where each process is assigned (generally on a continually-changing basis) certain pages or segments of physical memory that the process can access through its virtual memory mappings, and where a process cannot access pages (or segments) that are in another process's address space. In some sense, this scheme limits access to memory, since certain portions of the memory can be accessed only if the access request originates from the process to which the memory portion belongs. However, this scheme is easily subverted. Some processors allow physical memory to be accessed directly (i.e., without using the virtual memory mappings), so a process could simply execute an instruction to access a given physical address, regardless of whether that address had been assigned to the process's address space. Even in a processor that disallows direct physical addressing of most memory (e.g., the INTEL x86 family of processors), the virtual memory mappings are generally stored in accessible memory, so a process can access memory outside of its address space simply by changing the virtual memory mappings to point to a physical address that the process is not supposed to access.
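The subversion described above can be sketched with a toy model of address translation. The page numbers, frame numbers, and memory contents are hypothetical; the sketch only shows why writable mappings defeat the address-space boundary.

```python
# Toy model of virtual-to-physical translation. If the mappings live in
# memory that a process can write, the process can re-point one of its
# own mappings at a physical frame it was never assigned.

physical_memory = {0: "process A data", 1: "process B secret"}

# Process A's page table maps virtual page -> physical frame.
# A has legitimately been assigned only frame 0.
page_table_A = {0: 0}

def read_virtual(page_table, vpage):
    """Translate a virtual page through the table, then read memory."""
    return physical_memory[page_table[vpage]]

legit = read_virtual(page_table_A, 0)   # yields A's own data

# Subversion: A overwrites its own mapping to point at B's frame, then
# issues an ordinary virtual read, which the translation happily honors.
page_table_A[0] = 1
stolen = read_virtual(page_table_A, 0)  # yields B's data
```

Note that the second read is indistinguishable from a legitimate one as far as the translation mechanism is concerned—the breach happened when the mapping itself was altered.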
Some systems attempt to prevent unauthorized access requests by evaluating the allowability of each access request before it is executed. For example, a processor could trap all memory access instructions so that an exception handler can evaluate each memory access request. However, such a system is inherently inefficient, since every access request must await evaluation before it can proceed.
What is needed is a way to define the logical conditions under which a limitation on access to resources can be ensured and perpetuated, and a system that can control access to resources by taking advantage of these logical conditions without having to specifically evaluate each access request. No such system has been realized in the prior art.