As the use of computer systems grows, organizations are becoming increasingly reliant upon them. A malfunction in the computer system can severely hamper the operation of such organizations. Thus organizations that use computer systems are vulnerable to users who may intentionally or unintentionally cause the computer system to malfunction.
One way to compromise the security of a computer system is to cause the computer system to execute software that performs harmful actions on the computer system. There are various types of security measures that may be used to prevent a computer system from executing harmful software. One example is to check all software executed by the computer system with a "virus" checker. However, virus checkers only search for very specific software instructions. Many methods of using software to tamper with a computer's resources would not be detected by a virus checker.
Another very common measure used to prevent the execution of software that tampers with a computer's resources is the "trusted developers approach". According to the trusted developers approach, system administrators limit the software that a computer system can access to only software developed by trusted software developers. Such trusted developers may include, for example, well-known vendors or in-house developers.
Fundamental to the trusted developers approach is the idea that computer programs are created by developers, and that some developers can be trusted not to produce software that compromises security. Also fundamental to the trusted developers approach is the notion that a computer system will only execute programs stored at locations that are under the control of the system administrators.
Recently developed methods of running applications involve the automatic and immediate execution of software code loaded from remote sources over a network. When the network includes remote sources that are outside the control of system administrators, the trusted developers approach does not work.
One attempt to adapt the trusted developers approach to systems that can execute code from remote sources is referred to as the sandbox method. The sandbox method allows all code to be executed, but places restrictions on remote code. Specifically, the sandbox method permits trusted code full access to a computer system's resources and permits remote code only limited access. Trusted code is usually stored locally on the computer system under the direct control of the owners or administrators of the computer system, who are accountable for its security.
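The sandbox policy described above can be sketched as a simple access check. The sketch below is a toy model, not any particular platform's security API; the resource names and the local/remote flag are illustrative assumptions.

```python
# Toy sketch of the sandbox policy: trusted (local) code gets full
# access; all remote code is confined to one fixed, limited set of
# resources, regardless of which remote source supplied it.

FULL_ACCESS = {"read_any_file", "write_any_file", "open_socket", "exec"}
SANDBOX_ACCESS = {"open_socket"}  # hypothetical limited set

def allowed_actions(is_local: bool) -> set:
    """Return the actions code may perform under the sandbox policy."""
    return FULL_ACCESS if is_local else SANDBOX_ACCESS

def may_perform(is_local: bool, action: str) -> bool:
    """True if code with the given origin may perform the action."""
    return action in allowed_actions(is_local)
```

Note that the only input to the decision is whether the code is local or remote; the identity of the remote source plays no role, which is precisely the lack of granularity discussed next.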
One drawback of the sandbox approach is that it is not very granular: all remote code is restricted to the same limited set of resources. Very often, there is a need to permit remote code from one source access to one set of computer resources while permitting remote code from another source access to a different set of computer resources. For example, there may be a need to limit access to one set of files associated with one bank to remote code loaded over a network from a source associated with that bank, and to limit access to another set of files associated with another bank to remote code loaded over a network from a source associated with the other bank.
Providing security measures that allow more granularity than the sandbox method involves establishing a complex set of relationships between principals and permissions. A "principal" is an entity in the computer system to which permissions are granted. Examples of principals include processes, objects and threads. A "permission" is an authorization by the computer system that allows a principal to perform a particular action or function.
When code is received from a particular source, the set of permissions appropriate for the security of the computer system must be assigned to the code. If a set of permissions inappropriate for the security of the computer system is assigned to the code, the integrity and security of the computer system's resources may be compromised. For example, a routine from a trusted source may perform security-sensitive operations and use security mechanisms to ensure the secure performance of such operations. Thus, it is appropriate to grant that routine permissions that allow access to sensitive resources. On the other hand, a routine from an untrusted source should not be granted those same permissions.
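The finer-grained model of per-source permissions can be illustrated with a toy policy that maps a code source to its own permission set. The source URLs and permission names below are hypothetical, chosen to mirror the two-bank example above.

```python
# Toy sketch of source-based permission assignment: each code source
# is granted its own permission set, so remote code from one bank's
# server cannot touch the files associated with another bank.

POLICY = {
    "local": {"read:/etc", "write:/etc", "read:/banks"},
    "https://bank-a.example": {"read:/banks/a", "write:/banks/a"},
    "https://bank-b.example": {"read:/banks/b", "write:/banks/b"},
}

def permissions_for(source: str) -> set:
    """Look up the permissions granted to code from a given source.
    Unknown sources are granted nothing."""
    return POLICY.get(source, set())

def check_permission(source: str, permission: str) -> bool:
    """True if code from the source holds the named permission."""
    return permission in permissions_for(source)
```

Under such a policy, the security decision depends on which source supplied the code, not merely on whether the code is local or remote.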
Like most software, code from trusted sources and remote sources contains identifiers (i.e., names) used to identify entities such as routines, functions, methods, or classes. The identifiers within code are used, for example, to identify a called routine when one routine calls another routine.
Unfortunately, some identifiers contained in remote code from one remote source may be identical to identifiers in remote code from another remote source, or identical to identifiers in trusted code. Further, a routine contained in code from a remote source may be deliberately given the same identifier as a routine supplied by a trusted source, in the hope that the computer executing the routine will erroneously grant it the same rights as the identically named trusted routine.
When the same identifier is used for routines from more than one source, an ambiguity arises as to which routine is being specified. If the wrong routine is invoked, security mechanisms may be circumvented and the security of a computer system's resources may be compromised.
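One way to resolve such ambiguities is to qualify every identifier by the source that supplied it, so that two routines given the same name by different sources remain distinct. The sketch below is a toy model of this idea; the registry class and the routine names are hypothetical.

```python
# Toy sketch of resolving identifier ambiguity by partitioning the
# namespace per source: routines are keyed by (source, identifier),
# so remote code cannot shadow a trusted routine of the same name.

class Registry:
    def __init__(self):
        self._routines = {}  # (source, identifier) -> routine

    def register(self, source, identifier, routine):
        self._routines[(source, identifier)] = routine

    def resolve(self, source, identifier):
        # A caller resolves names only within its own source's
        # namespace; a same-named routine from another source is
        # never returned by mistake.
        return self._routines[(source, identifier)]
```

For example, a "transfer" routine registered by a trusted source and a "transfer" routine registered by a remote source occupy separate namespace entries, so neither can be invoked in place of the other.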
Based on the foregoing, it is clearly desirable to provide a system and method for assigning permissions to code from various sources appropriate for the security of the computer system executing the code. It is further desirable to provide a mechanism for resolving ambiguities among identifiers used in code in a manner that ensures the security of the computer system.