In an extensible software system where subjects and pieces of code are trusted to varying degrees, it is both important and challenging to manage the permissions of running programs in order to avoid security holes. One particular difficulty that has attracted considerable attention is the so-called “confused deputy” problem, which has been addressed by the technique of stack inspection. The present invention is by no means limited to methods for addressing the confused deputy problem; nonetheless, in order to appreciate the background of the present invention, it is helpful to understand this problem and attempted solutions.
Confused Deputy Problem and Stack Inspection
The confused deputy problem may be described as follows. Suppose that a piece of untrusted code calls a piece of trusted code, such as a library function, perhaps passing some unexpected values as arguments to the call, or in an unexpected execution state. The trusted code may invoke some sensitive, security-critical operations, for example, operations on an underlying file system. It is important that these operations be invoked with the “right” level of privilege, taking into account that the call is the result of actions of untrusted code. Moreover, this security guarantee should be achieved under the constraint that we would not expect every library function to be rewritten; only a fraction of the code may ever be security-aware.
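To make the scenario concrete, the following minimal Python sketch (all names hypothetical, not drawn from any real system) shows a trusted "deputy" that writes a log entry to a path chosen by its caller. Because the deputy invokes the sensitive operation with its own authority, an untrusted caller can redirect the write to a file the caller itself could never touch:

```python
# Minimal confused-deputy sketch (illustrative only; all names hypothetical).
# The "deputy" is trusted library code that performs a sensitive operation --
# here, a simulated file-system write -- on behalf of its caller.

PROTECTED_FILES = {"/etc/passwd"}   # files only trusted code may touch

def write_file(path, data, caller_trusted):
    # The sensitive operation checks only the authority of its immediate caller.
    if path in PROTECTED_FILES and not caller_trusted:
        raise PermissionError(path)
    return f"wrote {data!r} to {path}"

def deputy_log(path, message):
    # Trusted library code: it invokes the sensitive operation with its own
    # (trusted) authority, regardless of who asked it to.
    return write_file(path, message, caller_trusted=True)

# Untrusted code cannot write the protected file directly ...
try:
    write_file("/etc/passwd", "evil", caller_trusted=False)
    direct_attempt_blocked = False
except PermissionError:
    direct_attempt_blocked = True

# ... but it can trick the deputy into doing so: the deputy is "confused"
# about whose authority the request should carry.
via_deputy = deputy_log("/etc/passwd", "evil")
```

The deputy here was coded without security concerns in mind, which is precisely the constraint noted above: only a fraction of the code may ever be security-aware.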
One approach to addressing this problem is the technique called “stack inspection,” which is presently embodied in the CLR (Common Language Runtime) and in Java Virtual Machines. Following this technique, each piece of code is statically (that is, before execution) associated with an upper bound on its permissions, typically determined by considering the origin of the code. For example, whenever a piece of code is loaded from an untrusted Internet site, it may be decided that this piece will have at most the right to access temporary files, but will have no other rights during execution. At run-time, the permissions of a piece of code are the intersection of all the static permissions of the pieces of code on the stack. Thus, the run-time permissions associated with a sensitive request made by a trusted piece of code after it is called by an untrusted piece of code include only permissions granted statically to both pieces of code. An exception to this policy is made for situations in which a trusted piece of code explicitly amplifies the run-time permissions. Such amplifications are dangerous, so they should only be done after adequate checking.
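The intersection rule and explicit amplification can be modeled in a few lines of Python. This is a sketch under simplifying assumptions; the permission names and the API are hypothetical, not the actual CLR or JVM interfaces (which are embodied by mechanisms such as the CLR's permission asserts and Java's privileged scopes):

```python
# Sketch of stack inspection (hypothetical model, not the real CLR/JVM API).
# Each piece of code carries a static upper bound on its permissions; the
# effective permissions are the intersection over the frames on the stack,
# except that an explicit amplification stops the walk at the amplifying frame.

FULL = frozenset({"read_temp", "read_files", "write_files"})

def effective_permissions(stack):
    # stack: list of (static_permissions, amplifies) pairs, caller first.
    perms = FULL
    for static_perms, amplifies in reversed(stack):   # walk from the top frame
        perms &= static_perms
        if amplifies:
            break   # amplification: ignore frames further down the stack
    return perms

untrusted = (frozenset({"read_temp"}), False)   # e.g. code from the Internet
library   = (FULL, False)                       # trusted library code
amplifier = (FULL, True)                        # trusted code that amplifies

# A trusted library called by untrusted code: only the intersection remains.
restricted = effective_permissions([untrusted, library])

# An amplifying trusted frame restores its full static permissions,
# which is why amplification is dangerous without adequate checking.
amplified = effective_permissions([untrusted, amplifier])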
Although the stack inspection technique has been widely deployed, it has a number of shortcomings. One of the main ones is that it attempts to protect callees from their callers, but it ignores the fact that, symmetrically, callers may be endangered by their callees. (Similar issues arise in connection with other flows of control such as exception handling, callbacks, and higher-order programming.) If A calls B, B returns (perhaps with an unexpected result or leaving the system in an unexpected state), and then A calls C, the call to C depends on the earlier call to B, and security may depend on tracking this dependency, which stack inspection ignores. In theory, one could argue that A should be responsible for checking that B is “good” or that it does not do anything “bad”. However, this checking is difficult and impractical, for a variety of reasons. In particular, A may be a library function, which was coded without these security concerns in mind, and which we may not wish to recode. (Indeed, one of the appeals of stack inspection is that it avoids some security problems without the need to recode such functions.) Moreover, the call to B may be a virtual call (that is, a dynamically dispatched call), whose target (B) is hard to determine until run-time.
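Using the same style of model, the A-calls-B-then-C sequence shows why the stack is the wrong record of dependencies: by the time A calls C, B's frame has been popped, so the intersection no longer mentions B at all. (This is a hypothetical sketch; the permission names are illustrative.)

```python
# Sketch of the shortcoming: stack inspection forgets code that has returned.
# (Hypothetical model; permission names are illustrative.)

FULL = frozenset({"read_temp", "write_files"})

def stack_perms(stack):
    # Plain stack inspection: intersect the static permissions of the
    # frames currently on the stack.
    perms = FULL
    for p in stack:
        perms &= p
    return perms

A = FULL                        # trusted caller, e.g. a library function
B = frozenset({"read_temp"})    # untrusted callee, e.g. a virtual call target

# While B runs, the intersection correctly reflects B's presence.
during_B = stack_perms([A, B])

# After B returns, A makes a sensitive call (to C): B's frame is gone,
# and stack inspection grants A's full permissions, even though the call
# may depend on B's result or on state that B left behind.
after_B = stack_perms([A])
```

The model makes the asymmetry plain: the mechanism restricts A only while B is on the stack, although the danger B poses to A persists after B returns.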
This shortcoming of stack inspection is a real source of errors with serious security ramifications. From a more fundamental perspective, we can argue that stack inspection addresses only one aspect of the “confused deputy” problem. Other techniques are needed in order to achieve a more complete solution, with satisfactory practical and theoretical properties.
Stack inspection presents other difficulties because of its somewhat exotic, ad hoc character. It is a unique mechanism, separate and distinct from other security mechanisms such as may be provided by an underlying operating system. As a result, it is hard to translate the security state of a virtual machine that uses stack inspection into a corresponding state that would be meaningful at the operating system level. Such a translation is often desirable when a thread in the virtual machine makes a call outside the virtual machine (a local system call, or even a call across a network). In another direction, it is hard to relate stack inspection to execution models for certain high-level languages. For example, programmers in functional languages such as Haskell are not encouraged to think in terms of stacks, so the stacks of the CLR implementation are not an appropriate abstraction for their understanding of security. Finally, the fact that stack inspection is directly related to a particular stack-based execution strategy complicates and hinders optimizations that would affect the stack.
In light of these difficulties and shortcomings, we should look for alternatives to stack inspection. An interesting idea is to rely on information-flow control, of the kind studied in the security literature (particularly in the context of multilevel security). Unfortunately, information-flow control has rarely been practical, and it is not clear whether it can be useful in the CLR and related systems. Nevertheless, it provides an interesting point of comparison and theoretical background; the work of Fournet and Gordon explores the application of techniques directly based on information-flow control (see Fournet and Gordon, Stack Inspection: Theory and Variants, in 29th ACM Symposium on Principles of Programming Languages (POPL '02), pp. 307–318, January 2002).