There is a need for information-flow systems that allow for expressive policy specifications. For example, in existing systems such as Java and the Common Language Runtime (CLR), integrity levels are represented as sets of permissions, as described, for example, in Marco Pistoia, Anindya Banerjee, and David A. Naumann, Beyond Stack Inspection: A Unified Access Control and Information Flow Security Model, 28th IEEE Symposium on Security and Privacy, pages 149-163, Oakland, Calif., USA, May 2007. Each permission specifies which resources it guards. Permissions are assigned to code by the class loader that loaded that code. Not all class loaders are equally trusted: every program can implement its own class loader, which may then assign arbitrary permissions to every class it loads. In particular, a partially trusted class loader has the power to make the classes it loads completely trusted by assigning them AllPermission. Therefore, Li Gong, Gary Ellison, and Mary Dageforde, Inside Java 2 Platform Security: Architecture, API Design, and Implementation, Addison-Wesley, Reading, Mass., USA, second edition, May 2003, and Marco Pistoia, Duane Reller, Deepak Gupta, Milind Nagnur, and Ashok K. Ramani, Java 2 Network Security, Prentice Hall PTR, Upper Saddle River, N.J., USA, second edition, August 1999, emphasized that partially trusted class loaders do not exist: whoever has the power to create a new class loader is implicitly granted AllPermission.
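The following Java sketch illustrates why a class loader is implicitly all-powerful: any loader may associate the classes it defines with a ProtectionDomain of its own choosing, and a domain holding AllPermission implies every other permission. The class and method names here are illustrative, not part of any cited system.

```java
import java.io.FilePermission;
import java.security.AllPermission;
import java.security.Permissions;
import java.security.ProtectionDomain;

public class LoaderTrust {
    // A class loader chooses the ProtectionDomain for the classes it
    // defines; granting AllPermission here makes every such class
    // completely trusted, regardless of how trusted the loader itself is.
    static ProtectionDomain allPowerfulDomain() {
        Permissions perms = new Permissions();
        perms.add(new AllPermission());
        return new ProtectionDomain(null, perms);
    }

    public static void main(String[] args) {
        ProtectionDomain d = allPowerfulDomain();
        // AllPermission implies any concrete permission, e.g. file access:
        System.out.println(d.implies(new FilePermission("/etc/passwd", "read")));
    }
}
```

A loader would pass such a domain to defineClass, which is why the cited works conclude that creating a class loader amounts to holding AllPermission.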
A fundamental problem is that extant information-flow systems are insufficiently expressive. For example, the problem above is caused by the inability to consider information flow itself as information. The fact that a class C has been granted an integrity level R by a principal S should be trusted no more than S. Therefore, it is crucial that R be assigned the integrity level S of the class loader that assigned R to C. The statement that C was granted R should be trusted as much as S is trusted. In the sequel, this is written as S[R][C], using the framing notation of Cedric Fournet and Andrew D. Gordon, Stack Inspection: Theory and Variants, Proceedings of the 29th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL 2002), pages 307-318, Portland, Oreg., USA, January 2002, ACM Press. Frame R denotes the integrity level of C, and frame S denotes the integrity level of R[C].
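Nested framing can be sketched as a simple recursive data structure: a value paired with the principal that produced it, where the framed value may itself be framed. This is an illustrative model only (the class name Framed is hypothetical); its printed form S[R[C]] corresponds to the S[R][C] notation above, with S framing the judgment R[C].

```java
// Hypothetical sketch of nested framing: a value carries the integrity
// level (principal) that produced it, and a framed value can itself be
// framed by the principal that produced the framing.
final class Framed<T> {
    final String frame;  // integrity level, e.g. a principal name
    final T value;
    Framed(String frame, T value) { this.frame = frame; this.value = value; }
    @Override public String toString() { return frame + "[" + value + "]"; }
}

public class FramingDemo {
    public static void main(String[] args) {
        // Class loader S granted integrity level R to class C:
        Framed<Framed<String>> grant = new Framed<>("S", new Framed<>("R", "C"));
        System.out.println(grant);
    }
}
```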
Another lack of expressivity, common in existing systems, is the inability to track influences on information-flow decisions made by the enforcement mechanism itself. For example, in the standard Java and CLR security models, it is impossible to define partially trusted integrity enforcement mechanisms. Once a security manager is installed, it has the power to enforce any policy it desires by overriding the system administrator's policy decisions and by making any security check succeed. This permits granting AllPermission to arbitrary code. The fundamental problem in these models is that any decision made by an integrity enforcement mechanism is considered completely trusted. For example, a security manager with trust level S returning true on an information-flow check should actually return S[true], thereby recording S's influence on true.
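A partially trusted enforcement mechanism of the kind described above can be sketched as a checker that never returns a bare boolean: every decision comes back framed with the checker's own trust level, so a consumer can discount it accordingly. The class names below are hypothetical illustrations, not APIs of Java or the CLR.

```java
// Hypothetical sketch: an enforcement decision framed with the trust
// level S of the mechanism that made it, so S's influence is recorded
// instead of the decision being taken as completely trusted.
final class FramedBool {
    final String frame;
    final boolean value;
    FramedBool(String frame, boolean value) { this.frame = frame; this.value = value; }
    @Override public String toString() { return frame + "[" + value + "]"; }
}

class PartiallyTrustedChecker {
    private final String trustLevel;  // e.g. "S"
    PartiallyTrustedChecker(String trustLevel) { this.trustLevel = trustLevel; }

    // Instead of a bare boolean, the check returns S[result].
    FramedBool checkFlow(boolean rawDecision) {
        return new FramedBool(trustLevel, rawDecision);
    }
}

public class FramedCheckDemo {
    public static void main(String[] args) {
        PartiallyTrustedChecker sm = new PartiallyTrustedChecker("S");
        System.out.println(sm.checkFlow(true));
    }
}
```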
Systems make decisions based on trust or secrecy levels. In an integrity domain, if an intruder can trick a program into using a value with a specific trust level, and the program branches over that trust level, for example, in Java's checkPermission and the CLR's Demand, the intruder will have caused an integrity violation, not through the value itself, but through the trust level of that value. Consider the case of a library method m with parameter A a. If m invokes a.foo, an intruder could inject an untrusted version of a.foo, U[a.foo], into the program by simply passing an instance of a subclass of A that is trusted only up to U. Alternatively, the intruder could decide to inject a trusted version of a.foo, T[a.foo]. At the point at which an authorization check involving a.foo is made, for example, through stack inspection in Java and the CLR, performed while a.foo is on the stack, failure and success of the check depend on the frame, U or T, of a.foo. Therefore, an attacker could use the frame of a.foo as a form of storage channel to make the program take a certain branch. The fundamental problem here is that the frame of a.foo is not itself framed with the integrity level of the intruder that made the decision of which version of a.foo to pass. Of course, this problem can affect more than two levels of integrity. There is, therefore, a need for potentially unbounded levels of framing.
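The storage channel can be made concrete with a small sketch: an authorization check whose outcome depends only on the frame of the call, so an intruder I who chooses which frame to inject controls which branch is taken. Framing the injected value again, as I[U[a.foo]], records that influence. All names here (Labeled, authorize, the levels I, U, T) are illustrative assumptions, not the actual stack-inspection implementation.

```java
// Hypothetical sketch: the frame of a.foo as a storage channel.
final class Labeled<T> {
    final String frame;
    final T value;
    Labeled(String frame, T value) { this.frame = frame; this.value = value; }
    @Override public String toString() { return frame + "[" + value + "]"; }
}

public class StorageChannelDemo {
    // Stack-inspection-style check: succeeds only for the trusted frame T.
    static boolean authorize(Labeled<String> call) {
        return call.frame.equals("T");
    }

    public static void main(String[] args) {
        // The intruder I decides which version of a.foo reaches the check;
        // the outer frame I records that the inner frame was I's choice.
        Labeled<Labeled<String>> injected =
            new Labeled<>("I", new Labeled<>("U", "a.foo"));
        // The branch taken depends solely on the inner frame chosen by I:
        System.out.println(authorize(injected.value));
        System.out.println(authorize(new Labeled<>("T", "a.foo")));
    }
}
```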
In a confidentiality domain, the fact that a given value v has a particular secrecy level S could itself be confidential information, perhaps with a secrecy level R≠S. The release of R may constitute as much of a confidentiality violation as the release of v. Secrecy and trust levels can be nested further and can also be interdependent. For example, a secrecy level can have an integrity level, and that integrity level can in turn have a secrecy level. Further dimensions of information flow that go beyond integrity and confidentiality may be involved in a policy decision, and these multiple dimensions can be interdependent. There is, therefore, a need for specifying and enforcing information-flow policies with multiple, interdependent dimensions.
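Interdependent dimensions can be sketched as labels that themselves carry labels along each dimension: a value's secrecy level S may have its own secrecy level R, and any level may also carry an integrity label, and so on recursively. The Label class below is a hypothetical illustration of this nesting, not a policy language.

```java
// Hypothetical sketch: a label along one dimension can itself be labeled
// along multiple dimensions, and the nesting can continue recursively.
final class Label {
    final String name;
    final Label secrecy;    // secrecy of this label itself (null if none)
    final Label integrity;  // integrity of this label itself (null if none)
    Label(String name, Label secrecy, Label integrity) {
        this.name = name;
        this.secrecy = secrecy;
        this.integrity = integrity;
    }
}

public class MultiDimDemo {
    public static void main(String[] args) {
        // v has secrecy level S; the fact that v's level is S is itself
        // secret at level R, with R different from S:
        Label r = new Label("R", null, null);
        Label s = new Label("S", r, null);
        System.out.println("secrecy(v) = " + s.name
            + ", secrecy(secrecy(v)) = " + s.secrecy.name);
    }
}
```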