Computational policies (hereafter referred to as policies) are machine-understandable representations of the rules that govern a computational agent's behavior. They span a variety of application domains, including security policies, privacy policies, user preferences, and workflow policies. Studies have shown that users generally have great difficulty specifying policies. While machine learning techniques have been used successfully to refine policies, such as in recommender systems or fraud detection systems, they are generally configured as “black boxes” that take control of the entire policy and severely restrict the ways in which the user can manipulate it.
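To make the notion of a machine-understandable policy concrete, the following is a minimal illustrative sketch, not a representation prescribed by this document: a policy modeled as an ordered list of condition/decision rules evaluated against an access request. All names here (`Rule`, `evaluate`, the sample file-permission attributes) are hypothetical.

```python
# Minimal sketch: a policy as an ordered list of machine-evaluable rules.
# Each rule pairs a predicate over a request with a decision string.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Rule:
    description: str
    condition: Callable[[Dict[str, str]], bool]  # predicate over a request
    decision: str                                # e.g., "allow" or "deny"

def evaluate(policy: List[Rule], request: Dict[str, str], default: str = "deny") -> str:
    """Return the decision of the first rule whose condition matches,
    falling back to a default decision (first-match semantics)."""
    for rule in policy:
        if rule.condition(request):
            return rule.decision
    return default

# Example: a two-rule file-permission policy.
policy = [
    Rule("owners may write", lambda r: r["user"] == r["owner"], "allow"),
    Rule("anyone may read", lambda r: r["action"] == "read", "allow"),
]

print(evaluate(policy, {"user": "alice", "owner": "alice", "action": "write"}))  # allow
print(evaluate(policy, {"user": "bob", "owner": "alice", "action": "write"}))    # deny
```

Even in this toy form, the gap described above is visible: a user's intent ("only owners may write") must be translated into predicates and rule ordering, and a machine-learned refinement of such rules is opaque unless the user can inspect and edit them directly.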
A large and growing number of applications allow users to customize their policies, whether as system administrators, as end-users, or in other roles. From the network administrator maintaining complex and verbose firewall access control lists to the social networking (e.g., Facebook®) user struggling with the site's privacy settings, studies have consistently shown that novice and expert users alike find it difficult to effectively express and maintain such policies. In one study, for instance, test users asked to express file permission policies within the native Windows® XP interface achieved very low accuracy rates, reflecting a significant gap between the users' intended policies and the policies that they manage to express in policy specification languages and their associated interfaces.
Given this difficulty, it is highly desirable to support users in the tasks of policy specification and maintenance, with the aim of helping them narrow this gap. While a number of machine learning applications rely on simple forms of user feedback to improve their performance (e.g., spam filters or recommender systems employed by Amazon and Netflix), little work has been done to develop configurations of these techniques that support closer collaboration between machines and users. Most recommender systems base their recommendations on explicit and/or implicit user ratings of products or services the user has been presented. In these systems, however, the user has no visibility into the underlying policies upon which the system bases its recommendations; accordingly, those policies appear to the user as a black box. This makes it significantly more difficult for a user to modify the policy, whether because the policy does not yet adequately reflect the user's intent or because that intent has changed. The same limitation applies to environments where a policy is intended to capture the preferences of multiple users (e.g., multiple system administrators and/or end-users in a complex firewall deployment).