The use of business rules and policies to externalize business and operational logic from an application is an important concept and approach to building large business applications and to new areas such as self-managing systems or autonomic computing systems. Business rules and policies are statements that are intended to be readable and modifiable by non-technical users and executable by an underlying mechanism such as a rule engine or a Java Virtual Machine (JVM), allowing application logic to be authored and modified external to the application.
One key aspect of using these business rules or policies is the ability to specify a priority for each rule in a set of business rules. A business rule set is a collection of rules selected and arranged to achieve a desired goal. Assigning a priority to each rule contained in the rule set controls the sequence of execution of those rules in the rule set. Typically, priorities are initially established and assigned by a rule author; however, the priorities of the rules can subsequently be modified in accordance with application-specific parameters, e.g., different situations and execution environments.
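The notion of a rule set whose execution order is governed by per-rule priorities can be sketched as follows. This is a minimal illustration only; the class names, fields, and sample rules are hypothetical and do not reflect any particular rule engine.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch of a priority-ordered rule set; names and
# sample rules are illustrative, not taken from any specific engine.
@dataclass
class Rule:
    name: str
    condition: Callable[[dict], bool]   # the "if" part, tested against facts
    action: Callable[[dict], None]      # the "then" part, updates the facts
    priority: int = 0                   # higher value fires first

class RuleSet:
    def __init__(self, rules):
        self.rules = list(rules)

    def execute(self, facts: dict) -> list:
        fired = []
        # The assigned priority controls the sequence of execution
        # within the rule set.
        for rule in sorted(self.rules, key=lambda r: r.priority, reverse=True):
            if rule.condition(facts):
                rule.action(facts)
                fired.append(rule.name)
        return fired

# Because the rules live outside the application, a rule author can
# later raise or lower a priority to reorder execution without
# modifying application code.
rules = RuleSet([
    Rule("apply_discount", lambda f: f["total"] > 100,
         lambda f: f.update(total=f["total"] * 0.9), priority=1),
    Rule("add_shipping", lambda f: f["total"] < 200,
         lambda f: f.update(total=f["total"] + 10), priority=2),
])
order = rules.execute({"total": 150})
```

With these sample priorities, `add_shipping` (priority 2) fires before `apply_discount` (priority 1); swapping the two priority values would reverse the order, changing the final total without any change to the host application.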
The use of policy-based systems has become increasingly common. For example, the emerging areas of autonomic and on demand computing are accelerating the adoption of policy-based systems. As the requirements on policy-based systems become more complex, traditional approaches to the implementation of such systems, for example relying entirely on simple “if [condition] then [actions]” rules, become insufficient. New approaches to the design and implementation of policy-based systems have emerged, including goal policies, utility functions, data mining, reinforcement learning and planning.
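The contrast between a simple "if [condition] then [actions]" rule and one of the newer approaches, a utility-function policy, can be sketched briefly. The functions, thresholds, and weights below are hypothetical, chosen only to illustrate the difference in decision style.

```python
# Hypothetical sketch: a fixed if/then rule versus a utility-function
# policy. All names, thresholds, and weights are illustrative.

# Simple action policy: the decision is hard-coded in the condition.
def if_then_policy(cpu_load: float) -> str:
    if cpu_load > 0.8:
        return "add_server"
    return "no_action"

# Utility-function policy: every candidate outcome is scored, and the
# system selects the action whose predicted state has highest utility.
def utility(response_time_ms: float, cost_per_hour: float) -> float:
    # Illustrative scoring: reward fast responses, penalize cost.
    return 1000.0 / response_time_ms - 0.1 * cost_per_hour

def utility_policy(candidates: dict) -> str:
    # candidates maps action name -> predicted (response_time, cost)
    return max(candidates, key=lambda a: utility(*candidates[a]))

best = utility_policy({
    "no_action":  (250.0, 10.0),   # slower but cheaper
    "add_server": (120.0, 15.0),   # faster but costlier
})
```

The if/then rule can only encode the author's threshold, whereas the utility-function policy trades off competing objectives (here, response time against cost) and can justify its choice by comparing scores, which is one reason such policies suit more complex requirements.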
One issue regarding the use or implementation of policy-based systems is establishing the same level of trust among users and system administrators for policy-based systems as exists for traditional systems. Unless policy-based systems are trusted at least as much as traditional systems, broader acceptance of policy-based systems will be hindered. In addition, a system administrator needs to know that a policy-based system will help the administrator's system perform better. Unfortunately, current approaches to the design and implementation of policy-based systems do nothing to reduce administrators' skepticism toward policy-based automation.
In general, trust can be viewed as an abstract concept that involves a complex combination of fundamental qualities such as reliability, competence, dependability, confidence and integrity. Research has been conducted in the area of multi-agent systems on the concept of trust. In this research, trust is defined quantitatively as the level of dependability and competence associated with a given software agent as compared to other similar software agents. As policy-based systems have evolved from relatively simple "if/then" rules to more sophisticated and powerful components that utilize goal and utility-function policies, data mining and reinforcement learning, among others, the level of trust associated with a given policy-based system has become an important factor in determining whether that system is used as an integral part of overall systems management. Information Technology (IT) managers are likely to be hesitant to entrust their entire IT operations to an autonomous policy-based system without first establishing a certain level of trust in that system. Therefore, trust between a policy-based system and the users of that system is needed to encourage adoption and implementation of a given policy-based system.
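The quantitative notion of trust described above, an agent's dependability and competence measured relative to similar agents, can be sketched as a simple score. The formula, weights, and agent statistics below are hypothetical, intended only to make the relative-comparison idea concrete.

```python
# Hypothetical sketch of a quantitative trust level: combine observed
# dependability (success rate) with competence (task accuracy), then
# normalize against similar agents. Weights and data are illustrative.
def trust_score(successes: int, attempts: int, accuracy: float,
                w_dependability: float = 0.5) -> float:
    dependability = successes / attempts if attempts else 0.0
    return w_dependability * dependability + (1 - w_dependability) * accuracy

def relative_trust(agents: dict) -> dict:
    # agents maps agent name -> (successes, attempts, accuracy)
    scores = {name: trust_score(*stats) for name, stats in agents.items()}
    top = max(scores.values()) or 1.0
    # Express each agent's trust relative to the best-performing peer,
    # mirroring the "as compared to other similar agents" definition.
    return {name: s / top for name, s in scores.items()}

ranks = relative_trust({
    "agent_a": (95, 100, 0.90),   # 95% dependable, 90% competent
    "agent_b": (60, 100, 0.70),   # 60% dependable, 70% competent
})
```

Under this sketch the most trusted peer scores 1.0 and others fall below it, giving an administrator a comparative basis for deciding how much autonomy to grant each agent.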
Current work on trust in policy-based systems has concentrated on user interface issues. In R. Barrett, People and Policies, Policies for Distributed Systems and Networks (2004), the necessity of gaining a user's trust is discussed, as are ways to make policy-based systems trustworthy. E. Kandogan and P. Maglio, Why Don't You Trust Me Anymore? Or the Role of Trust in Troubleshooting Activity of System Administrators, Conference on Human Computer Interaction (2003), addresses the role of trust in the work of system administrators. Again, the majority of this work focuses on user interface matters, rather than on the design and operation of the system itself. Very few studies have been conducted on the issue of trust between users and software systems where the actions of the software systems are determined via prescribed policies or other autonomous mechanisms. In addition, no general tools are available that allow a policy system to earn a user's trust.