Systems such as the Telcordia Technologies, Inc. ConfigAssure system, described in Sanjai Narain, Vikram Kaul, Gary Levin, and Sharad Malik, "Declarative Infrastructure Configuration Synthesis and Debugging," Journal of Network and Systems Management, Special Issue on Security Configuration, eds. Ehab Al-Shaer, Charles Kalmanek, Felix Wu, Springer Verlag, 2008, which is incorporated herein by reference, have been developed to solve fundamental configuration problems, namely specification, synthesis, diagnosis, and repair. However, even when the final configuration is known, all configuration parameters cannot, in general, be changed to their new values concurrently. The reconfiguration planning problem is to compute the order in which these parameters should be changed so that a given invariant is never falsified during the transition. Compounding the hardness of this problem is the fact that no reconfiguration plan may exist for a given final configuration. Furthermore, parameters sometimes need to assume intermediate values, not just their initial and final values, before a plan can be constructed.
In artificial intelligence (AI) research, a distinction has traditionally been made between planning, in which choices must be made about which actions to take, and scheduling, in which the set of actions to be performed is predetermined, but their sequence must be calculated. Most realistic problems, however, involve both planning and scheduling, so the two fields often overlap considerably.
Classical planning problems define an initial state, a goal, and a set of actions that may be performed. These actions specify a set of preconditions that must be true before the action may take place, and a set of effects that will be true after the action is completed. A solution to the problem is a sequence of actions to be performed such that the preconditions of each action are met and the effects of the final action make the goal true.
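The definition above can be made concrete with a short sketch. The following Python fragment (a hypothetical STRIPS-style encoding; the two-action domain and all names are invented for illustration) checks whether a sequence of actions is a valid solution to a planning problem:

```python
from typing import NamedTuple, FrozenSet

class Action(NamedTuple):
    name: str
    preconditions: FrozenSet[str]
    add_effects: FrozenSet[str]
    del_effects: FrozenSet[str]

def validate_plan(initial, goal, plan):
    """Check that each action's preconditions hold when it is applied,
    and that the state after the final action satisfies the goal."""
    state = set(initial)
    for action in plan:
        if not action.preconditions <= state:
            return False
        state -= action.del_effects
        state |= action.add_effects
    return goal <= state

# Toy domain: bring an interface up, then enable routing over it.
up = Action("up", frozenset(), frozenset({"if_up"}), frozenset())
route = Action("route", frozenset({"if_up"}), frozenset({"routing"}), frozenset())

print(validate_plan(set(), {"routing"}, [up, route]))   # True
print(validate_plan(set(), {"routing"}, [route, up]))   # False: precondition unmet
```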
Because the number of possible sequences of actions in a planning problem grows very quickly, efficient techniques for solving planning problems have been the focus of much of the work in this area. The most obvious strategy is forward state space search (FSS), in which the planning algorithm begins with the initial state of the problem and picks an action whose preconditions are satisfied as the first action in the solution sequence. This process continues until the goal state is reached. Since the number of possible actions at any state is large, however, the sheer size of the state space often overwhelms the FSS technique unless domain-specific heuristics can be used to guide the search. Most classical planning work has employed goal-directed search, in which the planner works backwards from the goal state. Starting from the goal, an action is chosen that can accomplish the goal, and its preconditions are added to the new state as subgoals, which must in turn be accomplished by other actions. This process continues until the set of subgoals is a subset of the initial state. While goal-directed search can avoid the huge search space required in the forward search case, the space is still large enough that goal-directed planners can solve very few practical problems. More recent work in classical planning has focused on an algorithm named Graphplan. Graphplan performs a kind of reachability analysis that allows for a drastic reduction in the size of the search space. Starting from the initial state, Graphplan determines, using the actions provided, which conditions are possible (or reachable) after the first step, the second, and so on. This analysis allows the algorithm to determine the minimum size of a correct plan before actually computing it, and has been shown to significantly outperform other techniques.
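As an illustration of forward state space search, the following Python sketch (the two-action toy domain and all names are invented for illustration) performs a breadth-first search from the initial state, applying any action whose preconditions hold:

```python
from collections import deque
from typing import NamedTuple, FrozenSet

class Action(NamedTuple):
    name: str
    pre: FrozenSet[str]
    add: FrozenSet[str]
    delete: FrozenSet[str]

def forward_search(initial, goal, actions):
    """Breadth-first forward state-space search: starting from the
    initial state, repeatedly apply any applicable action until a
    state satisfying the goal is reached."""
    start = frozenset(initial)
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, plan = frontier.popleft()
        if goal <= state:
            return plan
        for a in actions:
            if a.pre <= state:
                nxt = (state - a.delete) | a.add
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, plan + [a.name]))
    return None  # no plan exists

up = Action("up", frozenset(), frozenset({"if_up"}), frozenset())
route = Action("route", frozenset({"if_up"}), frozenset({"routing"}), frozenset())
print(forward_search(set(), {"routing"}, [up, route]))  # ['up', 'route']
```

The `seen` set is what keeps the search finite; without heuristics, however, this frontier grows with the full state space, which is the scaling problem noted above.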
Planning as satisfiability is a technique that converts a planning problem to a Boolean satisfiability (SAT) problem; the recent success of SAT solvers has allowed this strategy to approach the performance of Graphplan. Beginning with a very short planning length, the encoded SAT problem is solved; if it is unsatisfiable, the length is increased until it becomes satisfiable, at which time the propositional assignment can be translated to a plan.
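To make the iterative-deepening structure concrete, the following Python sketch applies it to a small reconfiguration instance of the kind discussed later in this section (a route that must not be active unless its tunnel is up). For brevity, a brute-force enumeration of assignments stands in for a real SAT solver, and the encoding and names are illustrative:

```python
from itertools import product

def reachable_at_length(k):
    """Check whether a k-step plan exists: route and tunnel start at 0,
    must end at 1, at most one variable changes per step, and the
    invariant (route = 1 implies tunnel = 1) holds at every step.
    Brute-force enumeration stands in for a real SAT solver here."""
    # A candidate model assigns each variable a value at each of k+1 times.
    for bits in product([0, 1], repeat=2 * (k + 1)):
        route, tunnel = bits[0::2], bits[1::2]
        ok = (route[0] == 0 and tunnel[0] == 0 and          # initial state
              route[k] == 1 and tunnel[k] == 1 and          # goal state
              all(not r or t for r, t in zip(route, tunnel)) and  # invariant
              all((route[t] != route[t + 1]) + (tunnel[t] != tunnel[t + 1]) <= 1
                  for t in range(k)))                       # one change per step
        if ok:
            return (route, tunnel)
    return None

# Increase the plan length until the encoding becomes satisfiable.
k = 0
while reachable_at_length(k) is None:
    k += 1
print(k)  # minimum plan length: 2 (change tunnel first, then route)
```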
Traditional scheduling problems are considered to be a special case of the planning problem in which the set of actions is predetermined, and in which it remains only to schedule these actions. In practice, scheduling problems overlap considerably with other areas of planning. Many scheduling problems, for example, involve the allocation of resources; choices about when and how to allocate these resources must be made by the scheduler. A more reasonable distinction between scheduling and planning, then, is that scheduling problems involve a small set of choices over a long and possibly complex schedule, while planning problems involve a possibly huge set of choices, many of which may interact in complex ways, over a much shorter amount of time. Scheduling problems generally focus on ordering, while planning problems focus on choices.
AI research in scheduling tends towards solutions to the general scheduling problem, in contrast to operations research, which develops specialized techniques for specific classes of scheduling problems (e.g. flow-shop, job-shop, and sports scheduling).
The most common approach to solving the general scheduling problem is to represent it as a constraint satisfaction problem. Two main possibilities for this encoding have been explored: the assignment of a start time to each task, and the ordering of tasks without regard to concrete times. If the first option is taken, constraints representing the restrictions on resources and ordering become constraints on start times; in the case of the second option, they become constraints on the relative orderings of two actions. The latter approach is most commonly used in recent work, as it reduces the search space considerably.
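As a sketch of the second encoding, the following Python fragment (task names and constraints are invented for illustration) searches for a total order of tasks satisfying a set of relative-ordering constraints, by brute force rather than constraint propagation:

```python
from itertools import permutations

def order_tasks(tasks, before):
    """Treat scheduling as a constraint problem over relative orderings:
    find a total order of tasks satisfying each (a, b) constraint,
    meaning 'a must precede b'. Brute force, for illustration only."""
    for order in permutations(tasks):
        pos = {t: i for i, t in enumerate(order)}
        if all(pos[a] < pos[b] for a, b in before):
            return list(order)
    return None  # constraints are unsatisfiable

# Hypothetical maintenance tasks with precedence constraints.
tasks = ["backup", "patch", "reboot", "verify"]
before = [("backup", "patch"), ("patch", "reboot"), ("reboot", "verify")]
print(order_tasks(tasks, before))  # ['backup', 'patch', 'reboot', 'verify']
```

Note that the constraints never mention concrete start times, which is precisely why this encoding's search space is so much smaller than one over explicit schedules.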
Scheduling as satisfiability, like planning as satisfiability, has recently become a very efficient approach to solving scheduling problems. While the limitations of Boolean satisfiability make it difficult to represent arithmetic relations and functions, the speed of modern SAT solvers makes the translation to Boolean formulas an attractive option.
A simple, non-SAT-based reconfiguration planning algorithm was also developed and explored. It uses Prolog to set up the generation of all permutations of configuration variables. The setup is such that if the invariant is falsified by an initial segment of a permutation, then no permutation of the remaining segment is generated. The algorithm efficiently solved some problems of significantly larger size than did the SAT-based algorithm.
However, the performance of the Prolog-based algorithm is critically dependent on the order in which configuration variables are declared as Prolog facts, and it is non-trivial for the specification writer to find an efficient ordering. The algorithm also does not address the problem of finding a final configuration for which a reconfiguration plan exists, nor does it allow configuration variables to take on intermediate values.
The algorithm is defined in just 12 lines of formatted Prolog code. The first rule below states that if the list of configuration variables is empty, then the current plan O is the final plan. The second rule states that if the list T is non-empty, then non-deterministically remove a variable X from it to produce Tp, and check whether the invariant is true with X appended to the front of the current plan O. If so, then compute the plan for Tp with [X|O] as the current plan.
plan([ ], O, O).
plan(T, O, Op) :-
    rem(X, T, Tp),
    invariant([X|O]),
    plan(Tp, [X|O], Op).
The next two rules remove an element X from the list in the second argument and compute the remainder in the third argument.
rem(X, [X|R], R).
rem(X, [A|R], [A|Rp]) :- rem(X, R, Rp).
The next two rules compute the value Y of a configuration variable X after a sequence of variable changes O. The first states that if X is a member of O, then Y is its final value. The second states that if not, then Y is simply the initial value of X. It is assumed that initial and final values are defined by means of Prolog facts, each of the form
initial_and_final_value(_, _, _).

val(O, X, Y) :-
    member(X, O), !,
    initial_and_final_value(X, _, Y).
val(_, X, Y) :-
    initial_and_final_value(X, Y, _).
For the route and tunnel set up example below, definitions of invariant and initial_and_final_value are as follows:
invariant(L) :-
    or(not(val(L, route, 1)), val(L, tunnel, 1)).

initial_and_final_value(route, 0, 1).
initial_and_final_value(tunnel, 0, 1).
Now, the Prolog query plan([tunnel, route], [ ], L) returns L = [route, tunnel], where the variables are to be changed in reverse order, i.e., tunnel first and then route.
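For readers less familiar with Prolog, the algorithm can be transliterated into Python. The following sketch (a direct translation, with illustrative names) reproduces the route and tunnel example; as in the Prolog version, a permutation is abandoned as soon as an initial segment of it falsifies the invariant:

```python
# Facts of the form initial_and_final_value(Variable, Initial, Final).
initial_and_final_value = {"route": (0, 1), "tunnel": (0, 1)}

def val(done, var):
    """Value of var after the variables in done have been changed:
    the final value if var has already changed, else its initial value."""
    init, final = initial_and_final_value[var]
    return final if var in done else init

def invariant(done):
    # or(not(val(L, route, 1)), val(L, tunnel, 1))
    return (not val(done, "route")) or bool(val(done, "tunnel"))

def plan(todo, done):
    """Translation of plan/3: pick a variable, check the invariant with
    it prepended to the current plan, and recurse on the rest. If the
    invariant fails, no permutation of the remaining segment is tried."""
    if not todo:
        return done
    for i, x in enumerate(todo):
        step = [x] + done
        if invariant(step):
            result = plan(todo[:i] + todo[i + 1:], step)
            if result is not None:
                return result
    return None

print(plan(["tunnel", "route"], []))  # ['route', 'tunnel']: tunnel changes first
```

As in the Prolog query, the returned list reads in reverse order of change, so tunnel is brought up before the route that depends on it.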