A significant objective of program analysis and verification is to determine that software performs its intended functions correctly, to ensure that it performs no unintended functions, and to provide information about its quality and reliability. Theorem proving in the form of constraint solving is an important tool for software analysis and verification. Many program analysis and verification tasks involve checking the satisfiability of a set of integer linear arithmetic constraints. These constraints appear naturally when reasoning about arrays and scalar counters in programs. Moreover, such queries usually contain a mixture of other theories (e.g., uninterpreted functions) and quantifiers, in addition to linear arithmetic constraints.
As the problem of checking the satisfiability of a set of integer linear arithmetic constraints is NP-complete, there is a practical need to develop algorithms that effectively solve the instances of the problem that occur in practice. These algorithms should also be designed to operate in a setting where the queries involve a combination of ground theories, and possibly quantifiers.
In most verification benchmarks, the linear arithmetic constraints are dominated by simple difference constraints of the form x≦y+c, where x and y are variables and c is a constant. These constraints arise naturally in many applications. For instance, array bounds checks in a program and timing constraints in job scheduling can both be specified as difference constraints. It has also been observed that the arithmetic constraints arising in verification and program analysis consist mostly of difference constraints.
Efficient polynomial-time algorithms exist for deciding the satisfiability of difference constraints. For a given theory T, a decision procedure for T checks whether a formula φ in the theory is satisfiable, i.e., whether it is possible to assign values to the symbols in φ, consistent with T, such that φ evaluates to true. However, these algorithms become incomplete in the presence of even a few non-difference constraints. A set of linear arithmetic constraints may be defined as sparse linear arithmetic (“SLA”) constraints when the fraction of non-difference constraints is very small compared to the fraction of difference constraints. The few non-difference constraints in an SLA problem make it difficult to exploit the efficiency of difference constraint solvers for solving many program analysis queries. At present, there is a need for efficient algorithms for solving SLA constraints.
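The distinction between difference and non-difference constraints can be made concrete with a short sketch. The representation below (a constraint as a coefficient map plus a bound, encoding Σ coeffᵢ·xᵢ ≦ c) is illustrative, not taken from any particular solver:

```python
# Sketch: classifying linear constraints as difference vs. non-difference.
# A constraint is represented as ({var: coefficient}, bound), encoding
# sum(coeff * var) <= bound. The representation is illustrative.

def is_difference(coeffs):
    """A difference constraint has the form x - y <= c (coefficients +1
    and -1) or the one-variable forms x <= c / -x <= c."""
    nonzero = sorted(c for c in coeffs.values() if c != 0)
    return nonzero in ([-1, 1], [1], [-1])

def sla_fraction(constraints):
    """Fraction of non-difference constraints; for SLA problems this
    value is very small."""
    non_diff = sum(1 for (coeffs, _) in constraints
                   if not is_difference(coeffs))
    return non_diff / len(constraints)
```

For example, x − y ≦ 3 is a difference constraint, while 2x + y ≦ 5 is not; a benchmark containing one of each has an SLA fraction of 0.5.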
Moreover, decision procedures currently do not operate in isolation, but form part of a more complex system that can decide formulas involving symbols shared across multiple theories. In such a setting, a decision procedure has to support the following operations efficiently:
(i) Satisfiability Checking: Checking if a formula φ is satisfiable in the theory;
(ii) Model Generation: If a formula in the theory is satisfiable, finding values for the symbols that appear in the formula that make it satisfiable. This is important for applications that use theorem provers for test-case generation;
(iii) Equality Generation: The Nelson-Oppen framework for combining decision procedures requires that each theory produce the set of equalities over variables that are implied by the constraints;
(iv) Proof Generation: Proof generation can be used to certify the output of a theorem prover. Proofs are also used to construct conflict clauses efficiently in a lazy SAT-based theorem proving architecture.
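The four operations above can be summarized as an abstract interface. The following sketch is illustrative only; the class and method names are ours and do not correspond to any particular prover:

```python
# Illustrative sketch: the four operations a decision procedure must
# support when embedded in a combined theorem-proving framework.
from abc import ABC, abstractmethod

class DecisionProcedure(ABC):
    @abstractmethod
    def is_satisfiable(self, formula):
        """(i) Satisfiability checking: True iff formula is satisfiable."""

    @abstractmethod
    def model(self, formula):
        """(ii) Model generation: a satisfying assignment, if one exists."""

    @abstractmethod
    def implied_equalities(self, formula):
        """(iii) Equality generation: variable equalities implied by the
        constraints, as required by Nelson-Oppen combination."""

    @abstractmethod
    def proof(self, formula):
        """(iv) Proof generation: a checkable refutation when the
        formula is unsatisfiable."""
```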
A set of general linear arithmetic constraints can be solved using the Simplex algorithm. The Simplex algorithm is a known technique for numerical solution of the linear programming problem. The method uses the concept of a simplex, which is a polytope of N+1 vertices in N dimensions, e.g., a line segment on a line, a triangle on a plane, a tetrahedron in three-dimensional space and so forth. Details relating to the Simplex algorithm are set forth in G. Dantzig, “Linear programming and extensions,” Princeton University Press, Princeton N.J., 1963, which publication is incorporated by reference herein in its entirety.
Under the theory of difference constraints defined above, the atomic formula may be expressed as x1−x2 ⋈ c, where x1, x2 are variables, c is a constant and ⋈ is a placeholder for an operator in {≦, ≧, =}. Constraints of the form x ⋈ c may be converted to the above form by introducing a special vertex xorig to denote the origin, and expressing the constraint as x−xorig ⋈ c. The resultant system of difference constraints is equisatisfiable with the original set of constraints. Moreover, if ρ satisfies the resultant set of difference constraints, then a satisfying assignment ρ′ to the original set of constraints (including the x ⋈ c constraints) can be obtained by assigning ρ′(x)≐ρ(x)−ρ(xorig), for each variable x. A set of difference constraints (both over integers and rationals) can be decided in polynomial time using negative cycle detection (“NCD”) algorithms. These NCD algorithms detect negative weight cycles in a directed graph.
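The NCD approach described above can be sketched with a standard Bellman-Ford pass: each constraint x − y ≦ c becomes an edge y → x of weight c, an origin vertex with zero-weight edges to every variable plays the role of xorig, and the system is satisfiable exactly when the graph has no negative-weight cycle. The function and variable names are illustrative:

```python
# Sketch: deciding difference constraints x - y <= c by negative cycle
# detection (Bellman-Ford). Each constraint (x, y, c) becomes an edge
# y -> x with weight c; a negative-weight cycle means unsatisfiable.

def solve_difference_constraints(constraints, variables):
    """constraints: list of (x, y, c) encoding x - y <= c.
    Returns a satisfying assignment, or None if unsatisfiable."""
    ORIGIN = object()  # the special origin vertex x_orig described above
    edges = [(y, x, c) for (x, y, c) in constraints]
    edges += [(ORIGIN, v, 0) for v in variables]
    dist = {v: 0 for v in variables}
    dist[ORIGIN] = 0
    # Relax all edges |V| - 1 times (standard Bellman-Ford).
    for _ in range(len(dist) - 1):
        for (u, v, w) in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # One more pass: any further relaxation exposes a negative cycle.
    for (u, v, w) in edges:
        if dist[u] + w < dist[v]:
            return None  # unsatisfiable
    # Shortest-path distances, shifted by the origin, satisfy x - y <= c.
    return {v: dist[v] - dist[ORIGIN] for v in variables}
```

For instance, the pair x − y ≦ 1 and y − x ≦ −2 forms a cycle of weight −1 and is reported unsatisfiable, while x − y ≦ 1 alone yields a concrete assignment.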
Thus, algorithms exist for solving difference constraints using linear space. However, these solutions cannot be used to solve general linear arithmetic including both difference constraints and non-difference constraints.