The common and popular notion of interval arithmetic is based on the fundamental premise that intervals are sets of numbers, and that arithmetic operations can be performed on these sets. This interpretation of interval arithmetic was initially advanced by Ramon Moore in 1957 and has more recently been promoted and developed by interval researchers such as Eldon Hansen, William Walster, Guy Steele and Luc Jaulin. This is the so-called “classical” interval arithmetic, and it is purely set-theoretical in nature.
A set-theoretical interval is a compact set of real numbers [a,b] such that a≤b. The classical interval arithmetic operations of addition, subtraction, multiplication and division combine two interval operands to produce an interval result such that every arithmetical combination of numbers belonging to the operands is contained in the interval result. This leads to the programming formulae made famous by classical interval analysis, which are discussed at length in the interval literature.
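The containment principle above can be sketched in a few lines of code. This is a minimal illustration of the classical interval operations, not a production implementation: real interval libraries use outward (directed) rounding so that containment is guaranteed under floating-point arithmetic, a detail omitted here.

```python
class Interval:
    """A set-theoretical interval [a, b] with a <= b (sketch; no directed rounding)."""

    def __init__(self, a, b):
        assert a <= b, "an interval [a, b] requires a <= b"
        self.a, self.b = a, b

    def __add__(self, other):
        # [a,b] + [c,d] = [a+c, b+d]
        return Interval(self.a + other.a, self.b + other.b)

    def __sub__(self, other):
        # [a,b] - [c,d] = [a-d, b-c]
        return Interval(self.a - other.b, self.b - other.a)

    def __mul__(self, other):
        # [a,b] * [c,d] spans the min and max of all four endpoint products
        p = (self.a * other.a, self.a * other.b,
             self.b * other.a, self.b * other.b)
        return Interval(min(p), max(p))

    def __truediv__(self, other):
        # [a,b] / [c,d] is defined only when 0 is not contained in [c,d]
        assert other.a > 0 or other.b < 0, "division by an interval containing 0"
        return self * Interval(1.0 / other.b, 1.0 / other.a)

    def __repr__(self):
        return f"[{self.a}, {self.b}]"

# Every x in [1,2] and y in [3,5] satisfies x + y in [4,7]:
print(Interval(1, 2) + Interval(3, 5))
```

Each formula guarantees that any pointwise combination of numbers drawn from the two operands lands inside the result, which is the defining property of classical interval arithmetic described above.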
In 2001, Miguel Sainz and other members of the SIGLA/X group at the University of Girona, Spain, introduced a new branch of interval mathematics known as “modal intervals.” Unlike the classical view of an interval as a compact set of real numbers, the new modal mathematics considers an interval to be a quantified set of real numbers. A modal interval comprises a binary quantifier and a set-theoretical interval. Therefore, if Q is a quantifier and X′ is a purely set-theoretical interval, then X=(Q, X′) is a modal interval. For this reason, modal intervals are a true superset of the classical set-theoretical intervals.
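The pairing X=(Q, X′) can be made concrete with a small sketch. The quantifier names EXISTS and FORALL and the method names below are illustrative assumptions, not terminology from the source; the sketch only shows how a binary quantifier attached to a set-theoretical interval makes modal intervals a superset of classical ones.

```python
from dataclasses import dataclass

# The two values of the binary quantifier Q (names are illustrative).
EXISTS, FORALL = "E", "A"

@dataclass(frozen=True)
class ModalInterval:
    """A modal interval X = (Q, X'): a quantifier plus a set interval [a, b]."""
    quantifier: str   # EXISTS or FORALL
    a: float          # lower bound of the set-theoretical interval X'
    b: float          # upper bound of X'

    def dual(self):
        # Swapping the quantifier yields the dual modal interval.
        q = FORALL if self.quantifier == EXISTS else EXISTS
        return ModalInterval(q, self.a, self.b)

    def is_proper(self):
        # Every classical interval embeds as an existentially quantified
        # modal interval, which is why modal intervals form a true superset
        # of the classical set-theoretical intervals.
        return self.quantifier == EXISTS

x = ModalInterval(EXISTS, 1.0, 2.0)   # the classical interval [1, 2], embedded
```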
Recent advances in modal interval hardware design, as described in Applicant's published application WO 2006/017996 A2 entitled “Modal Interval Processor”, and further pending application serial nos. PCT/US06/38578, entitled “Reliable and Efficient Computation of Modal Interval Arithmetic Operations,” and PCT/US06/38507, entitled “Computing Narrow Bounds on a Modal Interval Polynomial Function,” each of which is incorporated herein by reference, provide a reliable and high-performance foundation for interval arithmetic applications.
A specific example of such an application is the field of computer graphics. In Applicant's published application WO 2006/115716 A2, entitled “System and Method of Visible Surface Determination in Computer Graphics Using Interval Analysis,” incorporated herein by reference, a novel system and method of visible surface determination in computer graphics using interval arithmetic and interval analysis is provided. By abandoning traditional techniques based on point-sampling and other heuristic methods, an entirely new and robust approach is employed wherein rigorous error bounds on integrated digital scene information are computed by a recursive and deterministic branch-and-bound process of interval arithmetic subdivision. To render an image, interval arithmetic solvers capable of solving highly nonlinear systems of equations provide a robust mechanism for rendering geometry such as non-uniform rational B-splines (NURBS) and transcendental surfaces directly into anti-aliased pixels, without the need to tessellate the underlying surface into millions of tiny, pixel-sized micropolygons. As a consequence of this approach, wide intervals representing unknown parametric variables are successively contracted, narrowing the uncertainty of the unknown values so as to optimally “match” their contribution to the area and/or intensity of a pixel and/or sub-pixel area before being input to an interval shading function in furtherance of assigning a quality or character to a pixel.
As depicted on the cover of the book “Applied Interval Analysis,” Luc Jaulin et al., Springer Verlag, 2001, which is incorporated herein by reference, the established method of performing a branch-and-bound interval analysis is to split the “problem” or parameter domain into a regular paving, that is, a set of non-overlapping interval boxes. At each subdivision stage, an interval is bisected at its midpoint to produce two smaller intervals of equal width. As the present invention will demonstrate, this is not always the ideal approach. In many applications of interval analysis, splitting at the midpoint introduces a constant bounded error which produces undesirable results.
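The established midpoint-bisection paving described above can be sketched as follows. This is a generic branch-and-bound sketch under assumed conventions (a box is a list of (lo, hi) pairs; `f_enclosure` stands in for a user-supplied interval enclosure of the function whose zero set is being paved); it illustrates the prior-art method, not the present invention's alternative.

```python
def bisect(box):
    """Split a box at the midpoint of its widest side, yielding two equal halves."""
    i = max(range(len(box)), key=lambda k: box[k][1] - box[k][0])
    lo, hi = box[i]
    mid = 0.5 * (lo + hi)
    left, right = list(box), list(box)
    left[i], right[i] = (lo, mid), (mid, hi)
    return left, right

def pave(box, f_enclosure, tol):
    """Return a regular paving of boxes that may contain solutions of f = 0."""
    lo, hi = f_enclosure(box)
    if not (lo <= 0.0 <= hi):
        return []                         # enclosure excludes 0: discard the box
    if max(b - a for a, b in box) <= tol:
        return [box]                      # termination criterion: box is narrow enough
    left, right = bisect(box)             # midpoint bisection, as in the prior art
    return pave(left, f_enclosure, tol) + pave(right, f_enclosure, tol)

def circle(box):
    """Naive interval enclosure of f(x, y) = x^2 + y^2 - 1 over a 2-D box."""
    def sq(lo, hi):
        s = sorted((lo * lo, hi * hi))
        return (0.0 if lo <= 0.0 <= hi else s[0], s[1])
    (xl, xh), (yl, yh) = sq(*box[0]), sq(*box[1])
    return (xl + yl - 1.0, xh + yh - 1.0)

# Pave the unit circle x^2 + y^2 = 1 inside [-1,1] x [-1,1]:
boxes = pave([(-1.0, 1.0), (-1.0, 1.0)], circle, 0.25)
```

Each surviving box may contain part of the circle, and together they form the set of non-overlapping interval boxes that the book cover depicts.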
Another problem of current branch-and-bound interval analysis methods is the well-known “curse of dimensionality,” i.e., an exponential increase in computation time. This is a consequence of the “divide and conquer” nature of interval analysis in which interval arithmetic calculations are performed over interval domains which are recursively split into smaller and smaller sub-domains until termination criteria are reached, or proof of containment is ascertained. In the prior art, heuristic point-sampling methods such as Monte Carlo and stochastic undersampling are used when the number of dimensions is high and the problem to be solved is difficult. A classical example can be found in the paper “Spectrally Optimal Sampling for Distribution Ray Tracing,” Mitchell, Don, Computer Graphics 25.4, 1991, which is incorporated herein by reference. The result is a significant reduction in computation time. Similarly, undersampling appears to offer interval analysis a tantalizing solution to the “curse of dimensionality” problem, but a method for doing so in a robust manner seems unclear and not obvious. This raises the question: is it even possible to undersample a solution when robust interval analysis methods are used?
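The exponential growth described above is easy to quantify. The sketch below simply counts sub-boxes: halving the width of every axis of a d-dimensional box requires 2**d sub-boxes, so k such full subdivision passes produce (2**d)**k boxes, which is the source of the exponential increase in computation time.

```python
def boxes_after(levels, dims):
    """Sub-box count after `levels` passes that bisect every axis of a `dims`-D box."""
    # One full pass splits each of the `dims` axes in half: 2**dims sub-boxes.
    return (2 ** dims) ** levels

for d in (1, 2, 3, 8):
    print(d, boxes_after(4, d))
# In 1-D, 4 levels of subdivision give 16 boxes; at the same depth in 8-D,
# the count is 2**32 boxes -- the exponential growth that makes undersampling
# so tantalizing when the number of dimensions is high.
```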