1. Field of the Invention
The present invention relates to techniques for performing interval computations within computer systems. More specifically, the present invention relates to a method and an apparatus for initializing interval computations through subdomain sampling within computer systems.
2. Related Art
Rapid advances in computing technology make it possible to perform trillions of computational operations each second. This tremendous computational speed makes it practical to perform computationally intensive tasks as diverse as predicting the weather and optimizing the design of an aircraft engine. Such computational tasks are typically performed using machine-representable floating-point numbers to approximate values of real numbers. (For example, see the Institute of Electrical and Electronics Engineers (IEEE) standard 754 for binary floating-point numbers.)
In spite of their limitations, floating-point numbers are generally used to perform most numerical computations.
One limitation is that machine-representable floating-point numbers have a fixed-size word length, which limits their accuracy. Note that a floating-point number is typically encoded using a 32-, 64- or 128-bit binary number, which means that there are only 2^32, 2^64 or 2^128 possible symbols that can be used to specify a floating-point number. Hence, most real number values can only be approximated with a corresponding floating-point number. This introduces approximation errors that can be magnified through even a few computations, thereby adversely affecting the accuracy of a computation.
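For illustration, the following Python sketch demonstrates this limitation: the decimal value 0.1 has no exact binary floating-point representation, and the resulting approximation error accumulates over repeated additions. (This example is illustrative only and is not part of any claimed method.)

```python
from decimal import Decimal

# 0.1 cannot be represented exactly in binary floating point;
# Decimal(0.1) reveals the exact value actually stored.
print(Decimal(0.1))  # slightly larger than 0.1

# The small representation error accumulates across computations.
total = sum(0.1 for _ in range(10))
print(total == 1.0)  # False on IEEE 754 doubles
print(total)         # 0.9999999999999999
```

Interval arithmetic addresses exactly this gap: the floating-point result carries no indication of how far it has drifted from the true real-number value.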
A related limitation is that floating-point numbers contain no information about their accuracy. Most measured data values include some amount of error that arises from the measurement process itself. This error can often be quantified as an accuracy parameter, which can subsequently be used to determine the accuracy of a computation. However, floating-point numbers are not designed to keep track of accuracy information, whether from input data measurement errors or machine rounding errors. Hence, it is not possible to determine the accuracy of a computation by merely examining the floating-point number that results from the computation.
Interval computation represents a new computing paradigm, which has been developed to solve the above-described problems. Specifically, interval computation represents numbers as intervals specified by a first (left) endpoint and a second (right) endpoint. For example, the interval [a,b], where a&lt;b, is a closed, bounded subset of the real numbers, R, which includes a and b as well as all real numbers between a and b. Arithmetic operations on interval operands (interval arithmetic) are defined so that interval results always contain the entire set of possible values. The result is a mathematical system for rigorously bounding numerical errors from all sources, including measurement data errors, machine rounding errors and their interactions. (Note that the first endpoint normally contains the “infimum”, which is the largest number that is less than or equal to each of a given set of real numbers. Similarly, the second endpoint normally contains the “supremum”, which is the smallest number that is greater than or equal to each of the given set of real numbers. Also note that the infimum and the supremum can be represented by floating-point numbers.)
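The defining property above, that an interval result contains every possible real result, can be sketched in Python as follows. This is a minimal illustrative model only: a production interval library would additionally use directed (outward) rounding on each endpoint so that containment is guaranteed despite machine rounding, which is omitted here for brevity.

```python
class Interval:
    """A closed interval [lo, hi] with lo <= hi (illustrative sketch)."""

    def __init__(self, lo, hi):
        assert lo <= hi
        self.lo, self.hi = lo, hi  # infimum and supremum endpoints

    def __add__(self, other):
        # [a,b] + [c,d] = [a+c, b+d]
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # [a,b] * [c,d] spans the min and max of the endpoint products,
        # which handles sign changes correctly.
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

x = Interval(1.0, 2.0)
y = Interval(-3.0, 4.0)
print(x + y)  # [-2.0, 6.0]
print(x * y)  # [-6.0, 8.0]
```

Note that the product interval [-6.0, 8.0] contains x*y for every real x in [1,2] and y in [-3,4], which is the containment guarantee described above.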
Typically, interval methods solve function evaluation problems (e.g., optimization, root finding, etc.) using a “branch-and-prune” technique. This technique begins by dividing the domain of interest into a number of large subdomains. Next, a variety of interval techniques are used to eliminate subdomains which can be proven to contain no solutions. The remaining subdomains are then subdivided into smaller subdomains and the process repeats recursively until user-specified tolerances on interval widths are met on all remaining subdomain boxes (which contain solutions). This technique is guaranteed to produce bounds which contain the solutions to the problems.
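The branch-and-prune loop described above can be sketched as follows for univariate root finding. The names here (`branch_and_prune`, `F_interval`, `f_ext`) are hypothetical and chosen for illustration; `F_interval` stands in for an interval extension of the function being solved, returning lower and upper bounds of the function over a subdomain.

```python
def branch_and_prune(F_interval, lo, hi, tol=1e-6):
    """Return subintervals of [lo, hi] that may contain roots of F."""
    solutions = []
    work = [(lo, hi)]  # stack of subdomain boxes to examine
    while work:
        a, b = work.pop()
        f_lo, f_hi = F_interval(a, b)
        if f_lo > 0 or f_hi < 0:
            continue  # prune: F cannot be zero anywhere on [a, b]
        if b - a <= tol:
            solutions.append((a, b))  # width tolerance met; keep box
        else:
            m = 0.5 * (a + b)  # branch: bisect and examine both halves
            work.append((a, m))
            work.append((m, b))
    return solutions

# Example: roots of f(x) = x^2 - 2 on [0, 4], using the natural
# interval extension (valid here because x^2 is monotone for x >= 0).
def f_ext(a, b):
    return a * a - 2.0, b * b - 2.0

boxes = branch_and_prune(f_ext, 0.0, 4.0)
# every surviving box brackets sqrt(2) ≈ 1.41421356
```

This sketch also exhibits the drawback discussed below: on wide boxes such as [0, 4] the interval bounds are too loose to prune anything, so early iterations subdivide without eliminating any subdomains.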
Unfortunately, the branch-and-prune technique has some drawbacks. In particular, the efficiency of the interval computation depends on the ability of these interval techniques to delete or contract subdomain boxes. However, a particular interval technique typically becomes effective only when the domain of interest has been sufficiently subdivided into smaller boxes, wherein the interval technique starts to generate “tight bounds” which contain solutions. In other words, at larger subdomain scales during the branch-and-prune process, the interval technique may not generate any useful information about the subdomains being evaluated that would eliminate or contract those subdomains. Consequently, the associated computations simply consume computational resources without making progress.
Furthermore, performing branch-and-prune function evaluations on those large subdomain boxes which do not contain solutions can be extremely time-consuming, because the interval techniques perform exhaustive searches for solutions until an entire subdomain space has been examined.
Note that the above-described problems are inherent to interval computations and can significantly degrade efficiency during the function evaluation process.
Hence, what is needed is a method and an apparatus that facilitate more efficient interval function evaluations without the above-described problems.