1. Field of the Invention
The present invention relates generally to computer systems for solving mathematical problems and, more specifically, to a system and method for testing whether a result is correctly rounded.
2. Description of the Related Art
A typical modern computer system includes one or more processor cores. Each core has a floating-point unit and/or may execute software to perform arithmetic operations, including floating-point addition, subtraction, multiplication, division, and square root. Processor cores represent numbers using a system of digits, typically binary or hexadecimal (base-16). In these systems, some multiplicative inverse or division calculations, for example 3−1 or ⅓, cannot be expressed exactly, as the exact result would contain an infinite number of bits or a greater number of bits than can be stored in a floating-point variable, e.g., ⅓=0.333333 . . . in decimal form or 0.010101 . . . in binary form. These values therefore must be rounded to a certain limit of significance, such that the values can be expressed in a finite number of bits. The finite number of bits used by a processor core to represent a floating-point variable is referred to as the processor core's precision. A processor core may support more than one floating-point format. The accuracy of a floating-point value refers to how close the representation of a numeric value is to an infinitely precise representation. Persons skilled in the art are aware that, for modern processor cores, the standard for rounding multiplicative inverse, division, and square root calculations is set forth in IEEE Standard 754 for Binary Floating-Point Arithmetic, developed by the Institute of Electrical and Electronics Engineers.
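The rounding described above can be observed directly in software. The following sketch (Python is used here purely for illustration; nothing in this description is limited to a particular language) compares the stored double-precision value of ⅓ against the exact rational value and confirms that the stored value is rounded, not exact:

```python
from fractions import Fraction

# 1/3 has an infinite repeating binary expansion (0.010101...),
# so the stored double is the nearest representable value, not 1/3 itself.
stored = 1.0 / 3.0
exact = Fraction(1, 3)

# Fraction(stored) recovers the exact rational value the hardware stored.
error = Fraction(stored) - exact

print(stored)       # prints the rounded decimal rendering of the stored bits
print(error != 0)   # True: the stored value differs from exact 1/3
```

The nonzero (but tiny) error term is precisely the rounding that IEEE Standard 754 governs.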
As is well-known, a floating-point number is represented as a sign (string of digits representing a plus or minus), a mantissa (string of digits representing the number without an exponent), and an exponent. The value of the floating-point number is determined by taking the product of the sign, the mantissa, and a base raised to the power of the value represented in the exponent field. The total space allocated for representing a floating-point number can be, for example, 32 bits for single precision or 64 bits for double precision.
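As an illustrative sketch of this three-field layout, the bit fields of an IEEE 754 single-precision value can be extracted as follows. The field widths (1 sign bit, 8 exponent bits, 23 mantissa-fraction bits) are those defined by IEEE Standard 754 for the 32-bit format; note that for normalized numbers the 23-bit field stores only the fraction portion of the mantissa, with a leading 1 implied:

```python
import struct

def decode_single(x):
    """Split a value, stored as IEEE 754 single precision, into its
    sign, biased exponent, and mantissa-fraction bit fields."""
    # Reinterpret the 32-bit single-precision encoding as an integer.
    (bits,) = struct.unpack(">I", struct.pack(">f", x))
    sign = bits >> 31                # 1 bit: 0 for plus, 1 for minus
    exponent = (bits >> 23) & 0xFF   # 8 bits: exponent, biased by 127
    fraction = bits & 0x7FFFFF       # 23 bits: mantissa without implied 1
    return sign, exponent, fraction

# 1.0 = +1 x 1.0 x 2^(127-127): sign 0, biased exponent 127, fraction 0
print(decode_single(1.0))   # (0, 127, 0)
# -2.0 = -1 x 1.0 x 2^(128-127): sign 1, biased exponent 128, fraction 0
print(decode_single(-2.0))  # (1, 128, 0)
```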
One set of techniques known in the art for estimating the result of multiplicative inverse, division, and square root calculations involves convergence. Examples of convergence techniques include the “long division” technique, where one bit or one digit is calculated at a time, and Newton-Raphson techniques, where a set of digits is calculated at a time. In other words, the technique requires that the processor estimate the next digits in the result only after the current digit has been determined. Convergence techniques are conventionally implemented for a worst-case scenario, where the eventual convergence always guarantees that the “correct” result is obtained. In such an approach, the correct result, according to IEEE Standard 754, is oftentimes obtained as an “intermediate result” prior to the completion of the calculation. For example, one convergence technique calculates a result, y, of a multiplicative inverse problem, 1/b, by calculating three intermediate results y1, y2, and y3, before arriving at the final result y4=y, which is guaranteed to be correct according to IEEE Standard 754. Each successive intermediate result is closer to the correct value than the previous result, i.e., y2 is closer to the correct result than y1. However, oftentimes, the correct result may be reached in fewer than four steps, i.e., y3=y4=y. In this case, y4 would still be calculated from y3, as there is no guarantee that y3 is the correctly rounded result. As is well-known, exceptions are raised when a result of a calculation is positive or negative infinity or not a number (NaN). In conventional processors, a determination as to whether an exception would be raised is made early in the calculation process so as to avoid making a calculation the result of which cannot be expressed as a floating-point number. If an exception would be raised, then an exception handler is invoked.
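The convergence behavior described above can be sketched with the classical Newton-Raphson reciprocal iteration, y(n+1) = y(n) × (2 − b × y(n)); the particular iterate count and initial guess below are assumptions chosen for illustration, not part of any conventional processor's fixed implementation:

```python
def reciprocal_newton(b, y0, steps=4):
    """Newton-Raphson iteration for 1/b: y_next = y * (2 - b * y).
    Returns the initial guess plus each successive intermediate result.
    Each step roughly doubles the number of correct bits."""
    y = y0
    history = [y]
    for _ in range(steps):
        y = y * (2.0 - b * y)
        history.append(y)
    return history

# An initial guess in the open interval (0, 2/b) guarantees convergence.
ys = reciprocal_newton(3.0, 0.3)
# ys[1], ys[2], ys[3] are the intermediate results y1, y2, y3; ys[4] is y4.
# An intermediate result may already equal the correctly rounded 1/3,
# yet a fixed worst-case schedule computes the later step regardless.
for y in ys:
    print(y)
```

This illustrates the inefficiency at issue: the scheme always runs the full iteration count, even when an earlier iterate is already correctly rounded.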
One drawback of such an approach is that processor cores implementing the convergence method for multiplicative inverse, division, and square root calculations often waste processing time and resources on unnecessary calculations. For example, processing time and resources are wasted if an intermediate result, which is being refined by the convergence method, is already equal to the correct result according to IEEE Standard 754. Another drawback of such an approach is that processing time and resources are wasted determining whether a calculation raises an exception, even though such exceptions occur relatively infrequently.
As the foregoing illustrates, what is needed in the art is a more efficient technique for determining whether the result of such computations conforms to conventional IEEE floating-point standards.