A floating point number is defined by:

Number = (-1)^Sign x Mantissa x 2^Exponent
The mantissa is expressed as an n-bit positive number, illustrated herein by a 24-bit positive number with 0, 1, or 2 bits to the left of the "binary point," or with a leading zero to the left of the "binary point" and 0, 1, or 2 leading zeroes to the right of the "binary point." One bit expresses the sign.
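As a concrete illustration of the Sign/Exponent/Mantissa form above, the following sketch decomposes an IEEE 754 single-precision value, whose 23 stored fraction bits plus the implicit leading 1 yield the 24-bit mantissa mentioned in the text. The function name `decompose` is chosen here for illustration and does not appear in the original.

```python
import struct

def decompose(x):
    """Split a 32-bit float into sign, unbiased exponent, and 24-bit mantissa.

    Illustrative sketch: assumes the IEEE 754 single format (1 sign bit,
    8 exponent bits biased by 127, 23 stored fraction bits).
    """
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    sign = bits >> 31
    biased_exp = (bits >> 23) & 0xFF        # 8-bit exponent field, bias 127
    fraction = bits & 0x7FFFFF              # 23 stored fraction bits
    mantissa = fraction | (1 << 23)         # restore the implicit leading 1
    return sign, biased_exp - 127, mantissa

# -6.5 = (-1)^1 x 1.101b x 2^2
print(decompose(-6.5))  # (1, 2, 13631488)
```

Reconstructing the value as `(-1)**sign * mantissa * 2.0**(exponent - 23)` recovers the original number, matching the defining equation.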
In the arithmetic operations of addition and subtraction, the "binary points" of the binary numbers must be aligned before the operation; that is, the exponents must be made equal. This is accomplished by shifting the mantissa of the number with the smaller exponent (Exponent_i - Exponent_j) places to the right, where Exponent_i is the larger exponent. The mantissas are then added together; in the case of subtraction, the 2's complement of one number is added to the other. The resulting sum is then normalized by shifting the mantissa to the left or right until the most significant bit is 1 and adjusting the exponent accordingly. This is called "normalization."
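The alignment, addition, and normalization steps above can be sketched as follows for positive operands. This is a simplified model, not the circuit the patent describes: each value is a pair (exponent, mantissa) with the mantissa normalized to `width` bits, and the function name `align_and_add` is hypothetical.

```python
def align_and_add(exp_a, man_a, exp_b, man_b, width=24):
    """Add two positive floating point values given as (exponent, mantissa).

    Sketch of the text's three steps: align the binary points by shifting
    the smaller-exponent mantissa right, add the mantissas, then normalize
    so the most significant bit of the sum is bit (width - 1).
    """
    # Align: ensure operand a has the larger exponent, then shift b's
    # mantissa right by the exponent difference.
    if exp_a < exp_b:
        exp_a, man_a, exp_b, man_b = exp_b, man_b, exp_a, man_a
    man_b >>= exp_a - exp_b
    exp, total = exp_a, man_a + man_b
    # Normalize: shift until the MSB is bit (width - 1), adjusting the
    # exponent to compensate for each shift.
    while total >= 1 << width:
        total >>= 1
        exp += 1
    while total and total < 1 << (width - 1):
        total <<= 1
        exp -= 1
    return exp, total
```

For example, with an 8-bit mantissa, 1.5 is (0, 0b11000000) and 0.5 is (-1, 0b10000000); their sum normalizes to (1, 0b10000000), i.e. 2.0.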
The IEEE Standard for Binary Floating Point Arithmetic (IEEE 754-1985) requires, as a minimum, normalization of the infinitely precise result followed by rounding. When rounding results in a "carry out," normalization must be performed again. This serial "normalize-round-normalize" sequence is a frequent bottleneck in floating point digital signal processing engines.
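The carry-out case can be demonstrated with a small sketch: rounding an all-ones mantissa up overflows it by one bit, forcing the second normalization shift. This is an illustration of the effect only; it uses round-half-up for simplicity rather than the round-to-nearest-even mode IEEE 754 specifies as default, and the function names are hypothetical.

```python
def round_and_renormalize(exp, man, width, drop):
    """Round off the low `drop` bits of a normalized mantissa, then
    renormalize if the rounding carried out past `width` bits.

    Simplified sketch: rounds half up rather than to nearest even.
    """
    half = 1 << (drop - 1)
    rounded = (man + half) >> drop          # round to nearest, ties up
    # Carry out: rounding 0b111...1 up overflows to width + 1 bits,
    # so a second normalization (shift right, bump exponent) is needed.
    if rounded >> width:
        rounded >>= 1
        exp += 1
    return exp, rounded

# 0b11111 rounded to 4 bits carries out to 0b10000 (5 bits),
# which must be renormalized to 0b1000 with the exponent incremented.
print(round_and_renormalize(0, 0b11111, 4, 1))  # (1, 8)
```

The second normalization in this sketch is exactly the extra serial step the "normalize-round-normalize" bottleneck refers to.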