This invention relates to the arithmetic unit of a digital computer and more particularly to apparatus for providing a significant reduction in floating point addition execution time by reducing the time required for performing shifting operations.
The arithmetic operations performed in digital computers may use fixed-point arithmetic, commonly employed for business data or statistical calculations, or floating point arithmetic, used mainly for scientific and engineering computations. In a digital computer, the radix point is implied and does not occupy a physical location in a storage device. With fixed-point arithmetic, the radix point is located either immediately to the right of the least significant digit place or immediately to the right of the sign place, before the first digit place. With floating-point arithmetic, a number is represented by a sign, a mantissa and an exponent, where the mantissa may assume a fixed point notation and the exponent may be either a positive or negative integer.
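The sign, mantissa, exponent decomposition described above can be sketched in software, purely for illustration; the function name and the use of a radix-2 mantissa in [0.5, 1) are assumptions of this sketch, not part of the specification, and actual hardware stores these components as fixed-width bit fields:

```python
import math

def to_float_parts(x):
    """Decompose x into (sign, mantissa, exponent) such that
    x == sign * mantissa * 2**exponent, with mantissa in [0.5, 1).
    Illustrative only; hardware uses fixed-width bit fields."""
    if x == 0:
        return 1, 0.0, 0
    sign = 1 if x > 0 else -1
    # math.frexp returns the radix-2 mantissa and integer exponent
    mantissa, exponent = math.frexp(abs(x))
    return sign, mantissa, exponent

# Example: -10.0 == -1 * 0.625 * 2**4
print(to_float_parts(-10.0))
```

Note that the exponent here is a signed integer, matching the text's observation that it may be either positive or negative.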
The exponents of two floating point numbers must be compared and equalized before the numbers can be added or subtracted. Often a separate arithmetic unit is provided for handling exponent calculations concurrently with mantissa calculations in order to improve the speed of operation. A floating point number is normalized if the most significant digit place of its mantissa contains a nonzero digit. Normalizing requires shifting the mantissa to the left, which discards redundant leading zeros in the more significant digit places, while the exponent is decreased accordingly until a nonzero digit appears in the most significant place. In normalized arithmetic, all floating point numbers must be prenormalized before they can be manipulated; therefore, after every intermediate computation step, renormalization must be performed to ensure the integrity of the normalized form. Floating point number representations are described in many texts, one of which is "Computer Arithmetic: Principles, Architecture and Design", Kai Hwang, John Wiley and Sons, 1979.
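The alignment and normalization steps just described can be modeled with integer mantissas; this is a minimal sketch under assumed conventions (positive operands, an assumed 8-bit mantissa width, and a normalized mantissa defined as one with its most significant bit set), not the apparatus of the invention:

```python
def align_and_add(m1, e1, m2, e2, mant_bits=8):
    """Add two positive floating point numbers given as
    (integer mantissa, exponent) pairs. Sketch only: the 8-bit
    mantissa width and function name are assumptions."""
    # Exponent equalization: operate on the operand with the larger exponent
    if e1 < e2:
        m1, e1, m2, e2 = m2, e2, m1, e1
    # Alignment: right-shift the smaller operand's mantissa
    # (bits shifted off would feed a rounding algorithm)
    m2 >>= (e1 - e2)
    m, e = m1 + m2, e1
    # Carry-out of the sum: one right shift, exponent increased
    if m >= 1 << mant_bits:
        m >>= 1
        e += 1
    # Normalization: left-shift until the most significant bit is nonzero,
    # decreasing the exponent accordingly
    while m and m < 1 << (mant_bits - 1):
        m <<= 1
        e -= 1
    return m, e

# 0b11000000 * 2**0  +  0b10000000 * 2**-3  ->  0b11010000 * 2**0
print(align_and_add(0b11000000, 0, 0b10000000, -3))
```

The model also exhibits the post-addition renormalization the text mentions: a carry out of the mantissa forces one right shift with a corresponding exponent increase.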
Floating point addition requires shifting both for operand alignment and for result normalization. The shift count is bounded only by the mantissa length and the particular rounding algorithm utilized; in a binary number system, the maximum number of shifts performed is frequently equal to the number of mantissa bits plus one.
Computers whose shifters are limited to a shift count of a few bits at a time expend large amounts of time performing long shifting operations repetitively. Shifters with a greater shifting range can accomplish an alignment or normalization shift in a single operation, but they generally exhibit substantially greater propagation delay than a single pass through a smaller shifter. In the prior art, floating point addition has therefore been implemented either by one pass for each operation through a multiple digit shifter, or by as many passes as required through a short shifter to accomplish a plurality of shifts.
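The trade-off between the two prior-art approaches can be illustrated by counting passes; the function names and the assumed 4-bit-per-pass limit of the short shifter are hypothetical, chosen only to make the comparison concrete:

```python
def shift_left_multipass(value, count, max_per_pass=4):
    """Model a short shifter: a shift of `count` places takes
    ceil(count / max_per_pass) passes through the shifter."""
    passes = 0
    while count > 0:
        step = min(count, max_per_pass)  # one pass shifts at most max_per_pass bits
        value <<= step
        count -= step
        passes += 1
    return value, passes

def shift_left_barrel(value, count):
    """Model a full-range (multiple digit) shifter: any shift
    count completes in a single, slower pass."""
    return value << count, 1

# A 10-place shift: 3 passes through a 4-bit shifter vs. 1 pass through a full-range one
print(shift_left_multipass(1, 10))  # (1024, 3)
print(shift_left_barrel(1, 10))     # (1024, 1)
```

Both models produce the same shifted value; they differ only in the pass count, which stands in for the repetitive-operation time versus propagation-delay trade-off the text describes.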