Floating point numbers comprise a mantissa part and an exponent part and are usually 32-bit (`single precision`) or 64-bit (`double precision`) in size. The format of a floating point number depends on the standard implemented. For example, the widely implemented IEEE 754 standard uses the form 1.fff × 2^e, where 1.fff is the normalised mantissa and e is the exponent.
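As an illustration of the single-precision layout described above, the following minimal sketch unpacks a 32-bit float into its sign, unbiased exponent, and normalised 1.fff mantissa. The helper name `decode_single` is illustrative, not part of any standard library API:

```python
import struct

def decode_single(x: float) -> tuple[int, int, float]:
    """Decode an IEEE 754 single-precision value into its sign bit,
    unbiased exponent, and normalised mantissa (1.fff form)."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    sign = bits >> 31
    exponent = ((bits >> 23) & 0xFF) - 127       # remove the exponent bias
    mantissa = 1.0 + (bits & 0x7FFFFF) / 2**23   # restore the implicit leading 1
    return sign, exponent, mantissa

# 6.5 = 1.101 (binary) x 2^2
print(decode_single(6.5))  # (0, 2, 1.625)
```

Note that this sketch ignores the special encodings (zero, subnormals, infinities, NaN), which do not use the implicit leading 1.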
A floating point addition operation (which may be an effective subtraction, depending on the signs of the operands) for two floating point numbers typically comprises the following steps:
1). Calculate the difference between the two exponents.
2). Shift the mantissa of the smaller operand right by the absolute value of the exponent difference.
3). Add the two mantissas.
4). Normalise the addition result--
a) Find the leading `one` (`1`) bit of the mantissa addition result (the result of step 3).
b) Shift the mantissa addition result to the left to discard all the leading zeros until the leading `1` bit becomes the Most Significant Bit (MSB) of the result (leading zeros arise chiefly in the case of a subtract operation).
c) Encode the number of left shifts required as a binary value for updating the exponent.
d) Subtract that binary value from the exponent value so that the exponent matches the normalised mantissa.
5). Result rounding--If required by the rounding mode, add 1 to the result.
6). Calculate the exception flags according to the final result.
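Steps 1 to 4 above can be sketched as follows. This is a minimal illustration only, assuming positive operands (signs, rounding, and exception flags are omitted), with each mantissa passed as an integer that already includes the implicit leading 1; the function name `fp_add` and the `frac_bits` parameter are illustrative:

```python
def fp_add(e1: int, m1: int, e2: int, m2: int, frac_bits: int = 23):
    """Sketch of aligned mantissa addition and normalisation.
    Each operand represents the value m * 2**(e - frac_bits)."""
    # Step 1: compute the exponent difference (swap so operand 1 is larger).
    if e1 < e2:
        e1, m1, e2, m2 = e2, m2, e1, m1
    diff = e1 - e2
    # Step 2: shift the smaller operand's mantissa right by the difference.
    m2 >>= diff
    # Step 3: add the two mantissas.
    m = m1 + m2
    e = e1
    # Step 4: normalise -- locate the leading 1 and shift accordingly.
    top = m.bit_length() - 1
    shift = top - frac_bits
    if shift > 0:
        m >>= shift   # carry-out case: shift right, raise the exponent
    else:
        m <<= -shift  # leading-zero case (effective subtraction): shift left
    e += shift
    return e, m

# 1.5 + 0.75: (e=0, m=1.5*2^23) + (e=-1, m=1.5*2^23) -> 2.25 = 1.125 * 2^1
print(fp_add(0, 12582912, -1, 12582912))  # (1, 9437184)
```

With only positive addends the left-shift branch is rarely exercised; it becomes essential once effective subtraction is supported, since cancellation can leave many leading zeros.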
When implementing such a sequence in an adder, it is important that the delay in the critical path between the input and output of the adder is kept to a minimum. Generally, the `find leading one` step is performed serially, one bit position at a time. This serial execution, combined with the subsequent shifting of the result, takes a significant amount of time in the critical path, which may be unacceptable in some applications.
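For contrast with a serial bit-by-bit scan, a leading-zero count can be organised as a binary search over halves of the word, needing only log2(width) stages (five for 32 bits) instead of up to width serial tests. The following sketch models that structure in software; the name `clz32` is illustrative:

```python
def clz32(x: int) -> int:
    """Count leading zeros of a 32-bit value by binary search:
    each stage tests whether the top half of the remaining field
    is all zero, halving the search span every time."""
    if x == 0:
        return 32
    n = 0
    for s in (16, 8, 4, 2, 1):
        if x >> (32 - s) == 0:        # top s bits all zero?
            n += s                    # record s more leading zeros
            x = (x << s) & 0xFFFFFFFF # discard them and continue
    return n

print(clz32(0x00C00000))  # 8
```

In hardware terms, each loop iteration corresponds to one stage of a parallel leading-zero counter, which is why the normalisation delay can be reduced from linear to logarithmic in the mantissa width.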
There is therefore a need to provide an improved mantissa addition system which performs fast normalisation of the mantissa adder's result.