In some categories of digital calculation, such as signal analysis for speech recognition, it is useful to be able to deal with numbers having a wide dynamic range and yet still perform calculations at high speed. Simple integer arithmetic is fast but the dynamic range is limited unless a very large word can be processed.
Floating point arithmetic has evolved in the prior art in order to more efficiently handle a wide dynamic number range with a word length of manageable size. In a floating point system the word is divided into two parts. The first or exponent part comprises perhaps five bits and represents the power to which one raises two in order to obtain the approximate number. The dynamic range of numbers that can be represented thus extends from zero all the way through two raised to the thirty-second power, a very wide range indeed. Two raised to this exponent is then multiplied by a second or fractional part comprising perhaps eleven bits in order to fully define the number. The fractional part, or fraction, is normalized so that it always lies within a limited range of values, the highest value being twice the lowest value, in keeping with the doubling of the number upon each increment of the exponent part. Usually a range of 0.5 to 1 is used. During calculations, if the fraction overflows or underflows the desired range, the exponent is simply incremented or decremented, as needed, and the fraction is shifted in the direction required to keep it normalized within the limited range. Hence, a sixteen bit word will suffice to represent a dynamic range of numbers much greater than would otherwise be possible.
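The normalization just described can be sketched as follows. This is only an illustration, not the circuitry of the invention; the five-bit exponent, eleven-bit fraction, and the exponent bias of 15 are assumptions chosen for the sixteen bit word discussed above.

```python
FRAC_BITS = 11   # fraction held as an 11-bit integer (assumed width)
BIAS = 15        # assumed exponent bias so small magnitudes are representable

def encode(x):
    """Normalize x into (exponent, fraction) with 0.5 <= fraction < 1."""
    if x == 0:
        return 0, 0
    e, f = 0, x
    while f >= 1.0:      # fraction overflows the range: shift right
        f /= 2.0         # and increment the exponent
        e += 1
    while f < 0.5:       # fraction underflows the range: shift left
        f *= 2.0         # and decrement the exponent
        e -= 1
    # quantize the normalized fraction to 11 bits
    return e + BIAS, round(f * (1 << FRAC_BITS))

def decode(e, frac):
    """Recover the number: fraction times two raised to the exponent."""
    if frac == 0:
        return 0.0
    return (frac / (1 << FRAC_BITS)) * 2.0 ** (e - BIAS)
```

For example, `encode(6.0)` yields a fraction of 0.75 (stored as 1536, i.e. 0.75 times 2048) and an exponent of 3 (stored as 18 with the assumed bias), since 6 equals 0.75 times two cubed.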
The drawback of floating point arithmetic is that it may be too slow for applications such as digital speech analysis and recognition, which require both speed and wide dynamic range. A floating point multiplication, for example, requires that the exponents be added together to establish the exponent of the product. The fractions are then multiplied together and shifted as needed to normalize the result, and the product's exponent is adjusted in accordance with the number of shifts. All this is very time consuming, especially considering how long it takes simply to multiply the two eleven bit fractions. My invention achieves great speed increases by converting the floating point number into a logarithm, so that a multiplication or division of numbers may be achieved by the simple addition or subtraction of their logarithms.
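The contrast between the two approaches can be sketched as below. This is a behavioral illustration under stated assumptions (unbiased exponents, fractions held in the range 0.5 to 1, and base-two logarithms), not the hardware of the invention.

```python
import math

def float_multiply(e1, f1, e2, f2):
    """Conventional floating point multiply: add the exponents, multiply
    the fractions (each in [0.5, 1)), then renormalize the product."""
    e = e1 + e2
    f = f1 * f2          # the slow step: a full fractional multiplication
    if f < 0.5:          # product of two such fractions lies in [0.25, 1),
        f *= 2.0         # so at most one left shift restores the range,
        e -= 1           # with a matching exponent adjustment
    return e, f

def log_multiply(x, y):
    """Multiplication by addition of logarithms:
    log2(x * y) = log2(x) + log2(y)."""
    return 2.0 ** (math.log2(x) + math.log2(y))
```

With `log_multiply`, the costly fractional multiplication and renormalization are replaced by a single addition in the logarithmic domain, which is the source of the speed increase claimed above.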