Multiplication is a fundamental arithmetic operation that one can do with pen and paper or with a computer. How to do it fast and with high accuracy has been a subject of intense research in computer science and engineering.
Multiplication involves two operands, the multiplicand and the multiplier. Traditionally one performs multiplication in three steps: first, each digit of the multiplier is multiplied by the multiplicand in turn to generate partial products. Next, the partial products are aligned by properly "shifting" them according to the position of the corresponding digit in the multiplier. Finally, one "adds" the shifted partial products to arrive at the final product.
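The three steps above can be sketched in Python for non-negative binary integers; the function name and the decision to collect the partial products in a list are illustrative choices, not part of any particular machine design.

```python
def shift_and_add_multiply(multiplicand: int, multiplier: int) -> int:
    """Classic three-step multiplication: generate one partial product
    per set multiplier bit, shift each into position, then add them."""
    partial_products = []
    # Walk the multiplier's bits from least significant to most significant.
    for position, bit in enumerate(bin(multiplier)[2:][::-1]):
        if bit == "1":
            # Step 1 and 2: the partial product for this bit is the
            # multiplicand shifted left by the bit's position.
            partial_products.append(multiplicand << position)
    # Step 3: sum the aligned partial products.
    return sum(partial_products)

print(shift_and_add_multiply(13, 11))  # 143
```

Note that an n-bit multiplier can produce up to n partial products with this scheme, which is the count the Booth family of algorithms sets out to reduce.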
Pen and paper is viable when the operands are simple, but a computer or other electronic computational device becomes the only practical option when they are not, especially when calculation speed and accuracy are important.
Even though the "add and shift" algorithm is straightforward, its implementation in a traditional electronic computer may involve a fair amount of hardware and still take lengthy machine time to execute the necessary steps when the operands are non-trivial, such as irrational numbers, and when high accuracy is required.
Computer scientists and engineers have endeavored to speed up the operation. For example, Andrew Donald Booth published an important work in 1951 describing a multiplication algorithm suitable for machine implementation, and it has been followed and expanded ever since. A brief account of the version of Booth's algorithm commonly known as Booth 2 is presented below for illustrative purposes.
First, the multiplier is partitioned and decoded into overlapping groups of 3-bit binary numbers, which may be stored in a memory unit. When the multiplicand arrives at the computing unit, it is multiplied by each of the 3-bit multiplier groups in succession, and the resulting partial products are stored, for example, also in a memory unit. All partial products are then shifted and aligned in a binary adder and summed there to arrive at the final product of the multiplication.
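A minimal sketch of this recoding, assuming a non-negative multiplier that fits in the stated bit width with a leading zero bit: each overlapping 3-bit group is decoded into a digit in {-2, -1, 0, +1, +2}, so each digit selects a partial product from {-2M, -M, 0, +M, +2M}. The function name and parameters are illustrative.

```python
def booth2_multiply(multiplicand: int, multiplier: int, n_bits: int = 8) -> int:
    """Radix-4 (Booth 2) multiplication: recode the multiplier into
    overlapping 3-bit groups, decode each group into a signed digit,
    then shift and add the selected partial products."""
    digits = []  # recoded digits, each in {-2, -1, 0, 1, 2}
    prev = 0  # the implicit 0 appended to the right of the LSB
    for i in range(0, n_bits, 2):
        b0 = (multiplier >> i) & 1
        b1 = (multiplier >> (i + 1)) & 1
        # The 3-bit group (b1, b0, prev) decodes to -2*b1 + b0 + prev.
        digits.append(-2 * b1 + b0 + prev)
        prev = b1  # groups overlap by one bit
    # Each recoded digit contributes one partial product, shifted by
    # two bit positions per digit (radix 4).
    return sum(d * (multiplicand << (2 * k)) for k, d in enumerate(digits))

print(booth2_multiply(13, 11))  # 143
```

Because the multiplier is consumed two bits per digit, an 8-bit multiplier yields only four partial products here instead of eight, which is the roughly halved count discussed next.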
Compared with the rudimentary digit-by-digit approach, the Booth 2 method reduces the number of partial products by almost half, or more precisely from n to (n+2)/2, where n is the number of bits in the multiplier. Other versions of Booth's algorithm, such as Booth 3, Booth 4, and Redundant Booth, are known in the art. These successively more sophisticated algorithms incrementally improve the speed of multiplication.
In 1991 Wolf-Ekkehard Blanz et al. of IBM proposed a method for multiplying an N-bit number X by an M-bit number C. With this method, the N-bit number is partitioned into K non-overlapping bit groups. Each bit group functions as an address for accessing a look-up table (LUT). The values from the LUT represent a sum of a constant and the product of the M-bit number C and the binary value of the bit group to which the LUT corresponds. The values are added together in an adder, after being bit-shifted in accordance with their relative positions, to arrive at a single result, which is the (N+M)-bit product of C and X. Many later works adopted the LUT approach with further improvements.
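A simplified sketch of the LUT idea, assuming non-negative operands and omitting the constant offset that the published method folds into the table entries: X is split into K non-overlapping groups of `group_bits` bits, each group's value indexes a table of precomputed multiples of C, and the table outputs are shifted and added. All names and parameter sizes are illustrative.

```python
def lut_multiply(x: int, c: int, n_bits: int = 16, group_bits: int = 4) -> int:
    """Multiply X by a constant C via table look-up: partition X into
    non-overlapping bit groups, look up C times each group's value,
    then shift each table output into position and add."""
    # Precompute C * v for every possible group value v.  In hardware
    # this table would be built once for a fixed coefficient C.
    table = [c * v for v in range(1 << group_bits)]
    mask = (1 << group_bits) - 1
    result = 0
    for shift in range(0, n_bits, group_bits):
        group = (x >> shift) & mask      # the group acts as the LUT address
        result += table[group] << shift  # align by the group's position
    return result

print(lut_multiply(1234, 57))  # 70338
```

The trade-off is the classic one for LUT methods: a wider `group_bits` means fewer additions but an exponentially larger table.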