In the past, a common method of producing the square root has been to use a restoring algorithm that produces one new result bit per iteration step. This is disadvantageous for floating-point representations with a large number of significant bits, because the number of cycles required is slightly greater than the number of bits. Even with more complex circuitry, restoring methods take a number of cycles proportional to the number of significant bits, plus a small overhead.
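For illustration, the bit-serial restoring scheme can be sketched in software. The routine below is a hypothetical model (not circuitry from this disclosure) of a restoring integer square root; it makes plain why the cycle count tracks the operand width: exactly one result bit is resolved per loop iteration.

```python
def restoring_isqrt(n: int, bits: int = 16) -> int:
    """Restoring square root of a (2*bits)-bit radicand: one result
    bit per iteration, mirroring one cycle per bit in hardware."""
    root = 0   # partial root built up one bit at a time
    rem = 0    # partial remainder
    for i in range(bits - 1, -1, -1):
        # Bring down the next two bits of the radicand.
        rem = (rem << 2) | ((n >> (2 * i)) & 0b11)
        trial = (root << 2) | 1          # trial subtrahend: 4*root + 1
        root <<= 1
        if rem >= trial:
            rem -= trial                 # subtraction succeeded
            root |= 1                    # accept a 1 bit
        # else: "restore" -- remainder is left unchanged, bit stays 0
    return root
```

With `bits = 4` the routine handles 8-bit radicands, e.g. `restoring_isqrt(144, 4)` yields 12; the final `rem` (not returned here) is the remainder `n - root*root`.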
Also, known quadratic iterations and known cubic iterations have been used as the basis for approximate square root calculations. As discussed herein, these suffer from sufficient inherent inaccuracy and roundoff error that they cannot give a satisfactory starting value without some means of achieving extra precision. In the past, this has meant that either additional complex circuitry was needed, or laborious calculations after a set of iterations were needed to find the rounded square root result for a given rounding mode. The circuitry needed to implement the restoring algorithm is dissimilar to that needed for the quadratic or cubic iterations, and the transition from one method to the other is slow.
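A quadratic iteration of the kind referred to above can be sketched as follows. This is a hypothetical software model, not the disclosed circuitry: it applies the division-free Newton-Raphson recurrence on the inverse square root, and it assumes the radicand has been range-reduced to [1, 4) and uses a fixed constant seed where hardware would use a small lookup table. The approximation error it leaves after a finite number of iterations is exactly the kind of residual inaccuracy that makes correct rounding difficult.

```python
def inv_sqrt_iter(a: float, y0: float = 0.5, iters: int = 7) -> float:
    """Quadratic Newton-Raphson iteration on the *inverse* square root:
        y <- y * (3 - a*y*y) / 2
    Division-free after the seed.  Assumes a has been range-reduced
    to [1, 4); y0 = 0.5 is a crude stand-in for a table-lookup seed."""
    y = y0
    for _ in range(iters):
        y = y * (3.0 - a * y * y) * 0.5   # error roughly squares each pass
    return y

def sqrt_from_inv(a: float) -> float:
    """sqrt(a) recovered from 1/sqrt(a) by one extra multiplication."""
    return a * inv_sqrt_iter(a)
```

Because each pass roughly squares the error, the iteration converges quickly once started, but the final multiply `a * y` and the iteration roundoff both perturb the low-order bits, which is why extra precision or correction steps are needed for a correctly rounded result.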
In the past, Newton-Raphson iterations that improve the square root itself, rather than the inverse square root, have been used. These methods have two problems. First, they involve a division operation, which on many data processors is significantly slower than multiplication or addition/subtraction. Second, not enough accuracy is preserved in carrying out the calculation to obtain the correct result without performing a number of result-alteration tests. There exists an algorithm which performs these calculations sufficiently accurately, but it requires significant additional hardware capable of multiplying two numbers to form a product and adding the product to a third number to form an exact sum which is then rounded. This significant additional hardware increases the cost of the product, which is disadvantageous.
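The division-based iteration criticized here can be sketched as follows; this is a hypothetical model of the classical Heron/Newton-Raphson recurrence on the square root itself, with a simple power-of-two seed standing in for whatever starting approximation an implementation would use. The point of the sketch is the `a / x` in the loop body: one division per iteration, the slow operation the passage objects to.

```python
import math

def heron_sqrt(a: float, iters: int = 6) -> float:
    """Newton-Raphson directly on the square root (Heron's method):
        x <- (x + a/x) / 2
    Each iteration needs a division, which is much slower than a
    multiply on many processors."""
    # Power-of-two seed within a factor of two of the true root
    # (hardware would derive this from the exponent field directly).
    m, e = math.frexp(a)              # a = m * 2**e with 0.5 <= m < 1
    x = math.ldexp(1.0, (e + 1) // 2)
    for _ in range(iters):
        x = 0.5 * (x + a / x)         # one division per step
    return x
```

Even when such an iteration converges to nearly full precision, the last-place bits can still be wrong for a given rounding mode, which is what forces the result-alteration tests mentioned above unless exact multiply-add hardware is available.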