1. Field of the Invention
This invention relates to data processing systems which include computing devices, and more particularly to an improved arithmetic system for providing programmable data reformatting in both fixed-point and floating-point computations.
2. Description of the Prior Art
Many data processing systems allow field selection within data words. The operands that result from the field selection are often a fixed fraction of the number of bits that comprise a full data word. For example, in one rather early binary data processing system having a 36-bit data word capacity, the system was arranged for functioning with the full data word, the lower order 18 bits, or the higher order 18 bits. More contemporary systems have been designed to function with operands comprising bit fields selected from the data word that, in addition to the one-half word fields, include one-third word selection and one-quarter word selection. In the prior art data processing systems, these fields of data bits selected from within a particular data word are addressed in a variety of ways for the bit field selection.
In addition to selecting the bit field for use, it is also known to be desirable to position the selected bit field at some predetermined position with respect to the binary point. For example, it is common to reposition a selected bit field so that the least significant bit in the selected bit field is aligned with the least significant bit position in a holding register, or as it is applied to an arithmetic operation. In written notation, it is common for the least significant digit to be positioned at the right, with higher ordered significant digits progressing to the left, and this system of notation carried over into the arrangement of computing devices. For binary data processing systems, this results in the binary point for fixed point arithmetic being considered to be to the right of the least significant digit. In fractional machines, or in floating-point calculation where the mantissa operand is treated as a fraction, it is common for the bit position order likewise to progress from least significant at the right to most significant at the left, but to provide that the binary point is to the left of the most significant fractional digit. As a part of the field selection process, then, it was recognized to be necessary to align the selected field in some predetermined manner with relationship to the binary point.
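The field selection and right-justification just described can be sketched in software terms. This is an illustrative model only, not the patented circuitry; the function name `select_field` and the half-, third-, and quarter-word widths (18, 12, and 9 bits of a 36-bit word) are assumptions drawn from the discussion above.

```python
WORD_BITS = 36  # full data word capacity assumed from the example above

def select_field(word, lsb, width):
    """Extract a `width`-bit field whose least significant bit sits at
    position `lsb` (0 = rightmost), and right-justify it so that its
    least significant bit aligns with bit position 0 of the result."""
    mask = (1 << width) - 1
    return (word >> lsb) & mask

# Selecting the higher-ordered and lower-ordered 18-bit half words:
word = 0o123456701234           # an arbitrary 36-bit value
upper_half = select_field(word, 18, 18)
lower_half = select_field(word, 0, 18)
```

One-third word (12-bit) and one-quarter word (9-bit) fields follow from the same call with `width=12` or `width=9` and the appropriate `lsb`.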
Various computational requirements have given rise to the option of sign filling up to a specified word size when fractional word fields are selected. This involves testing the appropriate sign bit position, and filling all unselected bit positions with the representation of the tested sign.
Many computational sequences have the need for presenting various operands in magnitude form. The magnitude is determined by testing the sign of the operand and, if it is found to be negative, complementing the operand. If the sign indicates the operand is positive, the operand already represents the magnitude and is utilized unaltered.
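The magnitude operation above is a sign test followed by a conditional complement. The sketch below assumes a ones' complement representation, which is consistent with the complement-on-negative procedure described (the source text does not name the representation explicitly):

```python
def magnitude(word, word_bits=36):
    """Return the magnitude of a ones'-complement operand: test the
    sign bit and, if negative, complement the operand; otherwise the
    operand is already the magnitude and passes through unaltered."""
    mask = (1 << word_bits) - 1
    if (word >> (word_bits - 1)) & 1:   # sign bit set: negative operand
        return word ^ mask              # ones' complement
    return word
```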
In systems that provide floating-point arithmetic computations, the system must provide the capability of unpacking the floating-point operands, that is, separating the characteristic portion from the mantissa portion. The characteristic is the exponent, and the mantissa is the fractional operand. In the various floating-point operations, at various times it is necessary to provide the capability of sign filling the mantissa, in conjunction with the alignment of the operands. It is also necessary to have the capability of presenting the operands in magnitude form for processing. Of course, when the computations are completed, it is necessary to again align and repack the characteristic and mantissa to form the floating-point resultant operand.
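The unpack and repack steps can be sketched as simple shift-and-mask operations. The layout assumed here (a sign bit, an 8-bit characteristic, and a 27-bit mantissa within a 36-bit word) is an illustrative assumption consistent with the 8-bit single precision characteristic discussed in connection with biasing below; the function names are hypothetical:

```python
CHAR_BITS = 8    # single precision characteristic width (assumed)
MANT_BITS = 27   # mantissa width filling out a 36-bit word (assumed)

def unpack(word):
    """Separate a packed floating-point operand into its sign,
    characteristic (exponent), and mantissa (fractional) portions."""
    mantissa = word & ((1 << MANT_BITS) - 1)
    characteristic = (word >> MANT_BITS) & ((1 << CHAR_BITS) - 1)
    sign = (word >> (MANT_BITS + CHAR_BITS)) & 1
    return sign, characteristic, mantissa

def pack(sign, characteristic, mantissa):
    """Repack the three portions into a floating-point resultant operand."""
    return ((sign << (MANT_BITS + CHAR_BITS))
            | (characteristic << MANT_BITS)
            | mantissa)
```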
The bit capacity of registers in the data processing system often relates to the number of bit positions in the memory registers. Operands in the floating-point format that are contained within the number of bit positions of a memory register capacity are often referred to as single precision floating-point operands. The limitation of the number of bit positions to a single register obviously places limitations on the capacity and precision of the arithmetic manipulations. In order to increase the capacity of the floating-point operands, systems have been developed that utilize two full operands to comprise a single floating-point operand. This effectively doubles the bit capacity, and is commonly referred to as double precision floating-point operation. In the double precision format, the characteristic oftentimes utilizes more bit positions than would be utilized for the single precision format. In computing systems that utilize both single precision and double precision formats, systems have been devised for converting floating-point operands between the two systems of representation. For those systems that utilize a different number of bit positions to represent the characteristics between single precision and double precision, it is necessary that the conversion between formats provide for adjustment of the characteristic representation. Further, it is necessary that there be adjustments of the mantissa when the conversion is from double precision to single precision format. It is common to require that the number of characteristic bits be reduced, and that certain bit positions in the mantissa be dropped. During the converse conversion, the number of bit positions of the characteristic is increased, and the additional number of bit positions of the mantissa is made available.
Both the characteristic and mantissa for floating-point arithmetic operations, whether they be single or double precision, may represent positive or negative values. The sign bit referenced represents the sign of the mantissa. To avoid using two separate sign designations, that is, one for the characteristic and one for the mantissa, within the same operand, a system of characteristic biasing has been developed to indicate the sign of the characteristic. For example, a single precision floating-point operand that provides for an 8-bit characteristic can express numerical values ranging from 0 through octal 377. By arbitrarily applying a bias of octal 200 to the actual characteristic, the zero point is effectively shifted and permits the numerical representation of minus octal 200 through octal 177. In this manner, the value of the characteristic indicates whether it is positive or negative, with those characteristic values having a numerical value of less than octal 200 representing negative characteristic values. A similar biasing system is applied to double precision characteristics, with the same purpose. For example, if an 11-bit characteristic is utilized, a bias of octal 2000 establishes a mid-point, with numerical values of less than octal 2000 being of a negative value and characteristic values of octal 2000 or more being of a zero or positive value. It can be seen, of course, that when converting between a single and double precision format, the biasing as well as the bit capacity must be adjusted.
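The biasing arithmetic above reduces to subtracting the bias to recover the signed exponent, and to a rebiasing step when converting between formats. A sketch using the octal 200 and octal 2000 biases from the examples (the function names are hypothetical):

```python
SP_BIAS = 0o200    # bias for the 8-bit single precision characteristic
DP_BIAS = 0o2000   # bias for the 11-bit double precision characteristic

def true_exponent(biased_char, bias):
    """Recover the signed exponent from a biased characteristic; values
    below the bias come out negative, values above it positive."""
    return biased_char - bias

def sp_char_to_dp(sp_char):
    """Rebias a single precision characteristic for the double precision
    format: remove the single precision bias, apply the double one."""
    return (sp_char - SP_BIAS) + DP_BIAS
```

With these biases, a single precision characteristic of octal 177 denotes an exponent of minus 1, and octal 201 denotes plus 1.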
In performing conversion from double precision floating-point to single precision floating-point, care must be taken to establish that the magnitude of the double precision characteristic can be expressed in the number of bit positions available in the single precision format. In the event that a double precision floating-point characteristic has a numerical value greater than the upper positive range of the single precision floating-point characteristic, an overflow fault will occur, and an indication of this failure should be provided. Similarly, a double precision floating-point characteristic on the lower extremity of the range that extends beyond the bit capacity of the single precision floating-point operand cannot be accurately converted, and will cause an underflow fault to occur. The characteristic biasing system, together with the conversion from double precision floating-point to single precision floating-point and the converse conversion from single precision to double precision, is described in detail in U.S. Pat. No. 3,389,379 to G. J. Erickson, et al.
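The overflow and underflow fault conditions above can be checked directly during the rebiasing step. A minimal sketch, assuming the 8-bit/octal 200 and 11-bit/octal 2000 characteristic formats from the biasing discussion; this is an illustration of the fault conditions, not the patented checking logic:

```python
def dp_char_to_sp(dp_char):
    """Convert an 11-bit double precision characteristic (bias octal
    2000) to an 8-bit single precision characteristic (bias octal 200),
    signalling the overflow and underflow faults when the exponent
    cannot be expressed in the single precision bit positions."""
    exponent = dp_char - 0o2000         # recover the signed exponent
    sp_char = exponent + 0o200          # apply the single precision bias
    if sp_char > 0o377:                 # beyond the upper positive range
        raise OverflowError("characteristic overflow fault")
    if sp_char < 0:                     # below the lower extremity
        raise ArithmeticError("characteristic underflow fault")
    return sp_char
```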
It is often necessary to provide constant operands and to select the required format. This can be considered as a part of operand formatting and has usually been accomplished with specialized circuitry.
Prior art systems do not uniformly provide for thorough checking of accuracy.
The data formatting functions must occur before the actual arithmetic operations can proceed. Accordingly, the data formatting is very time critical, and must be accomplished at the highest rates possible while maintaining accuracy, so that the rate of throughput of data will not be unnecessarily decreased.
The various data formatting operations described have been accomplished by separate circuits and control arrangements, with corresponding duplication of circuits in many instances and an attendant increase in cost. The separate approaches do not necessarily minimize the time of formatting and in many cases do not provide for optimum throughput. The separate circuits used in prior art formatting systems are subject to failure, and the large number of such circuits is a source of degradation of system reliability.