Processing units such as central processing units (CPUs) and graphics processing units (GPUs) are designed to perform arithmetic operations that conform to a specified numeric representation. One common numeric representation is a floating-point number, which typically includes a mantissa field, an exponent field, and a sign field. For example, a floating-point number format specified by the Institute of Electrical and Electronics Engineers (IEEE®) is thirty-two bits in size and includes twenty-three mantissa bits, eight exponent bits, and one sign bit. A sixteen-bit floating-point format includes ten mantissa bits, five exponent bits, and one sign bit. Floating-point arithmetic circuits configured to implement arithmetic operations on floating-point numbers must properly process one or more input floating-point numbers and generate an arithmetically correct floating-point result.
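The field layouts described above can be illustrated by decoding the bit patterns of the two formats. The sketch below is for illustration only and is not part of the described circuits; the helper names (`fp32_fields`, `fp16_fields`) are hypothetical.

```python
import struct

def fp32_fields(x):
    # Decode a value into IEEE single-precision fields:
    # 1 sign bit, 8 exponent bits (bias 127), 23 mantissa bits.
    (bits,) = struct.unpack(">I", struct.pack(">f", x))
    sign = bits >> 31
    exponent = (bits >> 23) & 0xFF
    mantissa = bits & 0x7FFFFF
    return sign, exponent, mantissa

def fp16_fields(x):
    # Decode a value into half-precision fields:
    # 1 sign bit, 5 exponent bits (bias 15), 10 mantissa bits.
    (bits,) = struct.unpack(">H", struct.pack(">e", x))
    sign = bits >> 15
    exponent = (bits >> 10) & 0x1F
    mantissa = bits & 0x3FF
    return sign, exponent, mantissa

print(fp32_fields(1.0))   # (0, 127, 0)
print(fp16_fields(1.0))   # (0, 15, 0)
```

Note that the exponent biases differ (127 for the thirty-two-bit format, 15 for the sixteen-bit format), which is one reason the two formats cannot simply be interchanged bit-for-bit.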
A floating-point multiply/add unit that is configured to perform thirty-two-bit floating-point operations may be used to perform sixteen-bit floating-point operations by padding the sixteen-bit exponent and mantissa with zeros. However, performing the sixteen-bit floating-point operations in this manner is not an efficient use of the logic circuits that are designed to perform thirty-two-bit floating-point operations. Thus, there is a need for improving the processing efficiency when thirty-two-bit floating-point arithmetic logic circuits are used to perform sixteen-bit floating-point arithmetic operations and/or addressing other issues associated with the prior art.
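The widening step mentioned above can be sketched in a few lines. This is a minimal illustration (assuming normal, non-zero operands; zeros, subnormals, infinities, and NaNs are omitted), and the helper name `half_to_single_bits` is hypothetical: the sign bit moves to bit 31, the five-bit exponent is re-biased from 15 to 127, and the ten mantissa bits are padded on the right with thirteen zeros.

```python
import struct

def half_to_single_bits(h):
    # Widen a 16-bit half-precision bit pattern into the equivalent
    # 32-bit single-precision bit pattern (normal numbers only).
    sign = (h >> 15) & 0x1
    exponent = (h >> 10) & 0x1F
    mantissa = h & 0x3FF
    assert 0 < exponent < 0x1F, "zeros/subnormals/infs/NaNs omitted in this sketch"
    return (sign << 31) | ((exponent - 15 + 127) << 23) | (mantissa << 13)

# 1.5 in half precision is 0x3E00; after widening it should decode to 1.5
bits = half_to_single_bits(0x3E00)
(value,) = struct.unpack(">f", struct.pack(">I", bits))
print(value)  # 1.5
```

The sketch makes the inefficiency concrete: after widening, the thirteen padded mantissa bits and the upper exponent range of the thirty-two-bit datapath carry no information from the sixteen-bit operand, yet the full-width logic still switches for every operation.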