Numeric types with large bit widths handled by computers include the single-precision floating-point format (32 bits) and the double-precision floating-point format (64 bits), both of which are prescribed in IEEE 754. A single-precision floating-point number is formed of a sign of one bit, an exponent part of eight bits, and a mantissa part of 23 bits. A double-precision floating-point number is formed of a sign of one bit, an exponent part of eleven bits, and a mantissa part of 52 bits.
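The field layouts described above can be illustrated with a short sketch. The following is an illustrative example, not part of the patent disclosure; the function names are chosen here for clarity. It reinterprets a floating-point value as its raw bit pattern and masks out the sign, exponent, and mantissa fields at the widths given above.

```python
import struct

def decompose_single(x):
    # Reinterpret as an IEEE 754 single-precision (32-bit) pattern.
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    sign = bits >> 31                  # 1 bit
    exponent = (bits >> 23) & 0xFF     # 8 bits
    mantissa = bits & 0x7FFFFF         # 23 bits
    return sign, exponent, mantissa

def decompose_double(x):
    # Reinterpret as an IEEE 754 double-precision (64-bit) pattern.
    bits = struct.unpack(">Q", struct.pack(">d", x))[0]
    sign = bits >> 63                  # 1 bit
    exponent = (bits >> 52) & 0x7FF    # 11 bits
    mantissa = bits & 0xFFFFFFFFFFFFF  # 52 bits
    return sign, exponent, mantissa
```

For example, `decompose_single(1.0)` yields a sign of 0, a biased exponent of 127, and a mantissa of 0, since single precision biases the exponent by 127.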
In recent years, as the performance of CPUs has improved, the cost of input and output with respect to disks has become a bottleneck, and it is therefore desirable to increase the amount of information that can be handled in a single input/output operation with respect to a disk. For floating-point numbers prescribed in IEEE 754 as well, compression by a known technique makes it possible to increase the amount of information handled in a single input/output operation with respect to a disk (see Japanese Laid-open Patent Publication No. 08-129479, for example).
However, compression by such a known technique has a problem in that performing the same calculation as before compression is difficult, because the calculation would have to operate on the compressed codes.