Data processing operations often use numbers that can be represented in various formats. For example, digital signal processing (DSP) operations are often specified in mathematical terms using real numbers. For implementation, these real numbers are approximately represented in hardware or software in a format that has more limited range and accuracy. For example, if a number is represented as a fixed point value (as opposed to a floating point value) in binary, the number may be represented in a 16.8 format (16-bit word length with 8 bits to the right of the binary point) or in a 32.8 format, etc. Other representations, such as floating point and logarithm-based formats, have similar tradeoffs between the size of the representation and the range and precision of the numbers that can be represented. Depending on the application, a designer of the logic for the data processing operation can reduce the size of the representation (e.g. reduce the format of the representation from 16.8 to 12.4, etc.) and thereby improve performance (e.g. in speed) and reduce implementation cost (e.g. in die area consumed in an IC). These reductions are specified as type declarations for variables in languages like SystemC, or as format constraints either embedded in the language or in an external constraint file for other languages. In MATLAB, for example, the representation of numbers is constrained through calls to quantizer functions, which take as input the unquantized value and parameters specifying the desired format and conversion method. The result of such a function call is the desired approximation of the input value. Quantizers may use rounding, saturation and truncation to reduce the size of the number representation. Reduced floating point formats are also possible, which use fewer bits for the mantissa and exponent and may also use a higher radix for the exponent.
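The quantization step described above can be illustrated with a minimal sketch. The function below is a hypothetical helper (not MATLAB's actual quantizer API) that approximates a real value in a signed fixed-point word_bits.frac_bits format using the rounding, truncation, and saturation behaviors mentioned above; the parameter names and interface are assumptions for illustration only.

```python
import math

def quantize(value, word_bits, frac_bits, mode="round"):
    # Hypothetical quantizer sketch: approximate `value` in a signed
    # fixed-point format with `word_bits` total bits, of which
    # `frac_bits` lie to the right of the binary point (e.g. 16.8).
    scale = 1 << frac_bits
    if mode == "round":
        q = math.floor(value * scale + 0.5)  # round to nearest
    elif mode == "truncate":
        q = math.floor(value * scale)        # truncate toward -infinity
    else:
        raise ValueError("unknown conversion mode: %s" % mode)
    # Saturate to the representable two's-complement range of the word.
    q_max = (1 << (word_bits - 1)) - 1
    q_min = -(1 << (word_bits - 1))
    q = max(q_min, min(q_max, q))
    return q / scale

# In a 16.8 format the result is the nearest multiple of 1/256:
quantize(3.14159, 16, 8)   # → 3.140625
# Values outside the range saturate rather than wrap:
quantize(200.0, 16, 8)     # → 127.99609375 (the largest 16.8 value)
```

Narrowing the format (e.g. calling with `word_bits=12, frac_bits=4`) trades precision and range for a smaller representation, which is exactly the designer tradeoff described above.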
Often, a designer may use an existing tool to analyze the quality of formats and to choose a desired size format that meets accuracy needs, and this desired size format is treated as an exact specification of the data to be stored in the declared variable. FIG. 1 shows an example of a method in the prior art for using a format directive or type declaration when designing logic, such as an IC or software. In this method, a system (e.g. a computer-aided design system driven by software) receives, in operation 101, a directive specifying one or more formats, such as the size format of data in certain data paths (e.g. inputs or outputs, etc.). The system then treats, in operation 103, the directive as an exact format during the data processing operations; for example, if the format is 16.16 for all outputs, then the largest size output never exceeds the 16.16 size, and this may require saturation or rounding or truncation in the logic to limit outputs to this size (and hence be within the constraint of the target format). In operation 105, the system creates a representation of logic (e.g. a netlist in RTL or compiled software) to perform the data processing operations, where the representation is limited by the constraint of the format. This representation can be used to build logic to perform the data processing operations subject to the constraint. The use of these format directives may be considered a form of word-length optimization, which may improve the performance and reduce the area consumption of DSP implementations in software or hardware by constraining the word sizes (e.g. number of bits) used to represent values. Operations on smaller word sizes are often faster and smaller, so constraining the word sizes through a format directive does sometimes result in a smaller and faster implementation.
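The exact-format treatment of operation 103 can be sketched as follows. This is an illustrative model only, not the method of FIG. 1 itself: a small data processing operation (here a dot product, chosen as an assumed example) accumulates at full precision and then forces every output into the declared format, so no output ever exceeds the constraint.

```python
def to_fixed(value, word_bits, frac_bits):
    # Round to the nearest representable value, then saturate to the
    # signed range of the declared word_bits.frac_bits format.
    scale = 1 << frac_bits
    q = round(value * scale)
    q_max = (1 << (word_bits - 1)) - 1
    q_min = -(1 << (word_bits - 1))
    return max(q_min, min(q_max, q)) / scale

def dot_product(xs, ys, word_bits=16, frac_bits=8):
    # Model of an exact output-format directive: compute at full
    # precision internally, then constrain the output so it never
    # exceeds the declared format (saturating if necessary).
    acc = sum(x * y for x, y in zip(xs, ys))
    return to_fixed(acc, word_bits, frac_bits)
```

A result that fits the format passes through unchanged (e.g. `dot_product([1.0, 2.0], [3.0, 4.0])` yields 11.0 in a 16.8 format), while an out-of-range result is saturated to the format's maximum, mirroring the saturation logic the directive can force into the implementation.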