Many computer systems are required to process large amounts of image or bitmap data. Such computer systems frequently operate with restricted memory resources and, as such, often compress bitmap data to reduce the amount of memory required during processing of the bitmap data.
There are many conventional compression methods that may be applied to bitmap data. The compression method used for particular bitmap data often depends on the attributes of the bitmap data. Some conventional image compression methods use parameters to control how aggressively to compress particular bitmap data. The more aggressive a compression method, the more information is lost to achieve a desired compression ratio. For example, the Joint Photographic Experts Group (JPEG) compression method uses quantisation tables to control how much perceptual information is lost during the compression of a given image.
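The effect of scaling a quantisation table can be illustrated with a minimal sketch. The table excerpt, coefficient values and the `aggressiveness` scale factor below are hypothetical; an actual JPEG implementation uses full 8×8 quantisation tables derived from a quality setting.

```python
# Illustrative sketch: a scaled quantisation table controls information loss.
# Table values and the "aggressiveness" factor are hypothetical, not the
# actual JPEG luminance table.

BASE_TABLE = [16, 11, 10, 16]  # excerpt of a JPEG-style quantisation table

def quantise(coeffs, table, aggressiveness):
    # Larger quantisation steps (table value x aggressiveness) map more
    # coefficients to zero, losing more information but compressing better.
    return [round(c / (t * aggressiveness)) for c, t in zip(coeffs, table)]

coeffs = [-415, 30, -8, 5]          # example transform coefficients
mild = quantise(coeffs, BASE_TABLE, 1)   # retains most detail
harsh = quantise(coeffs, BASE_TABLE, 8)  # zeroes most coefficients
```

With the harsher setting, all but the largest coefficient quantise to zero, which is precisely the information loss traded for the higher compression ratio.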
Typically, the size of the uncompressed bitmap data representing an image is used to determine whether the image will be compressed and, if so, to select a compression method and associated parameters to be used to compress the image. Where the size is not specified, it may be determined from the dimensions of the image (i.e., width and height) in pixels and the number of bits per pixel (bpp). The number of bits per pixel is often referred to as the colour depth or bit depth.
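The size determination described above is straightforward arithmetic; a sketch follows, with the function name being illustrative only:

```python
def uncompressed_size_bytes(width, height, bpp):
    # Total bits = pixels x bits per pixel; round up to whole bytes.
    return (width * height * bpp + 7) // 8

# e.g. a 1024x768 image at a colour depth of 24 bpp
size = uncompressed_size_bytes(1024, 768, 24)  # 2359296 bytes (2.25 MiB)
```

A receiving system could compare such a size against a threshold to decide whether compression is warranted at all, and how aggressive it should be.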
Other image attributes may be used in determining which image compression method to use to compress bitmap data representing an image. Such other image attributes include the type of content (e.g., line art or photographic), rendering target (e.g., screen or printer), and rendering colour space. However, in many instances such other image attributes are not known at the time of compression of particular bitmap data. For example, a streamed data interface often provides very little information about an image represented by streamed bitmap data prior to transferring the bitmap data itself.
A streamed data interface is one in which each individual piece of a particular portion of streamed data is presented to the interface once and only once. Data is presented to a streamed data interface in an order defined by external or otherwise predetermined constraints. As a result, a computer system receiving bitmap data via such a streamed data interface cannot look ahead in the data stream, and must accept data as it is received.
In addition, most streamed data interfaces cannot re-send data once the data has been transferred. An example of such a streamed data interface is a network connection between a desktop personal computer (PC) and a printer, in which the printer receives streamed printing commands and rendering data from the personal computer. However, such streamed data interfaces typically provide width, height and bit depth values for an image prior to transferring bitmap data representing the image. The provision of such width, height and bit depth values allows a computer system receiving such bitmap data to choose a compression method based on the size of the uncompressed bitmap data, and to compress data accordingly as the data is received.
A further difficulty is encountered when determining the compression method parameters if the bitmap data needs to be scaled during the final page render. Depending on the method of interpolation used in the final render, the ‘quality’ of the final image can vary. This difficulty is compounded if the bitmap data representing an image is further reduced in quality during job generation and there are multiple images on the same page.
In one known data compression method, after having been compressed by a first compression scheme that provides quantized transform coefficients, digital information is partially decompressed to recover transform coefficients as the transform coefficients were prior to quantizing. The transform coefficients are then re-quantized at a different compression level. The previously compressed digital information is only partially decompressed and re-quantized to modify its compression level. However, such a method is inefficient due to the need to decompress and then recompress data.
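In outline, the partial-decompress and re-quantise cycle described above might look as follows. The function name and the use of scalar quantisation steps are illustrative; actual schemes operate on blocks of transform coefficients (e.g., 8×8 DCT blocks).

```python
# Sketch of the two-step re-quantisation cycle (illustrative names/steps).

def requantise(quantised_coeffs, old_step, new_step):
    # Step 1: partially decompress -- recover approximate transform
    # coefficients as they were prior to quantising.
    recovered = [q * old_step for q in quantised_coeffs]
    # Step 2: re-quantise at a coarser step for a higher compression level.
    return [round(c / new_step) for c in recovered]

requantise([10, 6, 1], 4, 8)  # the coarser step discards more detail
```

The inefficiency noted above is visible even in this sketch: every coefficient must pass through both steps whenever the compression level changes.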
In one known storage method, images are stored (predominantly in a camera) so that they can be dynamically reduced in quality to free space for further images to be stored. Thus, in such a method, there is a trade-off between the number of stored images and quality. Images are encoded so that the images can be truncated at arbitrary locations, and the size, and hence quality, of an image can be reduced without needing to decode and re-encode. A user can select (on an image-by-image basis) a level to which the image can be truncated. Non-selected images are then truncated to make room for further images. A second threshold can also be set to ensure images retain a minimum level of quality. However, the quality of truncated images is inconsistent.
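Because such an embedded codestream decodes at reduced quality from any prefix, reducing an image's size is a simple truncation. A minimal sketch, with illustrative names, assuming the codestream is a byte sequence:

```python
def truncate_to_budget(codestream: bytes, budget: int, min_size: int) -> bytes:
    # An embedded (truncatable) codestream can simply be cut short;
    # no decode/re-encode cycle is needed to reduce its size.
    # min_size models the second threshold that guarantees a minimum quality.
    return codestream[:max(budget, min_size)]
```

Note that the minimum-quality threshold wins when the requested budget falls below it, so an image is never degraded past the configured floor.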
A method of recovering memory in a rendering system with limited memory resources is also known. In this method, an image compressor stores image data in a series of layers, each layer being responsible for a certain level of visual quality. Compression can be increased efficiently, without re-compression, by discarding the layers containing the least visually significant information. Images are initially stored at maximum quality and are gradually degraded as necessary, releasing enough memory for the rendering system to function, and only up to a maximum acceptable compression level in order to prevent severe visual degradation. In accordance with this known memory recovery method, in a system where several display lists are maintained at any one time, a compression level value is associated with each display list. Image data is removed first from images in the display lists with the lowest compression level value. All images in a display list are compressed to the compression level of the display list, except for images shared between display lists, in which case the higher compression level applies to the shared image.
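The layer-discarding mechanism can be sketched as follows. The class and method names are illustrative, and each layer is modelled simply as a byte string whose length stands in for the memory it occupies:

```python
# Illustrative sketch of layered image storage with layer discarding.

class LayeredImage:
    def __init__(self, layers):
        # layers[0] is the most visually significant; later layers refine it.
        self.layers = list(layers)

    def degrade(self):
        # Discard the least significant remaining layer, freeing its memory,
        # but never discard the base layer (the maximum acceptable level).
        if len(self.layers) > 1:
            return self.layers.pop()
        return None

    def memory_used(self):
        return sum(len(layer) for layer in self.layers)

img = LayeredImage([b"base", b"mid", b"fine"])
img.degrade()  # drops b"fine"; no re-compression is needed
```

Degrading is a constant-time list operation, which is why compression can be increased without the decompress/re-compress cycle of the earlier method.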