1. Field of the Invention
The present invention generally concerns digital data compression and decompression, particularly the compression and decompression of digital data representing images, and still more particularly color images.
The present invention particularly concerns the compression and decompression of digital image, and color image, data in the computer transmission of images, as to a color printer or other recipient, so that (i) bandwidth between the computer and the printer (or other image recipient) may be conserved, while (ii) decompression of the compressed image is sequential, image side to image side (e.g., image top to image bottom), thus (iii) minimizing the memory requirement for an image data buffer in the printer (or other image recipient).
2. Description of the Prior Art
2.1 The Prior Art of Shapiro, of Pearlman, and of Teng and Neuhoff
The present invention will be seen to concern the compressing of digitized image data, particularly as may represent a color image. The disclosed technique of the invention will be seen to have the properties of (i) being efficient in the (typically different types of) memory (variously) required for each of compressing the image, storing the compressed image and decompressing the compressed image; (ii) permitting the printing of a high quality image, especially such as may be in color; and (iii) supporting a sequential decompression of the compressed image from one side of the image to the other, normally from the top of the image to the bottom. The technique is thus very suitable for applications such as printing images with inkjet or laser printers. Images transmitted in accordance with the present invention need not be printed. They can be, for example, transmitted over networks such as the Internet, displayed on video consoles in cars and on picture phones, and used in video conferencing, the main characteristic being that the images are sequentially decompressed, and are normally sequentially displayed in successive parts as decompression transpires.
In so functioning, the technique of the present invention may first be compared to the prior art apparatus and methods described in four (4) existing U.S. patents of Jerome Shapiro.
U.S. Pat. No. 5,315,670 to Jerome M. Shapiro for DIGITAL DATA COMPRESSION SYSTEM INCLUDING ZEROTREE COEFFICIENT CODING assigned to General Electric Company (Princeton, NJ) concerns a data processing system augmenting compression of non-zero values of significant coefficients by coding entries of a significance map independently of coding the values of significant non-zero coefficients. In the system a dedicated symbol represents a zerotree structure encompassing a related association of insignificant coefficients within the tree structure, thereby compactly representing each tree of insignificant coefficients.
The zerotree symbol represents that neither a root coefficient of the zerotree structure nor any descendant of the root coefficient has a magnitude greater than a given reference level. The zerotree structure is disclosed in the context of a pyramid-type image subband processor together with successive refinement quantization and entropy coding to facilitate data compression.
U.S. Pat. No. 5,321,776 to Shapiro for DATA COMPRESSION SYSTEM INCLUDING SUCCESSIVE APPROXIMATION QUANTIZER also assigned to General Electric Company concerns the same data processing system where the zerotree symbol represents that a coefficient is a root of a zerotree if, at a threshold T, the coefficient, and all of its descendants that have been found to be insignificant at larger thresholds, have magnitudes less than threshold T.
U.S. Pat. No. 5,412,741 to Shapiro for an APPARATUS AND METHOD FOR COMPRESSING INFORMATION assigned to David Sarnoff Research Center, Inc. (Princeton, NJ) concerns an apparatus that achieves high compression efficiency in a computationally efficient manner. A corresponding decoder apparatus, and methods, are also disclosed. The technique uses zerotree coding of wavelet coefficients in a much more efficient manner than previous techniques. The key is the dynamic generation of the list of coefficient indices to be scanned, whereby the dynamically generated list only contains coefficient indices for which a symbol must be encoded. This is claimed to be a dramatic improvement over the prior art in which a static list of coefficient indices is used and each coefficient must be individually checked to see whether (i) a symbol must be encoded, or (ii) it is completely predictable. Additionally, using dynamic list generation, the greater the compression of the signal, the less time it takes to perform the compression. Thus, using dynamic list generation, the computational burden is proportional to the size of the output compressed bit stream instead of being proportional to the size of the input signal or image.
Finally, U.S. Pat. No. 5,563,960 to Shapiro for an APPARATUS AND METHOD FOR EMPHASIZING A SELECTED REGION IN THE COMPRESSED REPRESENTATION OF AN IMAGE also assigned to David Sarnoff Research Center, Inc. concerns certain image analysis applications where it is desirable to compress the image while emphasizing a selected region of the image. The invention is a means for allocating more bits, and thus better quality in the decoded image, to the selected region at the expense of other regions of the image. This allows efficient compression of the image for storage or transmission with those regions deemed to be important preserved at high quality and other regions stored with minimal quality to preserve the context of the image.
The technique of the present invention may also be compared with a published paper of Said and Pearlman. See A. Said and W. A. Pearlman, "A new, fast, and efficient image codec based on set partitioning in hierarchical trees," IEEE Transactions on Circuits and Systems for Video Technology, 6(3):243-250, June 1996.
Finally, the technique of the present invention may be compared with a published paper of Teng and Neuhoff of the University of Michigan [hereinafter "Teng and Neuhoff"]. See Chia-Yuan Teng and Dave L. Neuhoff, "Quadtree-guided wavelet image coding," Proceedings DCC '96 (Data Compression Conference 1996, Snowbird, Utah, USA, Mar. 31-Apr. 3, 1996), pp. 406-415.
In the quadtree-guided wavelet compression technique of Teng and Neuhoff only a single level of wavelet decomposition is used; thus there are only 4 subbands. The quantization and encoding of the wavelet coefficients proceeds as follows. The low-low band is divided into blocks of size k by k (typically k is 8 or 16) which are processed in raster-scan order. For each block, the lower-right corner (called the foot) is predicted from the pixel above the block in the same column, and the pixel to the left of the block in the same row. The prediction error is quantized and added to the prediction. Using this foot value, as well as the reconstructed values of neighboring pixels from previously encoded adjacent blocks, the rest of the pixels are predicted using linear interpolation. A quality test then checks how well the interpolation approximates the real pixel values. If the block passes the test, the quantized prediction error for the foot is transmitted using a variable-length code. If not, the block is split into four sub-blocks (a quadtree subdivision), and the same procedure is repeated for those. Coefficients in the high-frequency bands are scalar quantized and run-length encoded in a manner that depends on how many times the corresponding coefficients in the low-low band were subdivided in the quadtree subdivision.
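The foot-and-interpolate quality test above can be sketched as follows. This is an illustrative simplification, not the authors' code: the actual scheme predicts the foot from already-reconstructed neighbors and interpolates from neighboring pixel rows and columns, whereas the sketch below simply fills a block bilinearly from its four corner values (with the quantized foot as the lower-right corner) and applies the pass/split test.

```python
import numpy as np

def bilinear_block(tl, tr, bl, br, k):
    """Fill a k-by-k block by bilinear interpolation from its four corner values."""
    t = np.linspace(0.0, 1.0, k)
    top = tl + (tr - tl) * t                  # interpolated top edge
    bottom = bl + (br - bl) * t               # interpolated bottom edge
    s = np.linspace(0.0, 1.0, k)[:, None]
    return top[None, :] * (1.0 - s) + bottom[None, :] * s

def block_passes(block, tl, tr, bl, foot, max_err):
    """Quality test: does interpolation from the (quantized) foot and the
    three known corners approximate the true pixels within max_err?
    If not, the encoder would split the block into four sub-blocks."""
    approx = bilinear_block(tl, tr, bl, foot, block.shape[0])
    return float(np.max(np.abs(approx - block))) <= max_err
```

A smooth block passes and costs only one quantized foot error; a block containing an edge or texture fails and is quadtree-subdivided, which is what couples the subdivision depth to the run-length coding of the high-frequency bands.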
The present invention will be seen to involve the use of (i) a wavelet hybrid-filtering scheme and (ii) a line-by-line wavelet decoding technique. After the invention has been taught, the patents of Shapiro, and the papers of (i) Said and Pearlman, and of (ii) Teng and Neuhoff, will be re-visited in order that the present invention may be contrasted with this prior art.
U.S. Pat. No. 5,710,835 to Bradley for STORAGE AND RETRIEVAL OF LARGE DIGITAL IMAGES, assigned to the same assignee as is the present invention (The Regents of the University of California, Office of Technology Transfer, Alameda, Calif.), concerns image compression and viewing. The methods of the invention involve (1) performing DWT-based compression on a large digital image with a computer system possessing a two-level system of memory and (2) selectively viewing areas of the image from its compressed representation at multiple resolutions and, if desired, in a client-server environment. The compression of a large digital image I(x,y) is accomplished by first defining a plurality of discrete tile image data subsets Tij(x,y) that, upon superposition, form the complete set of image data I(x,y). A seamless wavelet-based compression process is effected on I(x,y) that is comprised of successively inputting the tiles Tij(x,y) in a selected sequence to a DWT routine, and storing the resulting DWT coefficients in a first primary memory. These coefficients are periodically compressed and transferred to a secondary memory to maintain sufficient memory in the primary memory for data processing. The sequence of DWT operations on the tiles Tij(x,y) effectively calculates a seamless DWT of I(x,y). Data retrieval consists of specifying a resolution and a region of I(x,y) for display. The subset of stored DWT coefficients corresponding to each requested scene is determined and then decompressed for input to an inverse DWT, the output of which forms the image display.
The repeated process whereby image views are specified may take the form of an interaction with a computer pointing device on an image display from a previous retrieval.
When the present invention is later understood, it may beneficially be compared to U.S. Pat. No. 5,710,835 to Bradley [the "Bradley patent"]. In general the Bradley patent gives a way of retrieving portions of a wavelet-coded image in a memory-efficient manner. However, it does not specifically consider full horizontal strips of the image and thus is not as efficient as the present invention will prove to be in that case, even though Bradley can handle that case.
There will be seen to be still other differences between the present invention and the methods of the Bradley patent. First, as is manifest at column 1, lines 32-40, of the Bradley patent, the purpose of his method is (primarily) for viewing large images on screens, and not (primarily) for printing, as the present invention will be seen to be. The object of the present invention will be seen to be to reconstruct an entire image in a "sliding window style" from top to bottom. Bradley's objects are to view one or more pieces of the image. Bradley incurs an amount of overhead memory with each piece of the image viewed. The present invention will be seen to avoid this overhead because the memory devoted to each strip is exploited again in order to decode the next strip.
At column 3, lines 1-2, of the Bradley patent, data retrieval specifies different viewing resolutions. The present invention will be seen not to so function.
At column 6, lines 6-11, of the Bradley patent a routine using "tile dependent boundary conditions" is taught. The technique of the present invention will be seen to be specifically and specially tailored to horizontal strips, and will thus handle the boundary conditions differently.
At column 7, lines 27-30, the Bradley patent teaches storing a description of the tile (i.e., the shape of the region being coded) and a pointer to the pixel values. The present invention does not require any memory for this purpose because the region's shape is implicit: the present invention will be seen to use horizontal strips extending from the far left end to the far right end, an innovation that Bradley did not think of or mention.
At column 7, lines 43-45, of the Bradley patent a wavelet transform is taught that yields more coefficients than the original image. The wavelet transform of the present invention does not; it is a one-to-one mapping that preserves the amount of data.
At column 10, lines 58-59, the Bradley patent states that "[t]he invention is primarily intended for use in an interactive application". The present invention, primarily intended for printers, can also be used for interactive applications.
At column 13, lines 16-18, Bradley periodically compresses certain "sums" (numerical values) and sends them to a secondary memory in order to free up space. The present invention will be seen not to require this.
However, and despite these differences and others, the Bradley patent, in showing area-based image data compression, is close prior art to the present invention.
2.2 Desired Improvements to Image Compression Systems, Especially as Regards the Memory Usage of Wavelet-based Algorithms, Particularly Embedded Zerotree Wavelet (EZW) Coding, and Set Partitioning in Hierarchical Trees (SPIHT)
Consider an image compression system in which the decoder is constrained to have limited memory storage. For example, when a computer (e.g., a PC) sends an image to a printer to be printed, the printer may be unable to store the entire image at one time.
This is usually due to limited electronic memory (i.e., RAM) in the printer, kept small in order to reduce manufacturing costs. A printer might typically be able to buffer only a small number of horizontally scanned image lines at one time. For example, some low-cost printers can only store on the order of 50-100 rows of an image at one time. However, the total number of rows of pixels in an image typically falls in the range of 256-2048. Many PCs today simply transmit to the printer one row at a time in uncompressed format (24 bits per pixel for color). In such a case the printer needs to store only one row at a time.
In certain low-cost computer systems, the transmission line between a computer and a printer can only support a relatively slow transmission rate, typically around 100 kbits/sec. This would impose, for example, about 1 minute of delay time to transmit a 512×512 color image. This motivates the need for data compression that first compresses a digitized image at the computer, then transmits the compressed version of the image to the printer, and finally decompresses the image at the printer, which in turn prints the image.
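The roughly one-minute figure can be checked directly; the back-of-envelope calculation below assumes the 24-bits-per-pixel color format described above.

```python
# Delay to send a 512 x 512, 24-bit-per-pixel color image uncompressed
# over a ~100 kbit/s serial link, as cited above.
width, height, bits_per_pixel = 512, 512, 24
link_rate_bits_per_sec = 100_000

uncompressed_bits = width * height * bits_per_pixel      # 6,291,456 bits
delay_sec = uncompressed_bits / link_rate_bits_per_sec   # about 63 seconds

# At roughly the 100:1 compression ratio discussed later in this
# specification, the same image would move in well under a second.
compressed_delay_sec = delay_sec / 100
```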
There are many good image compression systems in existence today. The best of these typically require a compressed image to be fully decompressed at the printer before any printing of the image can begin. This exceeds the limited available memory in many printers. Thus, an important problem is to find good image compression algorithms whose output can be decompressed in pieces, from the top of an image to the bottom (i.e., as the paper exits the printer), so that incremental printing can be achieved with limited memory usage. With certain compression algorithms, this constraint is already met. For example, in baseline sequential JPEG (part of the international standard for still-image compression), the image is processed in blocks of 8×8 pixels. The blocks are processed in raster-scan order, so the decoder (at the printer) needs to buffer only a single strip of width 8 at any one time. After printing those 8 lines, the decoder can flush them out of memory and work on the next strip.
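The strip-at-a-time decoding loop just described can be sketched as follows. The helper names decode_strip and print_strip are hypothetical stand-ins for the printer's strip decoder and print engine; only one strip of rows is ever resident in the buffer.

```python
def print_sequentially(compressed_strips, decode_strip, print_strip, strip_height=8):
    """Decode and print one strip at a time, top to bottom.

    compressed_strips: the compressed data for each 8-line strip, in
    raster-scan order.  The buffer holding `rows` is reused every
    iteration, so memory use is one strip, not one page.
    """
    for strip_data in compressed_strips:       # raster-scan order, top to bottom
        rows = decode_strip(strip_data)        # decode just these few lines
        assert len(rows) == strip_height
        print_strip(rows)                      # after printing, the buffer is reused
```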
However, many wavelet-based algorithms outperform the JPEG technique by considerable margins, both quantitatively and subjectively. The most prominent of these algorithms is due to Shapiro. See the patents of Shapiro discussed in section 2.1 above. See also J. M. Shapiro, "Embedded image coding using zerotrees of wavelet coefficients," IEEE Transactions on Signal Processing, 41(12):3445-3462, December 1993. This algorithm was later refined by Said and Pearlman. See A. Said and W. A. Pearlman, op. cit.
The Shapiro technique is known as Embedded Zerotree Wavelet (EZW) coding, and the refinement due to Said and Pearlman is known as Set Partitioning in Hierarchical Trees (SPIHT). These acronyms are further used in the specification of the present invention. Wavelet algorithms such as EZW and SPIHT generally require a complete image to be reconstructed before any particular spatial location in an image can be effectively printed by a printer. This requirement is often infeasible for printing, when memory constraints exist. The same memory constraint problem arises with algorithms based on full-frame Discrete Cosine Transforms (DCT). The present invention overcomes this memory problem for wavelet-based compression algorithms such as EZW and SPIHT.
The present invention concerns compressing (and decompressing) digital image data, particularly for transmitting (typically color) images, particularly to (color) inkjet and laser printers. As well as being transmitted for printing, images compressed in accordance with the present invention can be transmitted and displayed, for example, over networks such as the Internet for display on client computers, by radio over the airwaves for display on video consoles in cars and on picture phones, and over telephone lines for use in video conferencing. The main characteristic of image compression/decompression in accordance with the present invention is that images (i) are sequentially decompressed by wavelet-based image encoding, and (ii) are normally sequentially displayed in successive parts as decompression transpires. Although it has heretofore been known to compress, and to decompress, images sequentially, and in parts, it has not been possible to do so by use of highly efficient and desirable wavelet encoding.
The method and system of the invention typically provides a compression ratio of up to approximately one hundred to one (100:1); retains very high image quality; is of low computational complexity; is conservative of bandwidth between a computer and a printer; and uses a very limited amount of electronic memory for image decompression in the printer. The overall effect is to provide a low-cost avenue for speeding up the printing (or other piecewise transmission) of images, particularly as may be in color, on (color) printers, especially of the ink-jet and (color) laser types.
The present invention in particular contemplates two improvements to the normal process of 1) performing a Wavelet Transform (a "WT") on digital image data; 2) compressively encoding the Wavelet Transform (WT) data by application of a wavelet-based encoding algorithm such as, notably, the algorithm of Teng and Neuhoff, or the Embedded Zerotree Wavelet (EZW) coding of Shapiro, or the refinement thereof due to Said and Pearlman which is known as Set Partitioning in Hierarchical Trees (SPIHT); and 3) transmitting the compressively encoded digital image data to an image generator (e.g., a printer) where it is 4) decompressed and displayed (or printed), normally piecewise sequentially.
1. Reordering of EZW- or SPIHT-encoded/compressed Image Data
The first improvement is to add a new step, a step 2a) as it were. In this step the EZW- or the SPIHT-encoded image data is re-ordered. It is this re-ordered data that is then 3) transmitted as a (re-ordered) bitstream to the image generator, or printer.
The purpose of the re-ordering is to compress, transmit, and decompress the lines (or groups of, say, eight lines) of a printable image (i) sequentially and (ii) at top quality, in order that the image may be printed out top to bottom (by lines, or by groups of lines). By this sequential, top-to-bottom, top-quality printing of an image, both (i) net printing time and (ii) printer decoder memory may be saved.
To understand that the "reordering" of the EZW- or SPIHT-encoded image data is not merely a parsing of the image, so as to transmit a first few lines followed by a next few lines, it is necessary to understand how the EZW and SPIHT algorithms work. First, these algorithms are typically used on large images that are some hundreds of pixels or more in each direction. They are so used just because these are the common sizes of digitized images, and also because, if the image is small, say only of size 32×32, then but little net savings of bits can be obtained from bothering with compression at all. Should EZW or SPIHT be used to compress, for example, a small 32×32 image (which would actually require a trivial modification to SPIHT even to make it work on an image that small), then the image quality obtained for a given compression ratio for the small image would not be as good as the quality obtained at the same compression ratio for a big image, since the algorithm would be less efficient. Therefore, it is unsatisfactory to transmit successive sub-images of, say, eight horizontal rows each.
Second, the (un-reordered, normally derived) image data compressed by the EZW or SPIHT algorithm is not successively decompressed over time from one region of the image to another, and particularly not from a top region of the image to a bottom region of the image, but is rather progressively decompressed so as to render the entire area of the image with increasing fidelity. An EZW- or SPIHT-compressed image does not have to be communicated in its entirety before it can commence to be decompressed. Using the EZW or SPIHT algorithms, the decoder can begin decompressing the image right away when it gets just a few dozen bits, or a few hundred bits. But what the decoder obtains at that point is not the top few lines of the image. Instead, the decoder obtains a coarse-resolution, poor-quality version of the entire image. As more bits arrive, the decoder improves the quality over the whole image. So the top few lines (just like all the other lines) are not regenerated at highest quality, suitable to be printed, until the entire bit stream has been communicated.
Therefore, EZW or SPIHT encoding and decoding is progressive; the decoding progressively renders a quality improvement over the entire image field. This type of image encoding/decoding is useful when people are receiving a compressed image and are trying to view it on a monitor. While they are waiting for the whole image to arrive, they still get to look at some low-quality full-frame version on their monitor. This type of progressive image encoding/decoding is common on the Internet.
That is distinctly not what is required for printing (and certain other forms of image transmission such as, for example, the display of maps in cars). For printing it is not useful to initially decode the full-frame image at low quality. Instead, just the top few lines are wanted at highest quality, and then the next few lines, and so on, top-to-bottom within the image.
Accordingly, the first improvement of the present invention is to break apart, and to reorder, the EZW- or SPIHT-encoded (compressed) image data, transmitting data sufficient to print a first group of lines first (typically eight such lines), and printing this first group of lines even while a second group of lines is being transmitted, then printing this second group of lines while a third group of lines is being transmitted, and so on. Clearly the overlap between printing (or displaying) and transmission means that the transmission delay is not so onerous, nor is the memory requirement of the printer as large, as heretofore.
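The reordering can be pictured as a stable regrouping of the embedded bitstream's coded units by the first strip that needs them. The sketch below is illustrative only; coded_units and strip_of are hypothetical names, and in the actual scheme the units would be the EZW/SPIHT symbols for particular coefficient trees.

```python
def reorder_bitstream(coded_units, strip_of):
    """Regroup an embedded bitstream strip by strip.

    coded_units: list of (unit_id, bits) pairs in original embedded order.
    strip_of(unit_id): index of the first (topmost) image strip that needs
    this unit.  Python's sort is stable, so each strip's data is grouped
    together while the embedded order *within* a strip is preserved; every
    strip can thus be decoded at full quality as soon as its group arrives.
    """
    return sorted(coded_units, key=lambda unit: strip_of(unit[0]))
```

With the stream so ordered, the printer can decode and print strip 0 while strip 1's data is still in transit, which is exactly the transmit/print overlap described above.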
2. Filtering With a Wavelet Transform (WT) Where (i) the Number of Decomposition Levels in the Vertical Direction is Less Than the Number of Decomposition Levels in the Horizontal Direction, and/or (ii) the Filter Lengths Employed in the Vertical Direction are Shorter Than Those Used in the Horizontal Direction
Alas, although the image transmitted is highly compressed (as is a function of the excellence of both (i) wavelet-based encoding and (ii) zerotree compression algorithms, neither of which was invented by the present inventors), and may be quickly sequentially decoded in a line-by-line fashion in but a modestly sized decoder memory, and stepwise printed line-by-line top-to-bottom in a printer, the first improvement of the present invention taken alone does not represent the ultimate image compression that can be realized.
The second improvement of the present invention deals with the filtering employed in the Wavelet Transform (the "WT"). This filtering is primarily characterized both by the number of levels of decomposition, and by the length of the filters used. The second improvement of the present invention is this: the number of decomposition levels, as well as the length of the filters, need not be, and, indeed, should not be, the same for (i) the horizontal and (ii) the vertical directions in the encoding (compressing) of digital image data that is to be printed. In particular, the number of decomposition levels in the vertical direction may beneficially be less than the number of decomposition levels in the horizontal direction, and the filter lengths employed in the vertical direction may beneficially be shorter than the ones used in the horizontal direction. Additionally, different filter lengths can be used at different levels of the decomposition, so that, for example, the vertical and horizontal directions may use filters of equal length for the first few levels of decomposition, and one may switch to using shorter filters for the vertical direction alone for the remaining levels of decomposition.
A shorthand way to regard this second improvement of the present invention is as follows. Let B represent a short filter such as the Haar wavelet filter of length 2. Let C represent a longer filter such as the Daubechies wavelet filter of length 8.
The filtering employed in a traditional wavelet/subband coding (compression) approach can be expressed, for example, as WT(6C,6C) where the first 6C indicates that the horizontal direction is getting filtered with the length-8 filter for 6 decomposition levels, and the second 6C indicates that the vertical direction is getting filtered with the length-8 filter for 6 decomposition levels. A less complex traditional filtering could be indicated by WT(4B,4B) where both directions get the same short filter and smaller number of decomposition levels. In the prior art, filtering expressed this way would always be of the form WT(h,v) where h=v; the horizontal and vertical directions are treated in like manner.
In accordance with the present invention, filters are employed where h and v differ in a number of ways. For example, WT(6C,6B) means that the vertical direction uses a shorter filter. WT(6C,4C) means that the vertical direction uses fewer levels of decomposition, but the same filter length. WT(6C,3C+3B) means that the vertical direction uses 3 levels of decomposition with the longer filter followed by 3 with the shorter filter; the horizontal direction uses the same total number of decomposition levels, but only with the longer filter. A final example is WT(6C,2C+3B), where the vertical direction differs both in the filter lengths and in the total number of decomposition levels.
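A separable transform with different horizontal and vertical level counts can be sketched as below. For brevity the sketch uses the length-2 Haar filter ("B" in the shorthand above) in both directions, so it illustrates, say, WT(3B,1B) rather than the longer length-8 "C" filter; wt_asymmetric and haar_rows are our own illustrative names, not terms from this specification.

```python
import numpy as np

def haar_rows(a):
    """One analysis level of the length-2 Haar filter along rows:
    orthonormal sums (low band) and differences (high band), packed
    as the left and right halves of the output."""
    lo = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2.0)
    hi = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2.0)
    return np.hstack([lo, hi])

def wt_asymmetric(image, h_levels, v_levels):
    """WT(h, v) with the Haar filter in both directions.

    The prior-art case is h_levels == v_levels; the invention allows
    v_levels < h_levels, after which decomposition continues in the
    horizontal direction only.  The transform is one-to-one: the output
    holds exactly as many coefficients as the image has pixels.
    """
    out = image.astype(float).copy()
    rows, cols = out.shape
    for level in range(max(h_levels, v_levels)):
        if level < h_levels:                       # filter current band along rows
            out[:rows, :cols] = haar_rows(out[:rows, :cols])
        if level < v_levels:                       # ... and along columns
            out[:rows, :cols] = haar_rows(out[:rows, :cols].T).T
        if level < h_levels:
            cols //= 2                             # recurse into the low band(s)
        if level < v_levels:
            rows //= 2
    return out
```

Because each Haar step is orthonormal, the transform preserves the image's energy, and because the output array is the same size as the input, no extra coefficient storage is required.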
The benefit so realized by reducing (i) levels of decomposition and/or (ii) filter length is that, especially for printed images, the amount of decoder memory is reduced while the quality of the decompressed printed image at a given bit rate does not suffer appreciable deterioration.
The aggregate effect of both improvements depends upon the operational computer image processing and printing (or color printing) environment in which the improvements are deployed. Not all printers are limited in their printing speed by their internal page image decoding operations, nor by the speed of their receipt of data from a computer; some are instead substantially mechanically limited in their speed of printing. Future computer I/O peripheral channels and busses, such as the fiber optic channel, may make it tenable to communicate voluminous image data with less, or no, compression. However, at the present time, some color ink jet printers (i) incur a significant proportion of their overall cost of manufacture in their requirement for (typically semiconductor) memory, and, with common use of an inexpensive industry-standard "serial" or "parallel" I/O interface, (ii) devote a significant portion of the overall printing time to receiving and interpreting image data. The present invention permits a printer constructed at less expense than heretofore, and using the same inexpensive and unimproved channel interface as heretofore, to receive, decode and print image data at equal quality at printing speeds about twice as fast as heretofore.
3. Re-ordering Wavelet-encoded Image Data
Therefore, in one of its aspects the present invention will be recognized to be embodied in an improvement to a stepwise method of (i) wavelet-encoding an image of finite horizontal and vertical extent into encoded data representative of the image, (ii) communicating the wavelet-encoded image data to an image generator, (iii) decoding the wavelet-encoded image data in the image generator, and (iv) generating the image from the decoded image data.
The improvement, interposed between the steps of the (i) wavelet-encoding and the (ii) communicating, consists of a re-ordering of the wavelet-encoded image data so that the (ii) communicating is of successive portions of the wavelet-encoded image data representing successive vertical portions of the image, so that the (iii) decoding is of these successively communicated portions, and so that the (iv) image generating is of successive portions of the image from top to bottom.
The (iii) decoding of the re-ordered image data portions, and the (iv) image generating of the image portions associated therewith, may thus be beneficially time-overlapped, certain vertical portions of the image being generated even before later image portions are communicated. Moreover, this entire process ensues even while highly efficient and effective wavelet encoding is employed.
The (i) wavelet-encoding is preferably with (a) the Teng and Neuhoff algorithm, or (b) an embedded zerotree algorithm. If with (b) an embedded zerotree algorithm, it is more preferably with the Embedded Zerotree Wavelet (EZW) or the Set Partitioning in Hierarchical Trees (SPIHT) algorithms. The re-ordering would then consist of rearranging the order of an EZW or a SPIHT bitstream that is an object of the communicating.
More generally, the re-ordering preferably consists of a line-by-line reordering of the wavelet coding by determining each of (i) the first minimum set of wavelet coefficients that must be received by the generator in order to generate one single portion of the image, (ii) a next minimum set of additional wavelet coefficients that must be communicated so that a next successive portion of the image can be printed, and (iii) how much of the current encoded data can be expunged from a memory within the image generator.
The re-ordering may alternatively be characterized as a sliding window in the spatial domain of the wavelet-encoded image, in which sliding window (i) wavelet coefficients enter the window, (ii) are used for a few rounds of inverse filtering to permit the generating of a few successive vertical portions of the image, and then (iii) exit the window.
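The memory advantage of this sliding window, and of shorter vertical filters and fewer vertical levels, can be illustrated with a rough model of our own devising (an approximation, not a formula from this specification): each vertical inverse-filtering level roughly doubles the rows in flight and adds an overlap proportional to the filter length.

```python
def buffered_rows(filter_len, levels):
    """Rough estimate (our own approximation) of the rows a sliding-window
    decoder must hold to emit finished output rows after `levels` vertical
    inverse-filtering rounds with synthesis filters of length `filter_len`."""
    rows = 1
    for _ in range(levels):
        rows = 2 * rows + (filter_len - 2)  # upsample by 2, plus filter overlap
    return rows

# In the WT(h,v) shorthand: a 6-level, length-8 vertical decomposition ("6C")
# demands far more buffered rows than a 4-level, length-2 one ("4B"), which
# is why the vertical direction is the one to economize for strip printing.
```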
In accordance with the second aspect of the invention, the method preferably still further includes, at a time before the (i) wavelet-encoding, filtering the image data with a mathematical filter applied in both horizontal and vertical spatial directions of the image with a different number of levels of decomposition in each direction and/or filtering the image data with a mathematical filter having a different length in each of the horizontal and vertical spatial directions.
4. Spatially Differentially Filtering Image Data
In another of its aspects the present invention will be recognized to be embodied in an improvement to a method of (i) filtering digitized image data of an image having finite horizontal and vertical dimensions to produce filtered data representative of the image, (ii) encoding the filtered image data into encoded data representative of the image, (iii) communicating the encoded image data to an image generator, (iv) decoding the encoded image data in the image generator, and (v) generating the image from the decoded image data.
The improvement is to filter with (i) a filtering algorithm applied in both horizontal and vertical spatial directions of the image with a different number of levels of decomposition in each direction and/or with (ii) a filter having a different length in each of the horizontal and vertical spatial directions.
When a different number of decomposition levels are used, then the number of decomposition levels in the horizontal spatial direction is normally greater than the number of decomposition levels in the vertical spatial direction. When a differential filter length is used, then the length of the filter in the horizontal spatial direction is normally greater than the length of the filter in the vertical spatial direction. (The opposite can be true: the number of decomposition levels can be greater in the vertical than in the horizontal spatial direction; the length of the differential filter can be longer in the vertical spatial direction than in the horizontal. Such would be the case for, by way of example, a printer that prints sideways, as many of 11″×17″ capacity do when printing 8½″×11″ (e.g., the Hewlett Packard Model 4V).)
This second aspect of the present invention may beneficially be combined with the first aspect. Namely, a re-ordering of the encoded image data may transpire so that the (iii) communicating is of successive portions of the encoded image data representing successive vertical portions of the image, so that the (iv) decoding is of these successively communicated portions, and so that the (v) image generating is of successive portions of the image from image top to image bottom.
The simultaneous (i) communication bandwidth conservation, and (ii) decoder memory economy, of the present invention are becoming even more important as high-color-resolution color images, typically represented by up to twenty-four (24) bits per pixel, are rendered at ever higher resolutions. Every doubling of resolution in a linear direction multiplies the image data by a factor of four (×4), motivating ever more, and better, (i) compression and (ii) time management of the data compression, communication, and decompression processes.