Data compression signal processing mechanisms are used in conjunction with a variety of imagery data (e.g. video, facsimile data) and for both color and black-and-white applications. One of the more commonly used encoding schemes is the so-called block coding technique, which separates a given picture into a matrix of image blocks, each of which is, in turn, subdivided into a plurality of pixels. One type of block encoding mechanism typically employed for imagery data compression is the block truncation coder, which operates on the subdivided blocks of data that make up the image and, for each block, generates a pair of threshold values "a" and "b", used in the data reconstruction process, together with an accompanying bit map "m" of binary data. The bit rate per block of data is defined as the ratio of the total number of transmitted bits (here, the sum of the number of bits that constitute the bit map "m" and the threshold values "a" and "b") to the number of pixels in the block. For a given grey level resolution of the threshold values, the bit map "m" therefore constitutes a large overhead that governs the extent to which the bit rate can be compressed. Consequently, if a lower bit rate is to be achieved, the bit map must be encoded to a smaller size.
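The block truncation coding scheme described above can be sketched as follows. This is a minimal illustrative implementation, not the coder of any particular cited reference; the common moment-preserving choice of "a" and "b" (which preserves the block mean and standard deviation) and the 4x4 block size are assumptions for the example.

```python
import numpy as np

def btc_encode(block):
    """Encode one image block with block truncation coding (BTC).

    Returns the threshold values (a, b) and the binary bit map m.
    """
    mu = block.mean()
    sigma = block.std()
    m = block >= mu            # bit map: 1 where the pixel is at/above the mean
    n = block.size
    q = int(m.sum())           # number of pixels above the mean
    if q == 0 or q == n:       # uniform block: a == b
        return mu, mu, m
    # Moment-preserving thresholds: the reconstructed block keeps the
    # original block's mean and standard deviation.
    a = mu - sigma * np.sqrt(q / (n - q))
    b = mu + sigma * np.sqrt((n - q) / q)
    return a, b, m

def btc_decode(a, b, m):
    """Rebuild the block: each pixel becomes a or b according to the bit map."""
    return np.where(m, b, a)

# A 4x4 block with 8-bit thresholds costs (16 + 8 + 8) / 16 = 2 bits per pixel;
# the 16-bit map is the dominant share, hence the incentive to compress it.
block = np.array([[ 12, 200,  15, 190],
                  [ 14, 210,  16, 205],
                  [ 13, 198, 220,  11],
                  [207,  10, 202,  12]], dtype=float)
a, b, m = btc_encode(block)
rebuilt = btc_decode(a, b, m)
```

Because "a" and "b" are chosen to preserve the first two sample moments, the reconstructed block has the same mean and standard deviation as the original even though every pixel is forced to one of only two levels.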
One proposal for reducing the size of the transmitted bit map, described in an article entitled "Multilevel Graphics Representation Using Block Truncation Coding" by O. R. Mitchell et al., Proceedings of the IEEE, Vol. 68, No. 7, July 1980, involves transmitting only a portion of the bits of the map, with reconstruction of the image being accomplished according to a fixed logical rule using a look-up table. Unfortunately, this technique is not adaptive to changing image statistics; i.e., the logical look-up table is fixed in the reconstruction unit for all classes of input data.
A second approach, described in an article by G. R. Arce et al., entitled "BTC Coding Using Median Filter Roots", IEEE Transactions, Vol. COM-31, No. 6, June 1983, is based upon the use of median filter roots: the bit map is transformed into a two-dimensional median filter root signal, which is then encoded using a trellis code, making the mechanism impractical from a hardware implementation standpoint.
A third scheme, described in an article by V. R. Udpikar et al., entitled "BTC Image Coding Using Vector Quantization", IEEE Transactions, Vol. COM-35, No. 3, March 1987, is based upon vector quantization (to be described in detail below) of the bit map. However, the method is not adaptive to the input data and depends upon the use of a predetermined training sequence to derive an optimum vector code set, which is then fixed with respect to the data being processed.
Vector quantization is a mechanism for mapping a sequence of discrete input vectors into a smaller number of output vectors, in order to reduce the number of bits required to represent the input vectors. For example, a bit map of binary digits ("1"s and "0"s) representative of an array of x by x pixels has an input vector set of 2^(x²) possible values. The code book or look-up table for any class of input data is derived from a predetermined training sequence of input vectors. The optimum code book, whose design algorithm is described in an article by Y. Linde et al., entitled "An Algorithm for Vector Quantizer Design", IEEE Transactions, Vol. COM-28, No. 1, Jan. 1980, is the one that yields the smallest difference or least distortion between the training sequence of vectors and the reproduced vectors. Unfortunately, the code book must operate on data outside the training set (i.e. with images that were not used for the code book design), thereby making system performance dependent upon the structure of a predetermined look-up table. As a result, images are characterized into broad categories which average out the individual variations within the image. Individual areas of an image, at edges for example, may deviate from these categories and are therefore processed with larger errors.
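The vector quantization of binary bit maps can be illustrated with the following sketch. It applies one simplified refinement loop in the style of the Linde-Buzo-Gray design algorithm (nearest-codeword assignment under Hamming distance, then a majority-vote codeword update); the full published algorithm, including its codebook-splitting initialization, is not reproduced here, and the function names and the tiny 2x2 bit-map example are assumptions for illustration.

```python
import numpy as np

def train_codebook(training_maps, codebook, iters=10):
    """Simplified LBG-style codebook refinement for binary bit-map vectors.

    training_maps: (N, d) array of 0/1 vectors (the training sequence).
    codebook:      (K, d) array of initial 0/1 codewords.
    Each pass assigns every training vector to its nearest codeword
    (Hamming distance) and replaces each codeword by the majority vote
    of the vectors assigned to it.
    """
    training_maps = np.asarray(training_maps)
    codebook = np.asarray(codebook).copy()
    for _ in range(iters):
        # Hamming distance from every training vector to every codeword.
        dist = (training_maps[:, None, :] != codebook[None, :, :]).sum(axis=2)
        nearest = dist.argmin(axis=1)
        for k in range(len(codebook)):
            members = training_maps[nearest == k]
            if len(members):
                codebook[k] = (members.mean(axis=0) >= 0.5).astype(codebook.dtype)
    return codebook

def quantize(bit_map, codebook):
    """Map a bit map to the index of its nearest codeword."""
    dist = (codebook != bit_map).sum(axis=1)
    return int(dist.argmin())

# Example: 2x2 bit maps (d = 4); transmitting a codeword index of
# log2(K) bits replaces the d-bit map itself.
training = np.array([[0, 0, 1, 1], [0, 1, 1, 1], [1, 1, 0, 0], [1, 1, 1, 0]])
codebook = train_codebook(training, np.array([[0, 0, 1, 1], [1, 1, 0, 0]]))
index = quantize(np.array([0, 1, 1, 1]), codebook)
```

The compression comes entirely from transmitting the index: with K codewords, each bit map costs log2(K) bits regardless of its dimension d, at the price of the distortion between the bit map and its nearest codeword.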
This problem can be reduced to some extent by classifying the image into different input sets, processed separately using different code books derived, in turn, from different training sets, as described in an article by B. Ramamurthi et al., entitled "Classified Vector Quantization of Images", IEEE Transactions, Vol. COM-34, No. 11, Nov. 1986.
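A minimal sketch of the classified approach follows. The two-class "edge"/"shade" split, the variance threshold, and all function names are illustrative assumptions, not the classifier of the cited article; the point is only that a per-class codebook is selected before quantization.

```python
import numpy as np

def nearest(codebook, vec):
    """Index of the codeword closest to vec in Hamming distance."""
    return int((codebook != vec).sum(axis=1).argmin())

def classified_quantize(block, bit_map, codebooks, threshold=30.0):
    """Pick a codebook by a crude block class, then vector-quantize the bit map.

    codebooks: dict mapping class name -> (K, d) array of binary codewords.
    The variance threshold and the two-class split are assumptions for
    illustration; a real classifier would be trained per class.
    """
    cls = "edge" if block.std() > threshold else "shade"
    return cls, nearest(codebooks[cls], bit_map)

# Example: separate codebooks for high-activity and smooth blocks.
codebooks = {"edge": np.array([[0, 1, 1, 0], [1, 0, 0, 1]]),
             "shade": np.array([[0, 0, 0, 0], [1, 1, 1, 1]])}
block_hi = np.array([[0.0, 255.0], [255.0, 0.0]])
bm_hi = (block_hi >= block_hi.mean()).astype(int).ravel()
cls_hi, idx_hi = classified_quantize(block_hi, bm_hi, codebooks)
```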