Prior General-Purpose Compaction Schemes
Many approaches have been devised to efficiently compact a user's source data by recognizing redundant information in the source message. Some schemes, such as Huffman coding, work on a character-by-character basis and define an encoded representation that assigns the fewest bits to the most commonly used characters. However, the message cannot be decoded unless either the statistical probabilities of the different characters are known a priori to the decoder, or the message contains a "dictionary" that discloses the specific character-to-bit assignments that were used. Neither approach is compatible with bar coding applications or radio frequency identification (RFID) applications.
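The character-frequency principle can be illustrated with a minimal Huffman-coding sketch (the message and all identifiers here are hypothetical, for illustration only). It builds the code table from the message's own character frequencies, which is precisely why a decoder needs the same table, or the same frequencies, to recover the message:

```python
import heapq
from collections import Counter

def huffman_codes(message):
    """Build a Huffman code table from the character frequencies of `message`.

    The most frequent characters receive the shortest bit strings. A decoder
    cannot recover the message without this same table (or the frequencies
    used to build it) -- the drawback noted above.
    """
    freq = Counter(message)
    # Heap entries are (frequency, tie-breaker, node); a node is either a
    # single character or a (left, right) pair of subtrees.
    heap = [(f, i, ch) for i, (ch, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        # Repeatedly merge the two least-frequent subtrees.
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, count, (left, right)))
        count += 1

    codes = {}
    def walk(node, prefix):
        if isinstance(node, tuple):
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:
            codes[node] = prefix or "0"  # degenerate one-symbol message
    walk(heap[0][2], "")
    return codes

codes = huffman_codes("this is an example of huffman coding")
# Frequent characters (e.g. the space) receive codes no longer than
# those of rare characters (e.g. 'x').
```

The resulting codes are prefix-free, so a bit stream can be decoded unambiguously, but only by a party holding the same code table.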
Many other schemes, such as those used in file-compression programs like PKZip, inherently include a "dictionary" of compressed strings that is built up as each file is analyzed for compression. However, a very short message (such as a bar-coded serial number) seldom contains enough substring redundancy for this approach to work.
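The shortfall on short messages is easy to demonstrate with a standard deflate-based compressor (the serial number below is a hypothetical example; zlib is used here as a stand-in for the same class of dictionary-building compression that PKZip employs). The stream's header, checksum, and bookkeeping outweigh any savings on a message this short:

```python
import zlib

# Hypothetical bar-coded serial number: too short to contain the repeated
# substrings a dictionary-based compressor needs.
serial = b"SN-4815162342"
compressed = zlib.compress(serial, 9)

# The "compressed" output is actually longer than the input.
print(len(serial), len(compressed))
```

The same input embedded in a large file full of repeated strings would compress well; in isolation it expands, which is why these schemes fail for bar-coded messages.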
These general-purpose compression schemes are not well-suited to the unique problem faced by the bar code industry, which is to encode relatively short messages (of nearly random character content) in the smallest possible physical space.