Document retrieval systems typically apply sophisticated processing algorithms to a document that has been captured in digital form. For example, some algorithms involve thresholding the data to remove background areas of the document, or generating histograms for exposure control. Any non-uniformity in the retrieval system will affect the captured data and may cause these processing algorithms to make false decisions.
One well-known and well-defined source of non-uniformity is introduced by using a lens to focus the image onto the image capture device. The resulting non-uniformity is characterized by the "cos⁴" law, which is well known in the optical art.
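As an illustrative sketch (not part of the patents discussed here), the cos⁴ falloff can be expressed as a simple function of the off-axis field angle; the function name is an assumption.

```python
import math

def relative_illumination(field_angle_deg):
    """Relative image-plane illumination under the cos^4 law.

    field_angle_deg: off-axis angle of the image point as seen from
    the lens (0 = optical axis). On-axis light is unattenuated.
    """
    theta = math.radians(field_angle_deg)
    return math.cos(theta) ** 4

# A point 30 degrees off-axis receives cos^4(30 deg) = 9/16 of the
# on-axis illumination.
print(relative_illumination(0))             # 1.0
print(round(relative_illumination(30), 2))  # 0.56
```

Real multi-element lenses deviate from this ideal, as the next paragraph notes, but the deviation is repeatable and can be calibrated out.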
Complex lenses, which may consist of multiple lens elements, apertures, and the like, may not directly obey the above-mentioned law, but will have a characteristic that is repeatable from lens to lens during the manufacturing process.
Non-uniformities that are not well defined may also arise from other sources. Some examples are CCD pixel sensitivity variation, which may be on the order of 5-10%, and spot non-uniformities in an illumination source, such as those caused by a lamp filament.
Many methods have been previously disclosed which correct for the combined effects of illumination system non-uniformity and CCD pixel sensitivity variation.
U.S. Pat. No. 4,392,157, granted in the name of Garcia et al., discloses a technique whereby a two-dimensional scan region is divided into groups of pixels. Those groups whose responses deviate more than a certain amount from a mean value are corrected. The location of each group to be corrected is encoded as a distance from the last group that was corrected. This method conserves memory space, but the overall correction may be somewhat coarse if the memory is small. Conversely, the memory requirement becomes large as finer correction resolution is applied.
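A minimal sketch of this kind of delta-encoded correction table follows; the function name, tolerance, and data are illustrative assumptions, not the patent's actual implementation.

```python
def encode_corrections(group_means, tolerance=0.05):
    """Store corrections only for groups whose calibration response
    deviates from the overall mean by more than `tolerance`, each
    entry encoded as (distance from the previously corrected group,
    correction factor). Sketch in the spirit of the technique
    attributed to Garcia et al.
    """
    mean = sum(group_means) / len(group_means)
    table = []
    last = -1  # index of the previously corrected group
    for i, g in enumerate(group_means):
        if abs(g - mean) / mean > tolerance:
            # Multiplying the group response by this factor
            # restores it to the mean level.
            table.append((i - last, mean / g))
            last = i
    return table

# Only the dip at group 3 and the hot spot at group 7 are stored;
# the eight near-mean groups cost no memory.
table = encode_corrections(
    [1.0, 1.0, 1.0, 0.8, 1.0, 1.0, 1.0, 1.25, 1.0, 1.0])
print(table)
```

The memory saving depends on most groups being near the mean; as correction resolution increases, more groups exceed the tolerance and the table grows, which matches the trade-off described above.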
U.S. Pat. No. 4,343,021, in the name of Frame, discloses a technique whereby the two-dimensional scan area is again divided into an arbitrary number of elements. A correction factor is determined for each element during a calibration scan and used to correct the response of each element during subsequent scans. This method again suffers from a memory requirement that grows as the correction resolution increases.
U.S. Pat. No. 3,902,011, to Pieters et al., discloses a technique whereby a number of spaced-apart points in the two-dimensional scan area are sampled and a correction factor is determined for each point. During subsequent scans, the stored factors are used in conjunction with interpolation to arrive at a better correction factor for each pixel. Interpolation gives a better correction than that disclosed by Frame for a given memory size; however, if a still better match to the actual non-uniformity is required, the memory size could again become very large.
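The interpolation idea can be sketched in one dimension as follows; the function name and the use of linear interpolation are assumptions for illustration only.

```python
def interpolated_factor(samples, x):
    """Correction factor at pixel position x, linearly interpolated
    between spaced-apart calibration points, in the manner of the
    technique attributed to Pieters et al. `samples` is a list of
    (position, factor) pairs sorted by position. (1-D sketch only.)
    """
    for (x0, f0), (x1, f1) in zip(samples, samples[1:]):
        if x0 <= x <= x1:
            t = (x - x0) / (x1 - x0)
            return f0 + t * (f1 - f0)
    raise ValueError("x outside calibrated range")

# Two stored factors serve every pixel between them.
print(interpolated_factor([(0, 1.0), (10, 1.2)], 5))
```

Each pixel between two sample points gets its own interpolated factor, so fewer stored factors are needed than with one factor per element, which is the memory advantage described above.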
The prior art as exemplified by the aforementioned three patents has illustrated various techniques that attempt to reduce the amount of memory required for storing correction factors. These techniques all suffer from the fact that the number of required memory locations increases as the square of the linear resolution. For example, a system with a resolution of 10 elements along both the X and Y scan directions would require 100 memory locations (10 × 10). If the X and Y scan resolutions were doubled to 20 elements, the memory requirement would increase to 400 locations (20 × 20), a four-fold increase! If the X and Y scan resolutions were again doubled to 40 elements, the memory requirement would increase to 1600 locations (40 × 40), a sixteen-fold increase!
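The quadratic growth in the example above can be verified with a trivial calculation; the function name is illustrative.

```python
def memory_locations(x_elements, y_elements):
    """Memory locations needed when one correction factor is stored
    per element: the requirement grows as the square of the linear
    resolution.
    """
    return x_elements * y_elements

print(memory_locations(10, 10))  # 100
print(memory_locations(20, 20))  # 400  (4x the 10x10 case)
print(memory_locations(40, 40))  # 1600 (16x the 10x10 case)
```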