To be processed by a computer system, images must first be converted into a computer-compatible data format. In digital image processing this conversion is called digitizing: the original image data are transformed into a computer-conforming data format, and the result is available for processing as a two-dimensional or multidimensional function. When the picture is taken, a continuous scene is spatially discretized. One possible mathematical description of digitized image data uses the notation of image matrices. The image S (the scene S) is a rectangular matrix (image matrix) S=(s(x, y)) with image rows and image columns, where x is the row index and y is the column index. The image point (pixel) at a location (row, column)=(x, y) carries the gray value s(x, y); elemental regions of the scene are thus each imaged as one pixel of the image matrix. Digitizing the image data requires a rastering (gridding, scanning) and a quantizing. In the rastering, the image to be digitized is subdivided into area segments of the raster by superimposing a rectangular or square grid on the image. In the quantizing, each area segment of the raster is assigned a gray value s(x, y) from a gray scale G. This gray value can be determined point by point or by averaging over the raster area.
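The rastering and quantizing described above can be sketched as follows. This is an illustrative example only, not part of the invention; the continuous scene function and all parameters are hypothetical, and sampling is done point by point at the raster-cell centers.

```python
# Illustrative sketch of digitizing: rastering (sampling on a grid) and
# quantizing (assigning each raster cell a gray value from a scale G).

def scene(u, v):
    """Hypothetical continuous scene: brightness in [0, 1)."""
    return (u * v) % 1.0

def digitize(rows, cols, levels):
    """Raster the scene into a rows x cols image matrix S = (s(x, y))
    and quantize each pixel to one of `levels` gray values."""
    S = []
    for x in range(rows):            # row index x
        row = []
        for y in range(cols):        # column index y
            # rastering: point-by-point sampling at the cell centre
            u = (x + 0.5) / rows
            v = (y + 0.5) / cols
            brightness = scene(u, v)
            # quantizing: map brightness to the gray scale G = {0..levels-1}
            row.append(min(int(brightness * levels), levels - 1))
        S.append(row)
    return S

S = digitize(4, 4, 256)
```

Averaging over the raster area, mentioned as the alternative to point-by-point determination, would replace the single centre sample with a mean over several samples per cell.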
To acquire digital image data, CMOS cameras are in many cases used in addition to CCD cameras. These cameras are widely employed in science and industry, for example to record crash tests and to monitor rapid technological processes in production. CMOS (Complementary Metal Oxide Semiconductor) cameras have, by comparison to CCD image sensors, a higher brightness dynamic range as well as higher permissible operating temperatures. The light particles (photons) impinging on the photodiodes of the CMOS camera are converted into electric currents. The light-sensitive photodiodes are each associated with a plurality of transistors. The CMOS camera pixels determine their gray values (signal) from the actual photo current of the photodiodes. Each pixel can be individually read out and evaluated. This permits random access to the respective image region of interest and is of special advantage in industrial image processing. With the aid of CMOS cameras, very high image rates can be achieved (extreme time magnification). The access times for individual pixels are then necessarily very short, that is, only a very short time is available for the actual photo current.
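The derivation of a gray value from the photo current can be sketched with the logarithmic response commonly attributed to CMOS sensors (and referred to below as the "usual logarithmic treatment"). This is an assumed, simplified model, not taken from the source; the dark-current reference, gain, and gray-scale range are hypothetical parameters.

```python
import math

# Assumed model (illustrative only): a CMOS pixel maps its photo current to
# a gray value through a logarithmic characteristic, which compresses a very
# large brightness range into a limited gray scale.

def gray_value(photo_current, i_dark=1e-12, gain=20.0, max_gray=255):
    """Map a photo current (in amperes) to a gray value.
    i_dark and gain are hypothetical sensor parameters."""
    g = gain * math.log10(photo_current / i_dark)
    return max(0, min(max_gray, round(g)))
```

With these assumed parameters, a millionfold increase in photo current raises the gray value by only 6 x gain steps, which illustrates the high brightness dynamic range of such sensors.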
At high contrast and with moving or rapidly changing objects, at high image rates the not yet decayed strong currents from previously bright image regions tend to dominate the later, darker signal regions. The "parasitic" capacitances contained in the pixel circuit (see [4] (T. Seiffert, Measurement Processes and Parameters for Estimating the Dynamic Contrast Resolution Properties of Electronic Cameras, Diploma Dissertation, Karlsruhe University (TH), 2001), heading 8.3) give rise to a blurring in time of the pixel signal. In the context of the present invention, this effect is referred to as the capacitive afterglow effect. The very high gray value resolution of the camera, which results from the usual logarithmic treatment, is thus greatly reduced, or highly error-prone values are supplied. If a bright signal moves over a relatively dark background, for example a weld point over a sheet metal workpiece, a tail is formed behind it (compare [4], page 37). This tail overlies the dark background. In the welding of sheet metal, for example, the weld seam must be monitored directly behind the weld point, and it is therefore necessary to wait until the tail has disappeared from the seam region.
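The blurring in time caused by the parasitic capacitances can be sketched as a first-order (RC-like) low-pass filter acting on the true pixel signal. This is an assumed illustrative model, not the characterization given in [4]; the filter coefficient and signal values are hypothetical.

```python
# Assumed model (illustrative only): capacitive afterglow as a first-order
# low-pass filter. A fraction alpha of the stored charge is replaced per
# frame; the remainder is the not-yet-decayed signal from earlier frames.

def afterglow(true_signal, alpha=0.3):
    """Return the measured (blurred-in-time) pixel signal.
    Small alpha corresponds to strong capacitive afterglow."""
    measured = []
    state = true_signal[0]
    for s in true_signal:
        state = alpha * s + (1 - alpha) * state
        measured.append(state)
    return measured

# A bright region (gray value 200) followed by a dark background (10):
# the measured signal decays only gradually, forming the "tail" that
# overlies the dark background behind a moving bright object.
tail = afterglow([200, 200, 10, 10, 10, 10])
```

In this sketch the measured values after the bright-to-dark transition remain far above the true background level for several frames, which corresponds to the waiting time mentioned above before the seam region can be evaluated.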
There are several possibilities for correcting this detrimental effect of capacitive afterglow on the speed of image acquisition:
By moderate background lighting, the discharge of bright pixels can be significantly accelerated. If the direction of movement is known, in certain applications the positions of real objects or of projected patterns can be detected and analyzed with respect to the movement direction, that is, the rapid change from dark to bright can be detected. These processes, however, are suitable only for certain applications, for example when the lighting or the overall image acquisition situation is known or controllable. Moreover, the indicated evaluation strategy cannot reconstruct non-visible signal components; it can only ignore or bypass them.
The evaluation of optical processes, for example the decay of temperature in thermal images, is carried out by means of mathematical models. Thus, for example, the use of a differential equation is known from [3] (Horst W. Haussecker and David J. Fleet, Computing Optical Flow with Physical Models of Brightness Variation, IEEE Trans. PAMI, Vol. 23, No. 6, pp. 661-673, June 2001), FIG. 9. In these processes the physical characteristics of the image acquisition are described by a differential equation, and unknown parameters of this differential equation are numerically approximated. Known local approximation methods include, among others, "ordinary least squares (OLS)", "total least squares (TLS)" and "mixed OLS-TLS" [5] (C. Garbe, Measuring Heat Exchange Processes at the Air-Water Interface from Thermographic Image Sequence Analysis, Doctorate Dissertation, Heidelberg University, 2001), all of which are special forms of the least squares method and can be found in any standard work on numerical methods. Furthermore, so-called variational methods with data and smoothing terms are used (see for example [1]: B. Jahne, Digital Image Processing, 4th edition, Springer, 1997, and [6]: J. Weickert and C. Schnorr, Variational Optic Flow Computation with a Spatio-Temporal Smoothness Constraint, Technical Report 15/2000, Computer Science Series, July 2000).
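The principle of numerically approximating an unknown parameter of a differential equation can be sketched with OLS on a simple exponential decay, ds/dt = -k * s, linearized as ln s(t) = ln s0 - k * t. This is an illustrative example of the least squares idea only, not the specific models of [3] or [5]; the decay equation and data are hypothetical.

```python
import math

# Illustrative only: estimating the unknown parameter k of the decay
# differential equation ds/dt = -k * s from sampled data by ordinary
# least squares (OLS) on the linearized model ln s = ln s0 - k * t.

def ols_decay_rate(times, samples):
    """Fit ln s = b - k * t by least squares; return the estimate of k."""
    ys = [math.log(s) for s in samples]
    n = len(times)
    t_mean = sum(times) / n
    y_mean = sum(ys) / n
    num = sum((t - t_mean) * (y - y_mean) for t, y in zip(times, ys))
    den = sum((t - t_mean) ** 2 for t in times)
    return -num / den          # the fitted slope is -k

# Noise-free synthetic data generated with k = 0.5.
ts = [0.0, 1.0, 2.0, 3.0]
data = [100.0 * math.exp(-0.5 * t) for t in ts]
k_est = ols_decay_rate(ts, data)
```

TLS and mixed OLS-TLS differ in also attributing errors to the independent variables, but follow the same pattern of minimizing a squared residual.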
The use of corresponding methods for the evaluation of CMOS camera images has not been previously described in the literature.