This invention relates to digital images, generated either directly by a digital acquisition system or digitized after their acquisition.
More precisely, the invention relates to a process for processing a source sequence of digital images that have been damaged (i.e. that have high noise) in order to obtain an output sequence of corrected digital images.
In general, the noise that affects digital images results in a degradation of the contrast of these images as a function of the noise intensity and a loss of information in damaged areas of these images.
There are several types of noise, particularly noise due to movement or defocusing of the sensor that makes the images blurred (motion or focus blur), speckle noise (in the form of spots), or noise due to measurements (additive or multiplicative noise).
Therefore, noise effects should be compensated in order to restore a sequence of images with an adequate quality.
There are several known techniques for correcting damaged image sequences, particularly inverse filtering or calculating the average of the image sequence.
The known inverse filtering technique consists of applying the inverse transfer function to the damaged image. This assumes that the noise type (or an approximation of the noise type) is known. This is not always the case. Furthermore, this known inverse filtering technique is not very efficient when the noise/signal ratio is high.
In general, the second known technique that consists of calculating the average of the image sequence is not capable of providing sufficient correction to the damaged images.
The known inverse filtering technique has the major disadvantage of being specific to a particular noise type and of introducing errors when the image sequence contains moving objects. Consequently, it will generate additional information losses when it is used to correct a noise type other than the one for which it was designed.
Furthermore, in most of these known techniques, each image in the sequence is corrected in several successive passes, each pass corresponding to a correction filtering, the parameters of which may be modified by an operator (this is referred to as interactive filtering). The time necessary to execute these successive passes makes it impossible to correct the sequence of damaged digital images in real time.
The objective of the invention is to overcome these various disadvantages in the state of the art.
More precisely, one of the objectives of this invention is to provide a process for processing a source sequence of damaged digital images, of the type capable of providing an output sequence of corrected digital images that is independent of the noise type affecting the images in the source sequence.
Another objective of the invention is to provide this type of process capable of processing the source sequence in real time.
Another objective of the invention is to provide a process capable of restoring a source sequence of images representing a panorama (with or without moving objects), and the contrast of these images.
These various objectives, and other objectives that will become apparent later, are achieved according to the invention by means of a process for processing a source sequence of damaged digital images of the type capable of producing an output sequence of corrected digital images, each of the damaged or corrected digital images being described pixel by pixel, each of the said pixels being characterized by an amplitude level among a plurality of possible amplitude levels,
characterized by the fact that the said process includes the following main basic steps, for each damaged digital image in the said source sequence:
for all pixels in the damaged image, calculate a first parameter called the global parameter, estimating the correction to be made to each pixel in the damaged image as a function of the said set of pixels in the damaged image and the set of pixels in a reference image, the said reference image being a corrected image resulting from the correction of the image preceding the said damaged image in the said source sequence;
for each given pixel in the damaged image, calculate a second parameter called the local parameter, estimating the correction to be made to the said given pixel as a function of the given pixel and other pixels called neighboring pixels, located within a predetermined vicinity of the said given pixel;
for each given pixel in the damaged image, calculate a third parameter called the temporal parameter, estimating the correction to be made to the said given pixel as a function of the said given pixel and the pixel in the reference image with the same spatial coordinates as the said given pixel;
for each given pixel in the damaged image, calculate a correction factor by combining the said first, second and third estimating parameters associated with the said given pixel;
correct each given pixel in the damaged image using a predetermined correction strategy, in order to obtain the pixel corresponding to the corrected image in the output sequence as a function of the correction factor associated with the said given pixel, the given pixel in the damaged image and the pixel in the reference image having the same spatial coordinates as the said given pixel.
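By way of illustration, the five basic steps above can be sketched end to end as follows. All estimator bodies here are deliberately simplified placeholders (illustrative assumptions), not the actual estimators defined later in the description:

```python
import numpy as np

def correct_image(damaged, reference):
    # Step 1: global parameter, a single value for the whole image
    # (placeholder: normalized difference of mean amplitudes)
    P1 = abs(damaged.mean() - reference.mean()) / 255.0
    # Steps 2 and 3: local and temporal parameters, one value per pixel
    # (placeholders: deviation from the image mean / from the reference)
    MP2 = np.abs(damaged - damaged.mean()) / 255.0
    MP3 = np.abs(damaged - reference) / 255.0
    # Step 4: combine into a per-pixel correction factor kept in [0, 1]
    C = np.minimum(1.0, P1 * np.maximum(MP2, MP3))
    # Step 5: weighted replacement between damaged and reference pixels
    return (1.0 - C) * damaged + C * reference

ref = np.full((4, 4), 100.0)
dam = ref.copy()
out = correct_image(dam, ref)
print(np.allclose(out, ref))  # -> True (identical images need no correction)
```

When the damaged image equals the reference, every parameter is zero, the correction factor is zero, and the image passes through unchanged, as expected of a single-pass, non-interactive scheme.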
Thus, the general principle of the invention consists of processing each of the damaged images forming the source sequence. Each pixel in each damaged image is corrected as a function of a correction factor associated with it and which is determined by combining three distinct parameters (the global, local and temporal parameters respectively) estimating the correction to be made to the pixel.
Due to this combination of three parameters, each supplying distinct information about the correction to be made, the process according to the invention is not interactive and all that is necessary is a single processing pass for each damaged image. Furthermore, most calculations can take place in matrix form. Consequently, the process according to the invention is ideal for use in real time.
Furthermore, this combination of three parameters makes the process according to the invention practically independent of the noise type affecting the images in the source sequence. This means that only a limited number of assumptions are necessary about the noise type that affected the images in the source sequence. For example, it is possible to consider only assumptions about how the noise is distributed (for example, intermittently). In other words, the process according to the invention can be adapted to, and is suitable for, all types of noise to be compensated.
The first parameter (the global parameter) is common to all pixels in the same damaged image, and provides information about the general quality of this damaged image, by comparison with the previous corrected image (or the reference image).
Beneficially, the said calculation of the first parameter P1 for all pixels in the damaged image can be made using the formula
P1=K+fE(H1,H2)
where: K is a predetermined offset value;
H1 is a first histogram of the amplitude levels of pixels in the damaged image;
H2 is a second histogram of amplitude levels of pixels in the reference image;
fE is a predetermined error function used to calculate a variation between two functions.
For example, the predetermined error function may be based on the least squares method. It may also be based on the differences of the variances of histograms for the damaged image and for the reference image.
The predetermined offset value, if it is not equal to zero, prevents the first parameter from being zero when the two histograms are identical (i.e. when fE(H1,H2)=0).
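The calculation of the first parameter can be sketched as follows, taking fE as a least-squares error between the two histograms; the value of K, the number of bins and the test images are illustrative assumptions:

```python
import numpy as np

def global_parameter(damaged, reference, K=0.05, bins=256):
    # H1: histogram of amplitude levels of the damaged image
    h1, _ = np.histogram(damaged, bins=bins, range=(0, 256), density=True)
    # H2: histogram of amplitude levels of the reference (previous corrected) image
    h2, _ = np.histogram(reference, bins=bins, range=(0, 256), density=True)
    # fE: least-squares variation between the two histograms
    return K + np.sum((h1 - h2) ** 2)

damaged = np.random.default_rng(0).integers(0, 256, (64, 64))
reference = damaged.copy()
# Identical histograms: fE = 0, so P1 falls back to the offset K
print(global_parameter(damaged, reference))  # -> 0.05
```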
The second parameter (the local parameter) is specific to each pixel in the same damaged image. It provides information about whether there is a spatial discontinuity, by comparing with neighboring pixels in the same damaged image. Noise results in a fairly pronounced spatial discontinuity in the same image. Consequently, detection of this type of spatial discontinuity in a pixel suggests a fairly high probability that the pixel is affected by noise.
Advantageously, the said calculation of the second parameter is done simultaneously for all pixels in the damaged image, and is made using the following formula:
{MP2}=(1/α2)·({It}−F2({It}))
where: {MP2} is a matrix of second parameters P2 each associated with a distinct pixel of the damaged image;
{It} is a matrix of the amplitude values of pixels in the damaged image;
F2 is an average or median or low pass filter, or any other filter adapted to the noise being processed;
α2 is a first normalization factor.
It may be possible to use a combination of average, median and low pass filters, instead of using a single filter, in order to combine the advantages of each type of filter.
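The local parameter matrix can be sketched as follows, taking F2 as a 3×3 averaging filter implemented with numpy only; the value of α2 and the filter size are illustrative assumptions:

```python
import numpy as np

def box_filter3(img):
    # 3x3 average filter with edge replication
    p = np.pad(img.astype(float), 1, mode="edge")
    return sum(p[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def local_parameter(It, alpha2=255.0):
    # Deviation of each pixel from its filtered neighborhood:
    # a large value signals a spatial discontinuity (likely noise)
    return np.abs(It - box_filter3(It)) / alpha2

It = np.full((5, 5), 100.0)
It[2, 2] = 255.0              # an isolated impulse ("speckle") pixel
MP2 = local_parameter(It)
print(MP2[2, 2] > MP2[0, 0])  # -> True (the impulse stands out)
```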
The third parameter (or the temporal parameter) is specific to each pixel in the same damaged image. It provides information about a change in the amplitude value by comparison with the pixel in the previous corrected image (the reference image) with the same spatial coordinates. Noise also results in a fairly pronounced temporal discontinuity between two successive images. Consequently, detection of this type of temporal discontinuity in a pixel indicates a fairly high probability that this pixel is affected by noise. However, it should be checked that this temporal discontinuity is not due to a moving object between successive images.
Preferably, the said calculation of the third parameter is done simultaneously for all pixels in the damaged image, and is made using the following formula:
{MP3}=(1/α3)·F3({It}−{It−1})
where: {MP3} is a matrix of third parameters P3 each associated with a distinct pixel of the damaged image;
{It} is a matrix of the amplitude values of pixels in the damaged image;
{It−1} is a matrix of the amplitude values of pixels in the reference image;
F3 is an average or median or low pass filter;
α3 is a second normalization factor.
Filtering makes it possible to ignore changes in values of the pixel amplitude due to object movements between successive images.
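The temporal parameter matrix can be sketched as follows, taking F3 as a 3×3 averaging filter; the value of α3 and the filter choice are illustrative assumptions. Note how filtering the frame difference spreads and attenuates the discontinuity, which is what allows coherent changes due to moving objects to be discounted:

```python
import numpy as np

def box_filter3(img):
    # 3x3 average filter with edge replication
    p = np.pad(img.astype(float), 1, mode="edge")
    return sum(p[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def temporal_parameter(It, It_prev, alpha3=255.0):
    # Filtered frame difference: large values signal temporal
    # discontinuities not explained by smooth object motion
    return np.abs(box_filter3(It - It_prev)) / alpha3

It_prev = np.full((5, 5), 100.0)
It = It_prev.copy()
It[2, 2] = 255.0              # temporal impulse in the current frame
MP3 = temporal_parameter(It, It_prev)
print(MP3[2, 2] > MP3[0, 0])  # -> True
```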
Preferably, the said calculation of the correction factor will be made using the following formula:
C=min{1, P1·fc(P2,P3)}
where: P1, P2 and P3 are the said first, second and third parameters normalized to 1;
fc is a predetermined combination function.
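The combination step can be sketched as follows; taking the combination function fc as a simple maximum of the local and temporal parameters, and clipping the product so that the correction factor stays normalized to 1, are illustrative assumptions:

```python
def correction_factor(P1, P2, P3, fc=max):
    # Combine the three normalized parameters and keep C in [0, 1]
    return min(1.0, P1 * fc(P2, P3))

print(correction_factor(0.5, 0.2, 0.8))  # -> 0.4
print(correction_factor(2.0, 0.9, 0.9))  # -> 1.0 (clipped)
```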
Beneficially, the said predetermined correction strategy consists of calculating an amplitude value of the corrected pixel I′t(x, y), for each given pixel in the damaged image, using the following formula:
I′t(x, y)=R1(C)·It(x, y)+R2(C)·It−1(x, y)
where: It (x, y) is the amplitude value of the given pixel in the damaged image;
It−1(x, y) is the amplitude value of the pixel in the reference image with the same spatial coordinates (x, y) as the given pixel;
C is the correction factor normalized to 1;
R1 and R2 are distribution functions respecting two constraints, namely:
R1(C)+R2(C)≤1, where R1 is a decreasing function such that R1(0)=1, and R2 is an increasing function such that R2(1)=1.
Thus, the following three cases may be distinguished:
if C=0, the amplitude of the corrected pixel is equal to the amplitude value of the pixel in the damaged image (no replacement);
if C=1, the amplitude of the corrected pixel is equal to the amplitude value of the pixel in the reference image (complete replacement);
if 0&lt;C&lt;1, the amplitude value of the corrected pixel is equal to a weighted sum of the amplitudes of the pixel in the damaged image and the pixel in the reference image (weighted replacement).
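The three cases can be reproduced with, for example, the linear distribution functions R1(C)=1−C and R2(C)=C, which satisfy the stated constraints; this particular choice of R1 and R2 is an illustrative assumption:

```python
def correct_pixel(C, It_xy, It_prev_xy):
    # Linear distribution functions: R1(0)=1, R2(1)=1, R1+R2 <= 1
    R1, R2 = 1.0 - C, C
    return R1 * It_xy + R2 * It_prev_xy

print(correct_pixel(0.0, 200.0, 120.0))  # -> 200.0 (no replacement)
print(correct_pixel(1.0, 200.0, 120.0))  # -> 120.0 (complete replacement)
print(correct_pixel(0.5, 200.0, 120.0))  # -> 160.0 (weighted replacement)
```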
According to one beneficial variant, the said predetermined correction strategy consists of calculating an amplitude value of the corrected pixel I″t(x, y), for each given pixel in the damaged image, using the following formula:
I″t(x, y)=R1(C)·It(x, y)+R2(C)·It−1(x, y)+R3(C)·It,F(x, y)
where: It(x, y) is the amplitude value of the given pixel in the damaged image;
It−1(x, y) is the amplitude value of the pixel in the reference image with the same spatial coordinates (x, y) as the given pixel;
It,F(x, y) is the amplitude value of the pixel in a filtered image with the same spatial coordinates (x, y) as the given pixel, the said filtered image being obtained by an average or median or low pass filter or any other filter adapted to the noise treated in the damaged image;
C is the correction factor normalized to 1;
R1, R2 and R3 are distribution functions respecting the constraints R1(C)+R2(C)+R3(C)≤1, where R1 is a decreasing function such that R1(0)=1, and R2 is an increasing function such that R2(1)=1.
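One choice satisfying these constraints is R1(C)=1−C, R2(C)=C² and R3(C)=C−C² (their sum is exactly 1, R1(0)=1, R2 is increasing with R2(1)=1); this particular choice is an illustrative assumption:

```python
def correct_pixel3(C, It_xy, It_prev_xy, It_filt_xy):
    # Distribution functions: R1(0)=1, R2(1)=1, R1+R2+R3 = 1
    R1 = 1.0 - C
    R2 = C * C
    R3 = C - C * C
    return R1 * It_xy + R2 * It_prev_xy + R3 * It_filt_xy

print(correct_pixel3(0.0, 200.0, 120.0, 140.0))  # -> 200.0
print(correct_pixel3(1.0, 200.0, 120.0, 140.0))  # -> 120.0
print(correct_pixel3(0.5, 200.0, 120.0, 140.0))  # -> 165.0
```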
Preferably, for each given pixel in the damaged image, the said process includes the following additional basic steps:
calculate an error probability on the amplitude value of the given pixel, as a function of the variation between the number of pixels in the damaged image and the number of pixels in the reference image with the same amplitude value as the said given pixel, the said error probability consisting of a fourth parameter estimating the correction to be made to the said given pixel;
use the said fourth parameter to weight the first and second parameters, or the correction factor, associated with the said given pixel.
These two additional steps refine the calculation of the correction factor.
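The error probability can be sketched as follows, comparing the count of pixels with the given pixel's amplitude in the damaged image and in the reference image; the normalization by the image size is an illustrative assumption:

```python
import numpy as np

def error_probability(damaged, reference, x, y, bins=256):
    # H1, H2: raw amplitude histograms (pixel counts per amplitude level)
    h1, _ = np.histogram(damaged, bins=bins, range=(0, bins))
    h2, _ = np.histogram(reference, bins=bins, range=(0, bins))
    a = int(damaged[y, x])            # amplitude of the given pixel
    # Variation between the two counts at this amplitude, in [0, 1]
    return abs(h1[a] - h2[a]) / damaged.size

ref = np.full((32, 32), 100)
dam = ref.copy()
dam[0, 0] = 200                       # one outlier amplitude appears
print(error_probability(dam, ref, 0, 0))  # -> 0.0009765625 (= 1/1024)
```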
In one preferred embodiment of the invention, the said process comprises the following preliminary basic steps:
calculate a first histogram of the amplitude values of pixels in the damaged image;
calculate a second histogram of the amplitudes of pixels in the reference image;
calculate the variation between the average values of the said first and second histograms,
correct the amplitude values of the pixels in the damaged image as a function of the said variation of the average value, in order to balance the amplitude values of the pixels in the damaged image with the amplitude values of the pixels in the reference image and obtain a precorrected damaged image that is used instead of the damaged image in all the other steps of the said process.
These preliminary steps correspond to use of a gain correction on amplitude values of the damaged image as a function of the amplitude values of the reference image.
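The preliminary pre-correction can be sketched as follows. Working directly on mean amplitudes (rather than on explicit histograms) and using an additive shift are illustrative assumptions; a multiplicative gain based on the ratio of the means would be an equally simple variant:

```python
import numpy as np

def precorrect(damaged, reference):
    # Variation between the mean amplitudes of the two images
    shift = reference.mean() - damaged.mean()
    # Balance the damaged image's amplitudes against the reference
    return damaged + shift

dam = np.full((4, 4), 90.0)
ref = np.full((4, 4), 100.0)
pre = precorrect(dam, ref)
print(pre.mean())  # -> 100.0 (means are now balanced)
```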
In one beneficial variant of the process according to the invention, in which a hierarchical type process is used, the following steps are carried out for each damaged digital image in the source sequence (consisting of a level n digital image to be corrected):
reduce the size of the level n digital image to be corrected, in order to obtain a level n+k digital image to be corrected, where k≥1;
process the level n+k digital image to be corrected at hierarchical level n+k, in order to obtain a corrected level n+k digital image;
increase the size of the corrected level n+k digital image in order to obtain a corrected level n+k−1 digital image;
reiterate the two previous steps of processing and increasing the size, if necessary, until a level n corrected digital image is obtained;
if necessary, process the corrected level n digital image at hierarchical level n,
the process according to the invention being characterized by the fact that at least one of the said processing steps at a given hierarchical level consists of using the said basic steps at least once.
The principle of this processing hierarchization (or processing at several levels) is to reduce the size of the image to be processed in order to be able to apply processing requiring more time or calculation power.
Thus, processing of each damaged digital image in the source sequence is refined and improved. For each hierarchical level, the processing may consist of carrying out the above mentioned basic steps (namely the steps of calculating the various parameters and the correction factor, and then the correction itself) one or more times, and/or applying several filters (for example of the average, median or low pass type).
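The hierarchical scheme can be sketched as follows: reduce the level n image k times, process it at level n+k, then enlarge and optionally process it level by level back to level n. The 2×2 block-average reduction, the nearest-neighbour enlargement and the placeholder `process` function are illustrative assumptions:

```python
import numpy as np

def reduce2(img):
    # Halve the image size by averaging 2x2 blocks
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def enlarge2(img):
    # Double the image size by nearest-neighbour replication
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def hierarchical(img, k, process):
    for _ in range(k):       # reduce down to level n + k
        img = reduce2(img)
    img = process(img)       # correct at the coarsest level
    for _ in range(k):       # come back up, level by level
        img = enlarge2(img)
        img = process(img)   # optional processing at each level
    return img

# Identity "processing" just to show the size bookkeeping
out = hierarchical(np.ones((8, 8)), k=2, process=lambda im: im)
print(out.shape)  # -> (8, 8)
```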
Preferably, at least one of the said first, second, third and fourth parameters is weighted as a function of the hierarchical level at which the said basic steps are at least partially carried out.
Movements of large objects can thus be preserved. Note that a level n image (for example n=0) is fairly sensitive to moving objects, whereas a higher level image (for example n+2) is less sensitive, since movements are smaller. In order to maintain movements of large objects, it is therefore preferable that the effects of some parameters (for example the temporal parameter) are reduced for higher level processing, and/or that other parameters (for example the local parameter) are given priority.
Beneficially, the process according to the invention applies to the real time processing of a source sequence of damaged digital images.