To transmit or record digital images, use is commonly made of coding methods which reduce the quantity of information transmitted and, consequently, the bandwidth required for this transmission.
Some of these coding methods call upon segmentation of the images into so-called homogeneous regions, that is to say into regions of uniform character according to one or more criteria such as the chrominance and/or the luminance of the pixels (image elements) of this region.
Subsequently, the transmission of the luminance and chrominance data relating to each pixel of a region can be replaced by a simple transmission of the luminance and chrominance data of the relevant region.
Moreover, this segmentation can be called upon in respect of inter-image coding. Thus, before transmitting or recording the information relating to the state, for example of luminance and chrominance, of a region thus defined, one determines whether this information has already been transmitted or recorded. Stated otherwise, one determines whether this region has already been identified in a previously transmitted image.
If this region has not been identified in an earlier image, all the information relating to this region of the image is transmitted.
Conversely, if this region has been identified during the transmission of a previous image, only a signal representing the displacement of this region with respect to the earlier image is transmitted.
Thus, on reception or on reading, this region is reconstructed from the information already transmitted in respect of a previous image.
For example, consider a series of images all comprising one and the same uniform and stationary blue background. In this case, by considering this blue background to form a single region, the information relating to this region is transmitted only for the first image. Thereafter, the coding of the following images only comprises indications indicating the presence and the location of this region.
Thus, the blue background is recreated for all the images from the information transmitted (or recorded) in respect of the initial image.
A description of a known method of segmenting images is given hereinbelow with the aid of the appended figure.
Represented in this figure are the various steps of a method 10 for the segmentation of images, this segmentation comprising two preliminary operations, one, 20, relating to the luminance and the other, 22, relating to the chrominance or to analogous parameters as described later.
It should be stressed that the operation 22 relating to the chrominance of the pixels is dubbed “fragmentation” while the final partitioning of the image according to criteria of chrominance and of apparent motion of the pixels—described later—is dubbed “segmentation”.
The operation 20 comprises the comparison of the luminances of the pixels of a current image 14i with respect to the luminances of the pixels of an earlier image 12i. This comparison determines the motions or the variations of luminance of the pixels of the image 14i with respect to the pixels of the image 12i.
A so-called “optical flow” vector 15p representative of the motion or of this variation of luminance of this pixel P is thus allocated to each pixel P of the current image 14i. To do this, this vector 15p is described by coordinates (dx, dy) characterizing the motion of the image at the pixel P with coordinates (x, y).
During the operation 22, a fragmentation F of the image 14i into fragments F1, F2, . . . FN based on the colour is performed. To do this, the luminance information Yi for the image 14i is used, together with the evaluations 16 and 18 of the apportionment of the red and blue colours via respective signals Ui and Vi for each pixel Pi of the image 14i.
Thereafter, on the basis of the fragments F1, F2, . . . FN thus obtained, the final segmentation of the image 14i is obtained by grouping these fragments Fi into parts Ri, called regions, according to an operation 30 involving motion parameters—this operation being described later.
There are numerous methods making it possible to perform this fragmentation F of the image into fragments F1, F2, . . . FN of homogeneous colour, homogeneity being defined according to the quality of fragmentation demanded.
Moreover, it should be noted that other criteria may be used to perform this fragmentation.
For example, the fragmentation can be performed according to a so-called “texture” criterion based on the spatial distributions of the grey levels in the image. To do this, these textures are characterized by criteria of homogeneity, of contrast, of favoured orientation or of periodicity.
In this description, fragmentation based on the merging, pairwise, of neighbouring fragments of similar colour is used. More specifically, on the basis of two so-called “starting” neighbouring fragments, a new so-called “finishing” fragment comprising the two merged fragments is created.
Thereafter, this method is repeated by considering this finishing fragment to be a new starting fragment.
To determine the sequence of merges—a single merge being performed at each step—we calculate a cost Cfu associated with each envisageable merge.
This cost Cfu, the calculation of which is described hereinbelow, is representative of the difference in colour between the two fragments whose merge is envisaged.
Thus, by merging (Fi∪Fj) the fragments Fi and Fj whose cost Cfu of merging is the lowest among all the envisaged costs of merging, we merge the fragments which are most similar chrominancewise among all the fragments which may be merged.
In this embodiment, the calculation of the cost Cfu of merging between two fragments Fi and Fj is as follows:
Cfu=(Ni.Nj/(Ni+Nj)).[(Yi−Yj)2+(Ui−Uj)2+(Vi−Vj)2]
In this formula, Ni is the number of pixels in the fragment Fi, Nj is the number of pixels in the fragment Fj, and (Yi−Yj), (Ui−Uj) and (Vi−Vj) represent, respectively, the differences in luminance and in colours between the two fragments Fi and Fj.
On the basis of this merge between the two fragments Fi and Fj, a new fragment Fk=Fi∪Fj comprising the pixels of the two starting fragments Fi and Fj is obtained. This new fragment Fk therefore comprises Nk=Ni+Nj pixels.
This new fragment Fk=Fi∪Fj is then characterized by a luminance Yk equal to the mean of the luminances of the merged fragments, weighted by the number of pixels present in each fragment.
More precisely, when merging the fragment Fi with the fragment Fj, the new mean luminance of the fragment Fk is equal to: Yk=(Ni*Yi+Nj*Yj)/(Ni+Nj).
Likewise, we define the parameters Uk and Vk of colour differences of the new fragment Fk as, respectively:
Uk=(Ni*Ui+Nj*Uj)/(Ni+Nj), Vk=(Ni*Vi+Nj*Vj)/(Ni+Nj).
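As an illustration, the merge cost Cfu and the weighted-mean update of the merged fragment can be sketched in Python; the function names and the (N, Y, U, V) argument layout are our own, not part of the method's description:

```python
# Sketch of the merge cost C_fu and of the weighted-mean statistics
# of a merged fragment F_k = F_i u F_j (illustrative names).

def merge_cost(ni, yi, ui, vi, nj, yj, uj, vj):
    """C_fu = Ni.Nj/(Ni+Nj) . [(Yi-Yj)^2 + (Ui-Uj)^2 + (Vi-Vj)^2]."""
    return (ni * nj) / (ni + nj) * (
        (yi - yj) ** 2 + (ui - uj) ** 2 + (vi - vj) ** 2
    )

def merge_fragments(ni, yi, ui, vi, nj, yj, uj, vj):
    """Return (Nk, Yk, Uk, Vk), the means weighted by pixel counts."""
    nk = ni + nj
    yk = (ni * yi + nj * yj) / nk
    uk = (ni * ui + nj * uj) / nk
    vk = (ni * vi + nj * vj) / nk
    return nk, yk, uk, vk
```

Two equal-sized fragments differing only in luminance thus pay a cost proportional to the squared luminance gap, and their merge inherits the mid-point luminance.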
Each pixel constitutes a starting fragment for the first fragmentation step; subsequently, any pixel which has not yet been merged remains a fragment of its own.
However, a minimum number N of fragments is specified so as to stop the fragmentation when this number of fragments is reached.
Thus, we obtain N fragments F1, F2, . . . FN making up the image 14i, each of these fragments comprising a given number of pixels N1, N2, . . . NN.
The cost of the merge between Fi and Fj being proportional to Ni.Nj/(Ni+Nj), the bigger the number of pixels concerned in a merge, the higher will be the cost of this merge. The merging of small fragments is thus favoured, which, if we assume an isotropic commencement of merging, yields an isotropic fragmentation of the image.
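The iterative merging described above, stopped once the target number of fragments is reached, can be sketched as follows. The dict-based fragment representation is illustrative, and for brevity every pair of fragments is treated as neighbouring, whereas the method only considers spatially adjacent fragments:

```python
# Greedy pairwise merging until a target number of fragments remains.
# Simplification: adjacency is ignored, so every pair is a candidate.
from itertools import combinations

def cost(a, b):
    # C_fu between two fragments a and b (dicts with keys n, y, u, v).
    return a['n'] * b['n'] / (a['n'] + b['n']) * (
        (a['y'] - b['y']) ** 2 + (a['u'] - b['u']) ** 2 + (a['v'] - b['v']) ** 2
    )

def fragment(frags, target):
    frags = list(frags)
    while len(frags) > target:
        # Select the pair with the lowest merge cost (one merge per step).
        i, j = min(combinations(range(len(frags)), 2),
                   key=lambda p: cost(frags[p[0]], frags[p[1]]))
        a, b = frags[i], frags[j]
        nk = a['n'] + b['n']
        merged = {'n': nk,
                  'y': (a['n'] * a['y'] + b['n'] * b['y']) / nk,
                  'u': (a['n'] * a['u'] + b['n'] * b['u']) / nk,
                  'v': (a['n'] * a['v'] + b['n'] * b['v']) / nk}
        frags = [f for k, f in enumerate(frags) if k not in (i, j)] + [merged]
    return frags
```

Starting from single-pixel fragments, the two most similar pixels merge first, and the loop halts as soon as the requested fragment count is reached.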
Thereafter, in the course of an operation 24, a parametric model characteristic of the motion of all the pixels of each fragment F1, F2, . . . FN as previously obtained is estimated.
To do this, each fragment Fi is characterized by a parametric model 24i of motion linked to the horizontal component dx and vertical component dy of the motion vector 15p of each pixel with spatial coordinates (x,y) of the fragment Fi.
More precisely, an affine model 24 with 6 parameters (a,b,c,d,e,f) is chosen, such that the components dx and dy of the motion vector 15p of a pixel P with coordinates (x,y) are equal to: dx=a+b.x+c.y, dy=d+e.x+f.y.
Thus, a single model 24 of motion parameters with 6 components describes the motion of all the pixels of the fragment considered.
The parameters a, b, c, d, e and f of the model 24i are determined according to the so-called least squares technique from the motion vectors 15p estimated at each pixel during step 20.
More specifically, on the basis of a starting model 24′i (a′,b′,c′,d′,e′,f′), its parameters are made to vary in such a way as to minimize a deviation Emo between the “real” vectors 15p and the vectors calculated from this model according to the above formulae.
To evaluate this deviation Emo, we calculate the sum of the squares of the differences between the motion vector 15p of each pixel and the vectors reconstructed from the model described previously.
For example, for a vector 15p with coordinates (dx, dy), modelled by a model 24i (a,b,c,d,e,f): Emo=(dx−(a+b.x+c.y))2+(dy−(d+e.x+f.y))2.
The final parametric model (a,b,c,d,e,f) is obtained when this deviation or “variance” Emo is minimal. This variance in the modelling of the motion of the fragment Fi is then designated as Var24i.
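A minimal sketch of this least-squares estimation, using NumPy's generic solver rather than whatever solver the method actually employs; the function name and argument layout are assumptions:

```python
# Least-squares fit of the 6-parameter affine motion model
#   dx = a + b.x + c.y,  dy = d + e.x + f.y
# to the per-pixel optical-flow vectors of one fragment.
import numpy as np

def fit_affine_motion(xs, ys, dxs, dys):
    """Return (a, b, c, d, e, f) minimising the squared deviation Emo."""
    A = np.column_stack([np.ones(len(xs)), xs, ys])  # design matrix [1, x, y]
    abc, *_ = np.linalg.lstsq(A, np.asarray(dxs, float), rcond=None)
    deff, *_ = np.linalg.lstsq(A, np.asarray(dys, float), rcond=None)
    return (*abc, *deff)
```

On a flow field that is exactly affine, the fit recovers the generating parameters; on real data it returns the parameters of minimal variance Var24i.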
Generally, the evaluations described here take no account of so-called “outlying” values, that is to say of values which differ excessively from the globally estimated values.
In parallel, during an operation 26, the fragmentation F can be compared with the segmentation of the previous image 12i.
For example, any correspondences 28 between the location of fragments Fi and Fj and the location of a region R′i are identified, this region R′i being the final result of the segmentation of the image 12i according to an operation 30 described later.
These correspondences 28 may be used during operations requiring the tracking of object(s) over a series of images, for example in the road traffic field.
Thereafter, during the operation 30, a method of grouping similar to the method of merging described during the operation 22 is performed. Thus, an iterative process is applied involving two neighbouring fragments which minimize a grouping cost Cre, this grouping creating a new starting fragment.
During this operation 30, the cost of merging is evaluated from the models 24i of motion parameters of each fragment.
Thus, the two fragments grouped together at each step of the operation 30 are the fragments exhibiting the greatest similarity of motion among all the pairs of neighbouring fragments.
For example, by considering two fragments Fi and Fj characterized by respective parametric models 24i (ai, bi, ci, di, ei, fi) and 24j (aj, bj, cj, dj, ej, fj), the similarity of motion between these two fragments is calculated as follows, where it is assumed that the fragment Fi is of larger size than the fragment Fj:
A motion vector 15pj/i is calculated for each pixel of the fragment Fj according to the parametric model 24i for the fragment Fi. Thus, for a pixel with coordinates (xj, yj) of Fj, we calculate the vector 15pj/i with coordinates (dxj/i, dyj/i) according to the following formulae: dxj/i=ai+bi.xj+ci.yj, dyj/i=di+ei.xj+fi.yj.
Thereafter, the motion vector 15pj/j of this pixel is evaluated according to the parametric model 24j for this fragment, that is to say: dxj/j=aj+bj.xj+cj.yj and dyj/j=dj+ej.xj+fj.yj.
Finally, the difference between these two vectors 15pj/i and 15pj/j is evaluated by calculating a difference Δpj/i: Δpj/i=(dxj/j−dxj/i)2+(dyj/j−dyj/i)2.
The mean of the Δpj/i of all the pixels of Fj is then calculated so as to obtain an evaluation Δj/i of the difference between the parametric models of the two fragments Fi and Fj.
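This comparison of two parametric models over the pixels of Fj can be sketched as follows; the function names and the tuple representation of the models are illustrative:

```python
# Mean squared difference Delta_{j/i} between two affine motion
# models, evaluated over the pixels of the fragment Fj.

def apply_model(model, x, y):
    """Motion vector (dx, dy) predicted by model (a,b,c,d,e,f) at (x, y)."""
    a, b, c, d, e, f = model
    return a + b * x + c * y, d + e * x + f * y

def model_difference(model_i, model_j, pixels_j):
    """Mean over Fj's pixels of |15p_{j/j} - 15p_{j/i}|^2."""
    total = 0.0
    for x, y in pixels_j:
        dx_i, dy_i = apply_model(model_i, x, y)   # vector 15p_{j/i}
        dx_j, dy_j = apply_model(model_j, x, y)   # vector 15p_{j/j}
        total += (dx_j - dx_i) ** 2 + (dy_j - dy_i) ** 2
    return total / len(pixels_j)
```

Identical models give a difference of zero; two models differing only by a constant translation give the squared length of that translation.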
Subsequently, the fragments Fi and Fj whose difference Δj/i of motion is less than a predetermined threshold are grouped together, this threshold being chosen all the smaller the closer the agreement of motion between fragments has to be for them to be grouped together.
However, during this grouping operation 30, no new model 24 of motion parameters is calculated in respect of a fragment created by a grouping. This is because these complex calculations would require excessively long computation times.
This is why, during this operation 30, a parametric model of motion equal to one of the models of the two grouped fragments is allocated to each fragment created by a grouping.
In this embodiment, the motion parameters of the grouped fragment of largest size are allocated to the fragment resulting from the grouping.
For example, we consider the grouping between two fragments F′i and F′j such that the number of pixels N′i of the fragment F′i is greater than the number N′j of pixels of the fragment F′j. The calculation is speeded up by allocating to the fragment F′k, obtained through the grouping of F′i and F′j, a parametric model 24′k equal to the model 24′i.
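A sketch of this grouping decision, reusing the model of the larger fragment as in the example above; the (n_pixels, model) tuple representation is our own:

```python
# Group two fragments when their motion difference Delta_{j/i} is
# below the threshold; the grouped fragment reuses the model of the
# larger fragment instead of re-estimating one (illustrative sketch).

def group(frag_i, frag_j, delta, threshold):
    """frag = (n_pixels, model). Return the grouped fragment, or None."""
    if delta >= threshold:
        return None                      # motions too dissimilar
    larger = frag_i if frag_i[0] >= frag_j[0] else frag_j
    return (frag_i[0] + frag_j[0], larger[1])  # reuse larger's model
```

Lowering the threshold demands closer agreement of motion before two fragments are grouped, as noted above.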
These iterative groupings are performed until a specified number of fragments is obtained.
When this grouping operation is completed, a given number of “final” fragments or of regions Ri which characterize the segmentation of the image according to this method is then obtained.
The set of pixels P included in a region Ri is then homogeneous in terms of motion, this parameter being characterized by a unique model 24i for the entire set of pixels of the region Ri.
Before transmitting (or recording) this segmentation, a marking operation 33 is then performed, in the course of which the regions making up the image 14i are identified. Thus, when the image 16i posterior to the image 14i is analysed, it will be possible to use this segmentation to undertake the operation 26 with the image 16i.
To do this, a last step 35 is required in the course of which this segmentation is assigned a delay corresponding to the lag in the appearance of the next image 16i.
As mentioned previously, the fragmentation F of the image into fragments Fi is stopped according to a number-of-fragments-obtained criterion. Thus, when the number of fragments obtained reaches a certain threshold, the merging method stops.
The present invention results from the finding that this stoppage criterion does not yield an optimized result. Specifically, according to this method, the fragmentation of a very simple image produces an excessive fragmentation of the latter while the fragmentation of a complex image—comprising multiple details—produces an inadequate fragmentation.