Color is an attribute widely used in image description, similarity, and retrieval tasks because of its expressive power and simplicity, and many color descriptors have been proposed for this purpose. Existing color descriptors, however, have limitations. The color histogram represents the relative frequency of occurrence of the various color values within an image, and color moments describe fundamental statistical properties of the color distribution; neither captures the spatial relationships among colors. Color coherence vectors (CCV) partition the pixels falling in each color histogram bin into coherent and non-coherent pixels, and therefore capture only intra-color structure information. Color correlograms extend the co-occurrence matrix method used in texture analysis to the color domain, and express how the spatial correlation of colors changes with distance. The full color correlogram (full-CC) is unstable and requires too much storage to be practical. A simplified correlogram, called a color auto-correlogram (CAC), retains only the diagonal elements of the full correlogram as features; therefore, CAC loses the inter-color structure information.
There are cases in which these traditional color descriptors cannot discriminate between images, as shown in FIG. 1. Images 10 and 20 each contain three colors, A, B, and C. Color histograms, color moments, CCVs, and CACs cannot discriminate between the image 10 and the image 20, since the two images have the same color histogram and the same intra-color structural distribution.
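As an illustration (a constructed example in the spirit of FIG. 1, not taken from it), two stripe images can have identical color histograms and identical same-color adjacency statistics, yet differ in their inter-color adjacency, which is exactly the structure that histograms and auto-correlograms miss:

```python
from collections import Counter

# Constructed example: stripe images over three colors (A=0, B=1, C=2),
# laid out as A|B|C versus A|C|B. Their color histograms and same-color
# adjacency statistics match, but their inter-color adjacency differs.
img1 = [[0, 0, 1, 1, 2, 2] for _ in range(4)]
img2 = [[0, 0, 2, 2, 1, 1] for _ in range(4)]

def histogram(img):
    """Counts of each quantized color value in the image."""
    return Counter(c for row in img for c in row)

def adjacent_pairs(img):
    """Unordered counts of horizontally adjacent color pairs."""
    return Counter(tuple(sorted((row[i], row[i + 1])))
                   for row in img for i in range(len(row) - 1))

same_hist = histogram(img1) == histogram(img2)             # True
same_pairs = adjacent_pairs(img1) == adjacent_pairs(img2)  # False
```

Here an intra-color statistic (the same-color pairs on the diagonal of `adjacent_pairs`) is identical for both images; only the off-diagonal, inter-color pairs tell them apart.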
A Markov chain is a sequence of randomly observed variables, {X_n, n ≥ 0}, with the Markov property: given the present state, the future and past states are independent. Formally,

p(X_{n+1} | X_n, . . . , X_1) = p(X_{n+1} | X_n).

All possible values of X_n form a countable set, S, the state space of the Markov chain. For a K-color-level image, the state space is denoted S = {c_1, . . . , c_K}.
A Markov chain is fully determined by two basic ingredients: a transition matrix and an initial distribution. Denoting the transition probability from state c_i to state c_j as p_ij = p(X_1 = c_j | X_0 = c_i), the Markov transition matrix, P = (p_ij)_{K×K}, must satisfy two properties: (1) p_ij ≥ 0 for all c_i ∈ S, c_j ∈ S, and (2) ∑_{j=1}^{K} p_ij = 1. The transition probabilities are estimated from the spatial co-occurrence matrix, C = (c_ij)_{K×K}, by row normalization:

p_ij = c_ij / ∑_{j=1}^{K} c_ij.
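The row normalization can be sketched as follows (the co-occurrence matrix values are illustrative, not from the source):

```python
# Row-normalizing a spatial co-occurrence matrix C into a Markov
# transition matrix P, as in p_ij = c_ij / sum_j c_ij.
# The 3x3 matrix below is illustrative only.
C = [[6, 2, 0],
     [2, 4, 2],
     [0, 2, 6]]

P = [[cij / sum(row) for cij in row] for row in C]

# Every row of P sums to 1, as required of a transition matrix.
```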
Suppose the state distribution after n steps is π(n). The Markov transition matrix obeys the state transition rule π(n+1) = π(n)P, so that π(n) = π(0)P^n, from which the following definition is obtained: a distribution, π, is called a stationary distribution when π = πP is satisfied.
According to the Chapman-Kolmogorov equation, for a stationary distribution, π = πP = . . . = πP^n. Hence, the stationary distribution is known as an invariant measure of the Markov chain. The intuitive idea is to adopt the stationary distribution as a compact representation of the Markov chain. However, the existence and uniqueness of the stationary distribution must be guaranteed for an arbitrary Markov transition matrix.
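One way to approximate the stationary distribution of a regular chain is to iterate the state transition rule π(n+1) = π(n)P until it converges to a fixed point. A minimal sketch, using an illustrative 3×3 transition matrix:

```python
# Fixed-point iteration pi <- pi P to approximate the stationary
# distribution of a regular Markov chain. Transition matrix values
# are illustrative only.
P = [[0.7, 0.2, 0.1],
     [0.2, 0.6, 0.2],
     [0.1, 0.2, 0.7]]

def step(pi, P):
    """One application of the state transition rule pi(n+1) = pi(n) P."""
    K = len(P)
    return [sum(pi[i] * P[i][j] for i in range(K)) for j in range(K)]

pi = [1.0, 0.0, 0.0]          # arbitrary initial distribution pi(0)
for _ in range(200):
    pi = step(pi, P)

# pi now satisfies pi = pi P up to numerical precision.
```

For this (doubly stochastic) example the iteration converges to the uniform distribution; for a general regular chain it converges to the unique stationary distribution guaranteed below.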
Concretely, the problem is answered by the following fundamental limit theorem: the limit

A = lim_{n→∞} (1/(n+1)) (I + P + P^2 + . . . + P^n)

exists for every Markov chain with a countable state space. When the chain is regular, A is a matrix whose rows are all equal to a unique probabilistic vector (i.e., a vector whose elements are all positive and sum to 1).
According to the above theorem, it is not hard to show that each row of the matrix A is the stationary distribution of the regular Markov chain. Hence, the theorem establishes both the existence of a unique stationary distribution and a way of computing it.
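The limit theorem suggests computing the stationary distribution by averaging the powers of P. A minimal sketch of the truncated average, again with an illustrative transition matrix:

```python
# Approximating A = (1/(n+1)) (I + P + ... + P^n) for a regular chain;
# each row of A converges to the unique stationary distribution.
# The transition matrix is illustrative only.
P = [[0.7, 0.2, 0.1],
     [0.2, 0.6, 0.2],
     [0.1, 0.2, 0.7]]
K = len(P)

def matmul(A, B):
    """Plain K x K matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(K)) for j in range(K)]
            for i in range(K)]

n = 500
Pk = [[float(i == j) for j in range(K)] for i in range(K)]  # P^0 = I
A = [[0.0] * K for _ in range(K)]
for _ in range(n + 1):
    for i in range(K):
        for j in range(K):
            A[i][j] += Pk[i][j] / (n + 1)
    Pk = matmul(Pk, P)          # advance to the next power of P

# The rows of A are now nearly identical; each approximates the
# stationary distribution of the chain.
```

The average converges at rate O(1/n), so in practice the fixed-point iteration π ← πP is usually faster; the averaged form is the one whose existence the theorem guarantees.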