Despite the existence of many image and video coding standards, recommendations, and extensions thereof (hereinafter collectively referred to as video coding standards), texture compression remains a relatively underdeveloped technology. Most current video coding standards use transform-based methods to encode video pictures. To preserve texture detail in a particular video picture, such methods require a significant number of bits, which is especially true when the picture includes a large textured background.
In general, image areas can be classified as flat regions, edge regions, or texture regions. Most current video coding standards adopt transform-based methods which are applied to a video picture as a whole, regardless of the types of regions contained in that picture. When such transform-based methods are applied to a texture region of a video picture, a series of high frequency components are generated which are subsequently discarded, through an operation such as filtering or coarse quantization. Hence, it is very difficult to preserve texture details during compression: texture regions are characterized precisely by those high frequency components, and discarding them removes the texture itself.
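The loss described above can be illustrated with a minimal sketch. The code below (using NumPy and a hand-built orthonormal DCT-II matrix, not any particular codec) transforms an 8x8 block, zeroes all but the lowest-frequency coefficients to mimic coarse quantization, and inverse-transforms; the noise-like textured block suffers far more reconstruction error than a flat block:

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix of size n x n.
    k = np.arange(n)
    m = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    m[0] /= np.sqrt(2)
    return m * np.sqrt(2.0 / n)

def transform_code_block(block, keep=4):
    # Forward 2-D DCT, zero all but the lowest keep x keep
    # coefficients (mimicking filtering / coarse quantization),
    # then inverse transform.
    n = block.shape[0]
    d = dct_matrix(n)
    coeffs = d @ block @ d.T
    mask = np.zeros_like(coeffs)
    mask[:keep, :keep] = 1.0
    return d.T @ (coeffs * mask) @ d

rng = np.random.default_rng(0)
texture = rng.standard_normal((8, 8))   # noise-like texture block
flat = np.full((8, 8), 0.5)             # flat (smooth) block

err_texture = np.abs(texture - transform_code_block(texture)).mean()
err_flat = np.abs(flat - transform_code_block(flat)).mean()
# The textured block loses far more detail than the flat block,
# whose energy is concentrated in the DC coefficient.
```

A flat block is represented almost exactly by its DC coefficient alone, while the textured block spreads its energy across the discarded high-frequency coefficients, which is why `err_texture` dwarfs `err_flat`.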
Although texture synthesis is used in both computer vision and computer graphics applications, due to its high complexity it is not frequently used for image and video compression operations. In an approach directed to texture synthesis, a texture can be defined as a visual pattern on an infinite two-dimensional (2-D) plane which, at some scale, has a stationary distribution. The goal of texture synthesis can be described as follows: given a particular texture sample, a new texture is synthesized that, when perceived by a human observer, appears to be generated by the same underlying stochastic process as the texture sample.
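A toy illustration of this goal (not an actual synthesis algorithm, which would additionally enforce consistency between neighboring patches) is to build a larger texture by tiling patches drawn at random positions from the sample, so the output is statistically similar to the input without copying it wholesale:

```python
import numpy as np

def synthesize_by_tiling(sample, out_shape, patch=8, seed=0):
    # Toy "synthesis": tile the output with patches drawn at random
    # positions from the sample. Because the sample's distribution is
    # assumed stationary, every patch is an equally valid exemplar.
    rng = np.random.default_rng(seed)
    h, w = out_shape
    sh, sw = sample.shape
    out = np.zeros((h, w))
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            sy = rng.integers(0, sh - patch + 1)
            sx = rng.integers(0, sw - patch + 1)
            py = min(patch, h - y)      # clip at the output border
            px = min(patch, w - x)
            out[y:y + py, x:x + px] = sample[sy:sy + py, sx:sx + px]
    return out

sample = np.random.default_rng(1).random((32, 32))  # texture exemplar
big = synthesize_by_tiling(sample, (64, 64))
```

Because the patches come from the same stationary distribution as the sample, the synthesized output matches the sample's first-order statistics even though it is four times larger.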
One way of performing a texture synthesis operation determines global statistics in a feature space and samples images directly from the texture ensemble. This texture synthesis operation is a form of model-based coding. To perform such a texture synthesis operation at an encoder, an image is first analyzed for textures, and the parameters for such textures are estimated and coded. At a decoder, these parameters are extracted from a data bit stream and are used to reconstruct the modeled textures. Specifically, textures are reconstructed at the decoder by determining a probability distribution from the extracted parameters. The drawback of this approach is the high complexity, at both the encoder and the decoder, required to analyze and reconstruct the textures.
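The encoder/decoder split described above can be sketched with a deliberately simple model. Here the texture is modeled as i.i.d. Gaussian noise, so the encoder transmits only two distribution parameters (mean and standard deviation, a handful of bytes in place of thousands of pixels) and the decoder resynthesizes a statistically similar region by sampling from that distribution; real model-based coders use far richer models, which is the source of the complexity noted above:

```python
import numpy as np

def encode_texture(region):
    # Encoder side: analyze the texture region, estimate the model
    # parameters, and code only those parameters.
    return {"mean": float(region.mean()), "std": float(region.std())}

def decode_texture(params, shape, seed=0):
    # Decoder side: determine a probability distribution from the
    # extracted parameters and reconstruct the modeled texture by
    # sampling from it.
    rng = np.random.default_rng(seed)
    return rng.normal(params["mean"], params["std"], size=shape)

rng = np.random.default_rng(2)
original = rng.normal(120.0, 15.0, size=(64, 64))   # texture region
params = encode_texture(original)                   # two numbers sent
reconstructed = decode_texture(params, original.shape)
```

The reconstructed region is not pixel-identical to the original, and is not meant to be: it only needs to be perceptually equivalent, i.e. drawn from the same modeled distribution.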