Texture synthesis is one of the fundamental problems in computer graphics. Its goal, broadly stated, is to generate textures that meet users' requirements. Texture synthesis has a wide range of applications (typical applications are shown in FIG. 1), including photorealistic and non-photorealistic rendering, image restoration, artistic style transfer, fast network transmission of compressed data, and computer animation. Texture synthesis techniques fall into two families: procedural methods based on parameterized procedures, and exemplar-based non-parametric methods. Because procedural methods usually involve complicated, non-intuitive parameters, most current work belongs to the family of exemplar-based texture synthesis. A typical exemplar-based texture synthesis task (as shown in FIG. 2) is to synthesize a large texture image from an input exemplar of limited size.
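To make the exemplar-based, non-parametric family concrete, the following is a minimal sketch of the classic pixel-growing approach (in the spirit of Efros–Leung-style neighborhood matching, not the method of the present invention): each new output pixel copies the exemplar pixel whose neighborhood best matches the already-synthesized pixels. All names and parameters here are illustrative assumptions.

```python
import numpy as np

def synthesize(exemplar, out_h, out_w, n=5, seed=0):
    """Grow an output texture pixel by pixel. Each unfilled output pixel
    copies the exemplar pixel whose n-by-n neighborhood best matches the
    already-synthesized portion of the output neighborhood, measured by
    the sum of squared differences over filled pixels only."""
    rng = np.random.default_rng(seed)
    h, w = exemplar.shape
    half = n // 2
    out = np.zeros((out_h, out_w))
    filled = np.zeros((out_h, out_w), dtype=bool)
    # Seed the output with one randomly chosen exemplar pixel.
    out[0, 0] = exemplar[rng.integers(h), rng.integers(w)]
    filled[0, 0] = True
    for i in range(out_h):                      # scan-line synthesis order
        for j in range(out_w):
            if filled[i, j]:
                continue
            best, best_d = 0.0, np.inf
            # Candidate centers are interior exemplar pixels, so the full
            # n-by-n window fits inside the exemplar.
            for y in range(half, h - half):
                for x in range(half, w - half):
                    d = 0.0
                    for dy in range(-half, half + 1):
                        for dx in range(-half, half + 1):
                            oi, oj = i + dy, j + dx
                            if 0 <= oi < out_h and 0 <= oj < out_w and filled[oi, oj]:
                                d += (out[oi, oj] - exemplar[y + dy, x + dx]) ** 2
                    if d < best_d:
                        best_d, best = d, exemplar[y, x]
            out[i, j] = best
            filled[i, j] = True
    return out
```

The brute-force candidate scan is only meant to expose the core idea; practical systems accelerate the nearest-neighbor search and often copy whole patches rather than single pixels.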
During the past two decades, there has been tremendous progress in exemplar-based texture synthesis methods. Most of these methods inherently assume that the input exemplar textures are homogeneous or stationary. However, many surfaces in the real world are in fact inhomogeneous: they contain some form of spatial variation that manifests itself in perceptible changes in color, lighting, pattern, and the size and orientation of texture elements, and this variation may evolve gradually across the spatial extent of the exemplar. Such spatially variant behaviors are referred to as progressions in the present invention. As demonstrated in FIG. 3, most existing synthesis methods operate in a local fashion and are therefore not well equipped to handle these more global phenomena automatically. Furthermore, texture artists, the intended users of texture synthesis, are rarely interested in merely synthesizing a larger texture from an exemplar; rather, their typical goal is to produce textures intended for specific 3D models, which requires good control over the synthesized result. However, exemplar-based texture synthesis methods often lack simple and intuitive means of user control. Consequently, some attention has been paid over the years to controlling the synthesis of inhomogeneous textures.
A common way to add control to existing methods is to provide manually created feature or guidance channels for both the exemplar and the output texture. The guidance channels dictate, in a “texture-by-numbers” style, where specific content from the exemplar should be placed in the output texture. The manual nature of this workflow is tedious, especially when attempting to annotate continuous progressions. The recent self-tuning texture optimization approach computes a guidance channel automatically; however, it is designed to help preserve large-scale structures, such as curve-like features, rather than to control gradual spatial changes. Thus, a method for the controlled synthesis of inhomogeneous textures is still lacking.
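The role of a guidance channel can be illustrated with a deliberately reduced sketch (an assumption for illustration, not any published method or the present invention): each output pixel copies the exemplar pixel whose guidance value is closest to the target guidance value at that location. A full texture-by-numbers system would combine this guidance term with a color-neighborhood matching term.

```python
import numpy as np

def texture_by_numbers(exemplar, exemplar_guide, target_guide):
    """Guidance-only placement: for each output location, copy the exemplar
    pixel whose guidance value best matches the target guidance map there.
    `exemplar_guide` annotates the exemplar; `target_guide` dictates where
    each kind of content should appear in the output."""
    flat_ex = exemplar.ravel()
    flat_guide = exemplar_guide.ravel()
    # For every target guidance value, find the index of the exemplar pixel
    # with the closest guidance value (broadcasted absolute difference).
    idx = np.abs(target_guide.ravel()[:, None] - flat_guide[None, :]).argmin(axis=1)
    return flat_ex[idx].reshape(target_guide.shape)
```

For example, with an exemplar annotated by a guidance ramp, rearranging the target guidance map rearranges the copied content accordingly; this is exactly the “dictating placement” behavior described above, minus any texture coherence term.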