As computer vision and image processing systems become more complex, it is increasingly important to build models in a way that keeps this complexity manageable.
Maximum a posteriori (MAP) inference in graphical models, and especially in random fields defined over image domains, is one of the most useful tools in computer vision and related fields. If all potentials defining the objective have a parametric shape, then in certain cases non-linear optimization is the most efficient method. If, on the other hand, the potentials are not of parametric shape, then methods such as loopy belief propagation (BP) or its convex variants are preferred. BP and related algorithms face two limitations when the state space is large. First, the message passing step at their core requires at least linear time in the state space size, and superlinear time in general, so the runtime of these methods does not scale well with the state space size. Second, the memory consumption grows linearly with the state space size, since belief propagation must maintain a message value for each state.
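The scaling behaviour described above can be illustrated with a minimal sketch of a single max-product message update in a pairwise random field. The function name and the random test potentials below are purely illustrative and not part of the disclosed invention; the sketch only makes the O(K^2) time and O(K) memory cost per message explicit.

```python
import numpy as np

def send_message(unary, pairwise, incoming):
    """Max-product message update for one directed edge of a pairwise MRF.

    unary:    (K,) log-potential at the sending node
    pairwise: (K, K) log-potential between sender state i and receiver state j
    incoming: (K,) sum of log-messages reaching the sender from its
              other neighbours

    For each of the receiver's K states a maximum over the sender's K
    states is taken, so one update costs O(K^2) time; the resulting
    message occupies O(K) memory, and one such vector must be stored
    per edge -- both costs grow with the state space size K.
    """
    scores = unary[:, None] + pairwise + incoming[:, None]  # (K, K) table
    msg = scores.max(axis=0)                                # (K,) message
    return msg - msg.max()  # normalise in log-space to keep values bounded

# Illustrative run with random potentials and a small state space
K = 4
rng = np.random.default_rng(0)
m = send_message(rng.normal(size=K),
                 rng.normal(size=(K, K)),
                 rng.normal(size=K))
```

With K labels per pixel and one message per edge of the image grid, both the per-iteration work and the message storage scale with K, which is exactly the limitation noted above for large state spaces.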
If the state space is huge, then even optimizing non-parametric unary potentials (usually referred to as data terms) by explicit enumeration may be computationally too expensive for many applications (e.g. when implemented on embedded devices). Certain data terms allow more efficient computation via integral images or running sums, and data terms need not always be computed to full precision, but these methods are only suitable for very specific problem instances.
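The integral-image (summed-area table) trick mentioned above can be sketched as follows; the function names are illustrative and not part of the disclosed invention. After one O(N) precomputation, the sum of any axis-aligned window is obtained from four table look-ups, which is why window-sum data terms admit this speed-up while general data terms do not.

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero border: S[y, x] = sum of img[:y, :x]."""
    return np.pad(img, ((1, 0), (1, 0))).cumsum(axis=0).cumsum(axis=1)

def box_sum(S, y0, x0, y1, x1):
    """Sum of img[y0:y1, x0:x1] in O(1) via four look-ups in the table S."""
    return S[y1, x1] - S[y0, x1] - S[y1, x0] + S[y0, x0]

# Check the O(1) window sum against direct enumeration on random data
rng = np.random.default_rng(1)
img = rng.random((8, 8))
S = integral_image(img)
fast = box_sum(S, 2, 1, 6, 5)    # constant-time evaluation
slow = img[2:6, 1:5].sum()       # explicit enumeration over the window
```

This only helps data terms that decompose into sums over rectangular supports; data terms without such structure must still be enumerated explicitly, in line with the limitation stated above.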
The present invention seeks to provide improved methods and systems for adjusting an image.