Digital image compositing is a fundamental task implemented by image processing systems of a computing device as part of digital image editing and graphic design. Digital image compositing involves combining foreground objects and background scenes from different sources to generate a new composite digital image. Conventional techniques used to perform digital image compositing, however, are both computationally inefficient and frustrating to users due to inefficiencies of user interaction supported by these conventional techniques. These inefficiencies and user frustrations are exacerbated by the multitude of diverse digital images, which may number in the millions, that may act as sources for these objects and scenes.
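For context, pixel-level compositing of a foreground object onto a background scene is commonly performed with the standard "over" operator, in which an alpha mask controls the per-pixel blend. The following is a minimal illustrative sketch in Python with NumPy; the function name and toy images are assumptions for illustration, not part of the system described in this document.

```python
import numpy as np

def composite_over(foreground, alpha, background):
    """Alpha-composite foreground onto background (the "over" operator).

    foreground, background: float arrays of shape (H, W, 3) with values in [0, 1].
    alpha: float array of shape (H, W, 1); 1.0 selects the foreground pixel.
    """
    return alpha * foreground + (1.0 - alpha) * background

# Toy 2x2 example: one opaque red foreground pixel over a blue background.
fg = np.zeros((2, 2, 3)); fg[..., 0] = 1.0        # red foreground object
bg = np.zeros((2, 2, 3)); bg[..., 2] = 1.0        # blue background scene
mask = np.zeros((2, 2, 1)); mask[0, 0, 0] = 1.0   # only top-left pixel is foreground
out = composite_over(fg, mask, bg)
```

In the masked pixel the result takes the foreground color; elsewhere the background shows through, which is the basic operation a compositing system repeats for every candidate object and scene pairing.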
Compatibility of a foreground object with a background scene, for instance, may be defined using a wide range of characteristics, the importance of which may differ based on content included in the digital images. In one such example, a viewpoint may have greater importance when inserting a foreground object of a car on a background scene of a road. On the other hand, semantic consistency may have greater importance when compositing a foreground object of a skier with a background scene of a snowy mountain. Conventional techniques, however, focus on a single characteristic or rely on manual extraction of features to define matching criteria. Thus, these conventional techniques are not capable of adapting to different characteristics and the differing relative importance of these characteristics in defining a match for different object categories as described above. Further, these conventional techniques may fail when confronted with the "big data" involved in addressing millions of digital images that may be available as compositing sources, an example of which is a stock digital image system accessible via a network.
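The fixed matching criteria described above can be sketched as a hand-crafted weighted score over a few characteristics, with weights chosen manually per object category. The characteristic names, categories, and weights below are hypothetical examples, not values from any described system; the sketch is meant only to show why such hand-tuned criteria do not adapt beyond the categories they were written for.

```python
# Hypothetical hand-crafted matching criterion, illustrative only.
# Each object category gets manually chosen weights over characteristics;
# categories outside this table simply have no defined criterion.
CATEGORY_WEIGHTS = {
    "car":   {"viewpoint": 0.70, "semantics": 0.20, "lighting": 0.10},
    "skier": {"viewpoint": 0.15, "semantics": 0.75, "lighting": 0.10},
}

def compatibility(category, similarities):
    """Weighted sum of per-characteristic similarity scores in [0, 1]."""
    weights = CATEGORY_WEIGHTS[category]
    return sum(weights[name] * similarities[name] for name in weights)

# A road scene matches a car mostly through viewpoint agreement...
car_score = compatibility(
    "car", {"viewpoint": 0.9, "semantics": 0.3, "lighting": 0.5})
# ...while a snowy mountain matches a skier mostly through semantic consistency.
skier_score = compatibility(
    "skier", {"viewpoint": 0.3, "semantics": 0.9, "lighting": 0.5})
```

Because the weights are fixed by hand, every new object category requires new manual tuning, and scoring millions of candidate source images against such criteria remains computationally expensive, which is the scalability limitation noted above.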