Some image manipulation applications provide functions that allow a user to combine a two-dimensional (“2D”) background image with three-dimensional (“3D”) objects to generate a composed image showing the 3D objects positioned in an environment illustrated by the 2D background image. To create photorealistic composed images, a spherical environment can be created to provide a visual impression that the 3D objects are surrounded by a realistic environment. A 360-degree panoramic image depicting the spherical environment surrounding the 3D objects is referred to herein as an “environment map.”
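To make the notion of an environment map concrete, the sketch below maps a 3D view direction to pixel coordinates in a 360-degree panoramic image. The source does not specify a projection; the equirectangular (longitude/latitude) projection used here is merely the common convention for such panoramas, and the function name is hypothetical.

```python
import math

def direction_to_equirect(dx, dy, dz, width, height):
    """Map a 3D view direction to pixel coordinates in an
    equirectangular environment map (hypothetical helper; the
    equirectangular projection is an assumption, chosen because
    it is a common layout for 360-degree panoramic images)."""
    # Normalize the direction vector.
    length = math.sqrt(dx * dx + dy * dy + dz * dz)
    dx, dy, dz = dx / length, dy / length, dz / length
    # Longitude in [-pi, pi], latitude in [-pi/2, pi/2].
    lon = math.atan2(dx, -dz)
    lat = math.asin(dy)
    # Scale the angles to pixel coordinates.
    u = (lon / (2 * math.pi) + 0.5) * (width - 1)
    v = (0.5 - lat / math.pi) * (height - 1)
    return u, v
```

Under this convention, looking straight ahead (direction (0, 0, -1)) lands at the center of the panorama, which is how a renderer would look up the environment surrounding a 3D object in any direction.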
Creating an environment map, however, is challenging. One reason is that the input 2D background image from which the environment map is to be generated typically contains far less information than an ideal environment map requires. For example, the input 2D background image is typically taken by a camera with a limited field of view, from a particular angle, and at a certain aspect ratio. As a result, the 2D background image captures only a small portion of the actual environment surrounding the camera. Generating the environment map from such a 2D background image therefore requires extrapolating or synthesizing image content that cannot be observed in the input 2D background image.
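The gap between a single photograph and a full environment map can be quantified by the fraction of the viewing sphere a camera actually captures. The sketch below uses the standard solid-angle formula for a rectangular pyramid, omega = 4·asin(sin(h/2)·sin(v/2)); the function name and the example field-of-view values are illustrative assumptions, not taken from the source.

```python
import math

def fov_sphere_fraction(h_fov_deg, v_fov_deg):
    """Fraction of the full viewing sphere captured by a camera
    with the given horizontal and vertical fields of view, using
    the solid angle of a rectangular pyramid:
        omega = 4 * asin(sin(h/2) * sin(v/2)).
    (Hypothetical helper for illustration.)"""
    h = math.radians(h_fov_deg)
    v = math.radians(v_fov_deg)
    omega = 4.0 * math.asin(math.sin(h / 2) * math.sin(v / 2))
    # A full sphere subtends 4*pi steradians.
    return omega / (4.0 * math.pi)
```

For a camera with roughly a 60-by-40-degree field of view, this fraction is only about 5 percent, so the remaining roughly 95 percent of the environment map would have to be extrapolated or synthesized.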