Recent years have seen rapid development in the field of digital image editing. Indeed, due to advances in algorithms and hardware, conventional digital editing systems are now able to digitally modify a variety of image characteristics with simple user inputs. For example, conventional digital editing systems can apply filters to modify the appearance of digital images or add an object portrayed in a first digital image to a second digital image.
Although conventional digital editing systems have progressed in recent years, they still have several significant shortcomings. For example, conventional digital editing systems can apply filters or add an object to a digital image, but struggle to generate modified digital images that reflect realistic appearance models for a variety of different target image properties. To illustrate, conventional digital editing systems can apply a filter to a digital image, but struggle to modify a material of an object portrayed in a digital image such that the material accurately reflects the environment of the digital image (e.g., reflects the new material as if placed in the illumination environment of the original digital image). Similarly, although conventional digital editing systems can add or remove an object from a digital image, such systems do not accurately modify an object portrayed in a digital image such that the modified object reflects the environment of the digital image (e.g., the modified object appears as if placed in the illumination environment of the original digital image). Moreover, although conventional digital editing systems can modify a color or appearance of a digital image, such systems typically do not modify an illumination environment of the digital image such that objects portrayed in the digital image change to accurately reflect the modified illumination environment.
Some conventional systems have sought to address these problems, but each introduces its own limitations and concerns. For example, some digital image decomposition systems seek to identify and modify physical properties of scenes portrayed in a digital image by making simplifying assumptions regarding the digital image. For example, some digital image decomposition systems assume geometry of objects portrayed in digital images, assume material properties of objects portrayed in digital images (e.g., assume diffuse materials), or assume lighting conditions (e.g., low frequency lighting) to reduce the complexity of decomposing physical properties portrayed in a digital image. Such simplifying assumptions may make it easier to identify physical properties and modify a digital image, but they also introduce inaccuracies and limit the circumstances in which such systems apply.
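The diffuse-material assumption described above can be sketched concretely. The following is a minimal illustration (not any particular system's implementation; the function names are hypothetical) of Lambertian shading, in which rendered intensity depends only on the angle between the surface normal and the light direction. Because the viewing direction never enters the computation, view-dependent effects such as specular highlights cannot be represented under this assumption.

```python
import numpy as np

def lambertian_shading(albedo, normal, light_dir):
    """Diffuse (Lambertian) shading: intensity depends only on the angle
    between the surface normal and the light direction."""
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    return albedo * max(0.0, float(np.dot(n, l)))

# The viewer's position is absent from the model entirely, so the result
# is identical from every viewpoint -- specular or mirror-like materials
# fall outside what this simplified decomposition can express.
normal = np.array([0.0, 0.0, 1.0])
light = np.array([0.0, 0.0, 1.0])
intensity = lambertian_shading(0.8, normal, light)
```

This simplification is what makes decomposition tractable for such systems, and also why they break down on scenes containing glossy or reflective objects.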
In addition, some digital image editing systems leverage machine learning processes to edit digital images. However, these solutions also introduce their own shortcomings. For example, digital image editing systems that utilize machine learning are generally limited to modeling diffuse materials only (i.e., such systems cannot operate with specular materials). In addition, digital image editing systems are often unable to operate in conjunction with advanced material properties, which are generally not differentiable and thus impose difficulties in training neural networks. Similarly, digital image editing systems that utilize machine learning to infer properties from a digital image generally represent these properties in a latent feature space of the neural network. Thus, although such systems can edit digital images, they cannot easily manipulate physical properties because such properties are intrinsically represented as latent features (or otherwise combined) within layers of the neural network.
These and other problems exist with regard to digital image editing.