When processing images, users often need to modify two aspects of an image at the same time, for example adjusting the contrast and luminance simultaneously, or removing noise from an image without affecting its contrast. However, the overall quality of an image when two or more aspects are adjusted is difficult for a user to predict. For example, an adjustment in contrast that is acceptable on its own may become either too strong or not strong enough when a luminance adjustment is also made to the image. This problem is made even more complicated when adjustments are performed locally. Users of image editing software often need to resort to trial and error and iterative approaches to obtain a suitable result.
A related problem in the field of image processing is how to predict whether an observer will prefer one image over another, and how to process imagery, or guide the processing of imagery, to provide the observer with the best possible image. A model of image preference could help the user to adjust multiple aspects of the image by predicting the optimal adjustment of a second aspect (such as luminance) in response to the user's adjustment of a first aspect (such as contrast).
Many methods are known for predicting observer preference between two images where one has undergone a transformation. Preference scores based on signal processing techniques such as Mean Square Error (MSE) and Peak Signal-to-Noise Ratio (PSNR) have some ability to predict observer preference, but in many cases fail to make reliable predictions. There are known methods that analyse the structural content of images and estimate observer preference based on the notion that observers prefer structural details to be preserved or enhanced in the transformed image. Other methods attempt to use machine learning to predict observer preference based on features extracted from the original image, the transformed image, or both images. Yet other methods attempt to partition the problem of estimating observer preference for an entire image into a set of sub-problems, each attempting to estimate observer preference for a sub-aspect of the image. For example, sub-aspects of an image might include the relative quality of colour reproduction between the original and the transformed image, the quality of brightness reproduction, the quality of detail preservation and the presence or absence of artefacts in the transformed image. By merging the observer preference scores for each of the sub-aspects, a prediction of the overall observer preference score for the transformed image is obtained.
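The two signal-processing scores mentioned above can be sketched as follows. This is a minimal illustration using NumPy; the peak value of 255 assumes 8-bit greyscale imagery and is an assumption, not part of any particular prior-art method.

```python
import numpy as np

def mse(original: np.ndarray, transformed: np.ndarray) -> float:
    """Mean Square Error between two equally sized images."""
    diff = original.astype(np.float64) - transformed.astype(np.float64)
    return float(np.mean(diff ** 2))

def psnr(original: np.ndarray, transformed: np.ndarray, peak: float = 255.0) -> float:
    """Peak Signal-to-Noise Ratio in decibels; higher means the images are closer."""
    err = mse(original, transformed)
    if err == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / err)
```

Note that both scores depend only on the pixel-wise difference between the two images, which is one reason they correlate poorly with observer preference: two very different transformations can yield the same MSE.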
However, known methods to quantify observer preference fail to perform well in many circumstances. Most of the known measures of image quality, such as SSIM, PSNR and MSE, are “Full Reference” measures that use the original image as a reference image. These measures consider any deviation of the transformed image from the original image to be a fault to be corrected. Hence these measures cannot be used to process an original image to obtain a transformed image that an observer will prefer over the original image, even though this is often the user's goal when adjusting multiple aspects of an image after capture. There are other measures of image quality that do not require a reference (so-called “No Reference” image quality measures). However, these measures generally rely on the detection of specific forms of artefacts, such as JPEG blocking artefacts, and so are not useful for processing images generally to improve their rendering. A method has been developed that compares two images and indicates which of the two images is likely to be preferred by an observer, and whether an observer will be indifferent to the differences between the two images.
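To illustrate the kind of artefact-specific detection that No Reference measures rely on, the sketch below estimates JPEG blockiness by comparing pixel differences across the 8-pixel block grid with differences elsewhere. The 8-pixel grid and the simple ratio are assumptions for illustration, not a standardised metric; the point is that such a detector says nothing about images whose quality issues are not blocking artefacts.

```python
import numpy as np

def blockiness(image: np.ndarray, block: int = 8) -> float:
    """Ratio of neighbour differences at block boundaries to those elsewhere.

    Values much greater than 1 suggest JPEG-style blocking artefacts.
    """
    img = image.astype(np.float64)
    diffs = np.abs(np.diff(img, axis=1))       # horizontal neighbour differences
    cols = np.arange(diffs.shape[1])
    at_boundary = (cols + 1) % block == 0      # differences straddling a block edge
    boundary = diffs[:, at_boundary].mean()
    interior = diffs[:, ~at_boundary].mean()
    return boundary / (interior + 1e-12)
```

A smooth gradient image scores close to 1, while an image with hard edges on the 8-pixel grid scores far above 1; an image degraded by, say, noise or blur would not be flagged at all, which is the limitation noted above.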
A disadvantage of this method is that it involves a number of polynomial functions that are not easily inverted to automatically obtain an optimal transformed image, or to suggest modifications to a second aspect of an image (such as luminance) in response to the user's modification of a first aspect of the image (such as contrast). Optimisation methods can be used to compute or determine the optimal transformed image; however, there are a number of problems with prior art optimisation methods. Because optimisation methods such as gradient descent or simulated annealing do not take into account the particular form of the function to be optimised, they treat each pixel in the original image as a variable to be optimised. For imagery from modern cameras and scanners, this results in optimisation problems involving millions or billions of variables. Solving such problems is slow and computationally expensive using existing technology.
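The scaling problem with generic optimisation can be sketched as follows. Plain gradient descent over a toy quadratic objective (stay close to the original image while pulling its mean luminance toward a target) must update one variable per pixel on every iteration; the objective here is an assumed stand-in for illustration, not the preference model itself.

```python
import numpy as np

def optimise_pixelwise(original: np.ndarray, target_mean: float,
                       steps: int = 200, lr: float = 0.1) -> np.ndarray:
    """Gradient descent treating every pixel as a free variable.

    Minimises 0.5*||x - original||^2 + 0.5*n*(mean(x) - target_mean)^2,
    where n is the number of pixels.
    """
    x = original.astype(np.float64).copy()     # one optimisation variable per pixel
    for _ in range(steps):
        # Gradient of the objective with respect to each pixel:
        grad = (x - original) + (x.mean() - target_mean)
        x -= lr * grad                         # full-image update every iteration
    return x
```

Even for this trivially smooth objective, a 50-megapixel image means fifty million variables updated per step; methods such as simulated annealing, which explore the variable space stochastically, scale worse still.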