Most existing cameras can only capture a limited range of illumination within a single exposure. Incorrect camera settings, combined with the physical constraints of camera sensors, limit the maximum irradiance value that can be captured and stored as an image for a given exposure time and amplifier gain. Scene irradiance values beyond the maximum capacity of the camera sensor elements therefore cannot be accurately captured, resulting in over-exposed or color-saturated regions in the captured images.
Saturation of bright image regions is undesirable because it visually degrades the image. Missing detail in bright regions is particularly problematic in the context of high dynamic range (HDR) imaging, and in particular in reverse tone mapping. Recent developments in display capabilities suggest that consumer displays are likely to show HDR content in the near future, requiring solutions for preparing legacy content for this new display modality. As most current reverse tone mapping solutions detect and expand bright regions in the image, it is crucial that missing color information in these regions is recovered in a robust manner.
Image color saturation can take two forms, which are typically treated using significantly different methods. If all three color channels in a saturated region are clipped, no robust color information remains in that part of the image; correction methods then either rely on probabilistic information obtained from the rest of the image to hallucinate the missing pixels, or fill in the missing information based on a global prior derived from the image. On the other hand, if only one or two of the RGB components are saturated in a part of the image, the missing information in the clipped channels can be reconstructed using the color information in the non-clipped channels, possibly combined with information from nearby image regions. The invention concerns this second form of image color saturation.
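The distinction between the two forms can be made per pixel by counting clipped channels. The following is a minimal sketch of such a classification; the function name and the clipping threshold of 0.98 are illustrative assumptions, not taken from the source.

```python
import numpy as np

def classify_saturation(img, threshold=0.98):
    """Split saturated pixels into the two forms of color saturation.

    img: H x W x 3 array with values in [0, 1].
    threshold: illustrative clipping level (assumption, not from the source).
    Returns two boolean masks: fully clipped (all three channels saturated,
    no reliable color information left) and partially clipped (one or two
    channels saturated, so unclipped channels can guide reconstruction).
    """
    clipped = img >= threshold            # per-channel clipping mask
    n_clipped = clipped.sum(axis=2)       # 0..3 clipped channels per pixel
    fully_clipped = n_clipped == 3
    partially_clipped = (n_clipped >= 1) & (n_clipped <= 2)
    return fully_clipped, partially_clipped
```

Only the partially clipped mask identifies pixels amenable to the cross-channel reconstruction methods discussed next.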
Because of the strong correlation between color channels specifically in the RGB color space (see E. Reinhard and T. Pouli, “Colour spaces for colour transfer,” in IAPR Computational Color Imaging Workshop, ser. Lecture Notes in Computer Science, R. Schettini, S. Tominaga, and A. Tremeau, Eds. Springer Berlin Heidelberg, 2011—invited paper, vol. 6626, pp. 1-15), most methods related to the second form above process images in the RGB color space. In the simplest case, the global correlation between color channels can be determined in the form of a global Bayesian model and used in the process of recovery of the missing color information due to saturation (see X. Zhang and D. H. Brainard, “Estimation of saturated pixel values in digital color imaging,” JOSA A, vol. 21, no. 12, pp. 2301-2310, 2004).
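To illustrate how global cross-channel correlation can be exploited, the sketch below fits a simple linear model predicting the red channel from green and blue over unsaturated pixels; this is a deliberately simplified stand-in for the cited Bayesian estimator, and all names and parameters are assumptions for illustration.

```python
import numpy as np

def fit_global_red_model(img, threshold=0.98):
    """Fit a global linear model R ~ a*G + b*B + c on unsaturated pixels.

    Illustrates exploiting global correlation between RGB channels;
    the cited work of Zhang and Brainard uses a Bayesian model instead.
    """
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    valid = (img < threshold).all(axis=2)        # pixels with no clipping
    A = np.stack([g[valid], b[valid], np.ones(valid.sum())], axis=1)
    coeffs, *_ = np.linalg.lstsq(A, r[valid], rcond=None)
    return coeffs                                 # (a, b, c)

def predict_red(img, coeffs):
    """Predict red values (e.g. in clipped regions) from the global model."""
    a, b, c = coeffs
    return a * img[..., 1] + b * img[..., 2] + c
```

The fitted model can then replace clipped red values with predictions driven by the surviving green and blue channels.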
Correlation between color channels can be exploited by using information in unclipped color channels within the same spatial region. In S. Z. Masood, J. Zhu, and M. F. Tappen, “Automatic correction of saturated regions in photographs using cross-channel correlation”, Computer Graphics Forum, vol. 28, no. 7, Wiley Online Library, 2009, pp. 1861-1869, the Bayesian modelling method of Zhang et al. (see above) is extended by using the relationship between pixels and their neighbourhoods to estimate ratios between the RGB channels.
This approach relies on the minimization of two cost functions, which significantly increases its computational complexity. In addition, the use of neighbourhood information may result in incorrect hue values.
An object of the invention is to avoid the aforementioned drawbacks.