High dynamic range (HDR) imaging is a set of techniques used in imaging and photography to record greater luminance ranges than standard photographic techniques allow. Traditional cameras without an HDR function capture photographs with a limited dynamic range, resulting in a loss of scene detail. For example, when taking a non-HDR picture, shadows are commonly underexposed while highlights are overexposed, because of the limited dynamic range of the sensor. Common sensors, including charge-coupled device (CCD) sensors and complementary metal oxide semiconductor (CMOS) sensors, typically capture a dynamic range of about 1:1000, or 60 dB, of brightness intensity; that is, the brightest signal they can record is about 1000 times the darkest. However, many applications require working with wider dynamic range scenes, such as 1:10000, or 80 dB. HDR imaging techniques compensate for the loss of detail by capturing multiple photographs at different exposure levels and combining them into a single image with a broader tonal range. To facilitate the display of HDR images on devices with a lower dynamic range, tone mapping methods are applied to produce images with preserved local contrast.
To obtain multiple photographs for HDR imaging, modern cameras offer an automatic exposure bracketing (AEB) feature, which makes it easy to acquire a set of photographs at incremental exposure levels from underexposure to overexposure, together covering a far greater dynamic range.
Conventionally, the photographs taken for HDR imaging are combined and displayed using application software running on a PC. Nowadays, the image signal processor (ISP) inside modern cameras has become much more powerful than before, which motivates vendors to develop faster HDR imaging methods that enable a built-in HDR feature. This trend significantly enhances the convenience and efficiency of photography. Moreover, HDR video recording becomes possible if HDR images can be computed and displayed in real time.
A typical HDR process usually contains three steps: (1) estimation of the camera response function; (2) combination of a set of images at multiple exposures into an HDR image; and (3) tone mapping of the combined HDR image. The imaging process of the camera is modelled by a non-linear function g(X) that relates scene radiance Ei and exposure configuration kj to pixel brightness Zi,j, as denoted by Equation (1).

g(Zi,j) = ln Ei + ln kj  Equation (1)

wherein kj is associated with the aperture A (F-number), the exposure time t and the ISO speed S. Equation (1) is overdetermined because there are more equations than unknowns, and can be solved by the least squares method. Usually g(X) is implemented as a lookup table (LUT) from the gray scale to the radiance map.
After solving Equation (1), the combination of multiple exposures can be represented by Equation (2).
ln Ei = [Σj=1..P w(Zi,j)·(g(Zi,j) − ln kj)] / [Σj=1..P w(Zi,j)]  Equation (2)

where w(X) is a weighting function of the brightness, denoting the weight of Zi,j when recovering a scene radiance.
The resulting curves for the typical weighting functions are illustrated in FIG. 12.
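Equation (2) reduces to a per-pixel weighted average in the log domain once g is available as a lookup table. A minimal numpy sketch, assuming the hat-shaped weighting commonly used for w(X) (the function names here are illustrative, not from the source):

```python
import numpy as np

def hat_weight(Z, z_max=255):
    """Hat-shaped w(X): largest at mid-gray, small near under/overexposure."""
    Z = np.asarray(Z, dtype=float)
    return np.minimum(Z, z_max - Z) + 1.0

def combine_exposures(Z_stack, g_lut, ln_k):
    """Recover the log radiance map ln E per Equation (2).

    Z_stack : (P, H, W) integer images at P exposure levels.
    g_lut   : (256,) lookup table implementing g(X).
    ln_k    : (P,) log exposure configuration ln kj per image.
    """
    w = hat_weight(Z_stack)                                   # w(Z_ij)
    num = np.sum(w * (g_lut[Z_stack] - ln_k[:, None, None]), axis=0)
    den = np.sum(w, axis=0)
    return num / den                                          # ln E_i
```

Each exposure thus votes for the scene radiance it implies, g(Zi,j) − ln kj, and the weighting suppresses votes from clipped or noisy gray levels.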
There are many tone mapping operators in the literature for the radiance map of Equation (2). For a normal display, a simple tone mapping operator is denoted in Equation (3).
L′d = Ld/(1 + Ld)  Equation (3)

where Ld = a·Ei/exp((1/N)·Σj=1..N ln Ej) is a scaled luminance, the denominator being the log-average of the radiance map over its N pixels, and a = 0.18.
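The operator of Equation (3) can be sketched in a few lines of numpy; the small epsilon guarding the logarithm is an illustrative assumption to keep the log-average finite for zero-radiance pixels.

```python
import numpy as np

def tone_map(E, a=0.18, eps=1e-6):
    """Global tone mapping per Equation (3): L'_d = L_d / (1 + L_d),
    where L_d = a * E / exp(mean(ln E)) scales by the log-average radiance."""
    E = np.asarray(E, dtype=float)
    log_avg = np.exp(np.mean(np.log(E + eps)))  # log-average over N pixels
    L_d = a * E / log_avg
    return L_d / (1.0 + L_d)                    # compresses into [0, 1)
```

Because Ld/(1 + Ld) is bounded by 1, arbitrarily bright radiances are compressed into the displayable range while mid-tones near the key value a are left nearly linear.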
As can be seen from Equations (1)-(3), conventional HDR imaging methods usually require large amounts of computational resources. For instance, the conventional methods rely on a least squares solve, which is usually carried out by singular value decomposition or QR decomposition. In addition, Equation (2) and Equation (3) require pixel-wise exponential and logarithm operations. The computational complexity therefore becomes the main obstacle to built-in HDR features, and makes HDR video impossible.
Moreover, the conventional methods for HDR imaging based on the "Red, Green, Blue" (RGB) color space have several disadvantages. Firstly, because all RGB channels are correlated with the luminance, the estimation of the camera response function has to be performed on all three channels and is therefore computationally expensive. Secondly, it is difficult for the sampling to cover the full range of gray scales, and sampling bias may degrade the estimation. Thirdly, chromatic noise in low light may further degrade the estimation. Finally, the tone mapping may lead to color distortion, such as white balance errors.
Therefore, there is a need for a system and method for combining a set of images into a blended HDR image in a time- and cost-efficient manner while preserving image quality for HDR imaging.