1. Field of the Invention
The present invention relates to an image processing apparatus that applies an image process to a photographic image, and a method thereof.
2. Description of the Related Art
In recent years, digital cameras have made remarkable progress, and various functions have advanced, such as increases in the number of pixels and in sensitivity, and the implementation of camera shake correction. Also, price reductions of digital single-lens reflex cameras have progressed, promoting the further prevalence of digital cameras.
One advantage of a digital camera over a film camera is that it is easy to retouch an image after capture, since the image is handled as digital data. A digital camera converts light focused on a sensing device such as a CCD (charge coupled device) or CMOS (complementary metal oxide semiconductor) sensor into an electrical signal, and saves a photographic image that has undergone an image process inside the camera on a recording medium as digital data. As a data format, a data compression format such as JPEG (Joint Photographic Experts Group) is generally used. In recent years, photographic data formats that allow retouching with a higher degree of freedom have also become available.
Such photographic image data, obtained by converting the signal output from a sensing device into digital data without applying demosaicing, will be referred to as RAW data hereinafter. Because demosaicing has not been applied, RAW data cannot be displayed as a normal image as is.
The RAW data will be briefly described below. Most sensing apparatuses such as digital cameras and digital video cameras acquire color information of an object by arranging specific color filters in front of the respective photoelectric conversion elements of a sensing device. This type will be referred to as a 1-CCD type hereinafter. FIG. 8 is a view showing a Bayer arrangement, known as a typical color filter arrangement used in a 1-CCD type digital camera or digital video camera. In the case of a 1-CCD type sensing apparatus, it is impossible to obtain, from an element in front of which a filter of a specific color exists, a signal of another color. Hence, a signal of another color is calculated by interpolating the signals of neighboring elements. This interpolation process will be referred to as demosaicing hereinafter.
Demosaicing will be described below taking as an example a case in which color filters have the Bayer arrangement shown in FIG. 8. A sensor signal which is obtained from a photoelectric conversion device and includes RGB colors is separated into three planes, that is, R, G, and B planes. FIGS. 9A to 9C are views showing planes obtained by separating a sensor signal. FIG. 9A shows an R plane, FIG. 9B shows a G plane, and FIG. 9C shows a B plane. Note that “zero” is inserted in a pixel whose value is unknown (a pixel corresponding to an element in front of which a filter of a color other than the color of interest is arranged) in each plane.
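As an illustrative sketch of the plane separation described above (not part of the original disclosure), the following assumes an RGGB Bayer arrangement in the spirit of FIG. 8 and inserts zero at each pixel whose value is unknown:

```python
import numpy as np

# Hypothetical 4x4 Bayer mosaic; the RGGB layout assumed here is one
# common form of the Bayer arrangement (even rows: R G R G...,
# odd rows: G B G B...).
mosaic = np.arange(1, 17, dtype=float).reshape(4, 4)

def split_bayer_planes(mosaic):
    """Separate an RGGB Bayer mosaic into R, G, and B planes,
    inserting zero at every pixel whose value is unknown."""
    h, w = mosaic.shape
    r = np.zeros((h, w))
    g = np.zeros((h, w))
    b = np.zeros((h, w))
    r[0::2, 0::2] = mosaic[0::2, 0::2]   # R at even row, even column
    g[0::2, 1::2] = mosaic[0::2, 1::2]   # G at even row, odd column
    g[1::2, 0::2] = mosaic[1::2, 0::2]   # G at odd row, even column
    b[1::2, 1::2] = mosaic[1::2, 1::2]   # B at odd row, odd column
    return r, g, b

r, g, b = split_bayer_planes(mosaic)
```

Because the three color positions partition the mosaic, summing the planes reconstructs the original sensor signal exactly.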
FIGS. 10A and 10B show convolution filters, which are used to interpolate the respective planes to perform demosaicing. FIG. 10A shows a filter used to interpolate the R and B planes, and FIG. 10B shows a filter used to interpolate the G plane.
FIGS. 11A and 11B show how the G plane is demosaiced. FIG. 11A shows the state of the G plane before demosaicing, in which the value of a central pixel is unknown, and zero is inserted. FIG. 11B shows the state of the G plane after demosaicing, in which the average value of values of upper, lower, right, and left neighboring pixels is assigned to the central pixel whose value is unknown. For the R and B planes as well, a pixel whose value is unknown is interpolated using the values of surrounding pixels as in the G plane.
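The figures themselves are not reproduced here; as a sketch of the interpolation just described, the kernels below are the standard bilinear ones that yield the four-neighbor average for the G plane (in the spirit of FIGS. 10A and 10B). Because the kernels are symmetric, correlation and convolution coincide:

```python
import numpy as np

def convolve2d(plane, kernel):
    """3x3 convolution with zero padding, using numpy only.
    (For these symmetric kernels, convolution equals correlation.)"""
    h, w = plane.shape
    padded = np.pad(plane, 1)
    out = np.zeros((h, w))
    for dy in range(3):
        for dx in range(3):
            out += kernel[dy, dx] * padded[dy:dy + h, dx:dx + w]
    return out

# Bilinear interpolation kernels (assumed forms of the FIG. 10A/10B filters).
K_G  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]], float) / 4.0  # G plane
K_RB = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], float) / 4.0  # R, B planes

# G plane as in FIG. 11A: the central pixel is unknown (zero inserted).
g = np.array([[0., 1., 0.],
              [3., 0., 5.],
              [0., 7., 0.]])
demosaiced = convolve2d(g, K_G)
# The central pixel becomes the average of its four neighbors: (1+3+5+7)/4 = 4
```

Note that known G pixels are left unchanged by K_G, since their four neighbors in the G plane are all zero.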
Note that the demosaicing process can also be attained by various other methods in addition to the aforementioned method. For example, a method of adaptively interpolating an unknown value with the average of the values of upper and lower neighboring pixels or with the average of the values of right and left neighboring pixels instead of the average of the values of upper, lower, right, and left neighboring pixels may be used.
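One way such an adaptive method might be sketched (an illustration only, not the claimed method) is to compare the horizontal and vertical gradients at each unknown pixel and average along the direction with the smaller gradient:

```python
import numpy as np

def adaptive_interpolate(g):
    """Edge-directed interpolation of unknown (zero) interior G pixels:
    average the left/right or up/down neighbors, whichever pair has
    the smaller gradient, instead of always averaging all four."""
    h, w = g.shape
    out = g.astype(float).copy()
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if g[y, x] != 0:                       # known pixel: keep as-is
                continue
            dh = abs(g[y, x - 1] - g[y, x + 1])    # horizontal gradient
            dv = abs(g[y - 1, x] - g[y + 1, x])    # vertical gradient
            if dh <= dv:
                out[y, x] = (g[y, x - 1] + g[y, x + 1]) / 2.0
            else:
                out[y, x] = (g[y - 1, x] + g[y + 1, x]) / 2.0
    return out

g = np.array([[0., 1., 0.],
              [3., 0., 5.],
              [0., 7., 0.]])
result = adaptive_interpolate(g)
# Horizontal gradient |3-5| < vertical gradient |1-7|,
# so the center becomes (3+5)/2 = 4
```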
RAW data is temporarily saved in a recording medium at the time of image capture, and then undergoes an image process such as a demosaicing process by software or the like that runs on a personal computer (PC). After the image process, the image can be displayed or saved in a recording medium after it is converted into a general-purpose data format such as JPEG. That is, the process of using RAW data in a digital camera corresponds to the exposure and development processes of a film camera.
The use of RAW data allows the user to adjust an image process corresponding to development (to be referred to as a development process hereinafter), thus increasing the degree of freedom in retouching. Since RAW data has a large number of bits per pixel and is losslessly compressed, a development process with less deterioration of image quality can be attained.
Software used to implement the development process (to be referred to as development software hereinafter) generally includes an interface for a display function that displays an image after the development process, and an interface for an adjustment function that adjusts parameters for the development process. The parameters for the development process that allow user adjustments include parameters of an edge-emphasis process, those of a blur process, color adjustment parameters, and those associated with demosaicing. The user adjusts the parameters for the development process while observing a displayed image, and the display image is updated based on the adjusted parameters. The user decides the parameters for the development process to obtain a desired display image by repeating this sequence.
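The adjust-and-redisplay cycle above can be sketched as follows; `develop` and the `exposure` parameter are illustrative stand-ins, not names from any actual development software:

```python
# Hypothetical sketch of the parameter-adjustment loop: the display image
# is regenerated from the RAW data each time a parameter changes.
def develop(raw_pixels, params):
    """Placeholder development process: apply an exposure gain and clamp."""
    return [min(255, int(v * params["exposure"])) for v in raw_pixels]

raw_pixels = [10, 20, 30]                 # stand-in for RAW data
params = {"exposure": 1.0}
preview = develop(raw_pixels, params)     # initial display image

# The user adjusts a parameter, and the display image is updated.
params["exposure"] = 2.0
preview = develop(raw_pixels, params)
```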
Most digital cameras that allow the use of RAW data are high-end models such as digital single-lens reflex cameras. High-end models generally have a large number of photographic pixels, so the size of the RAW data becomes quite large. For this reason, the computational load of the development process is heavy, and a long time is required from when the user adjusts the parameters for the development process until the reprocessed image is displayed. A technique disclosed in Japanese Patent Laid-Open No. 2004-040559 improves the processing efficiency by applying an image process to only a partial region of a photographic image and displaying that region.
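The idea of processing only a partial region while tuning parameters can be sketched as follows; the `develop` function and all sizes are illustrative assumptions, not the disclosed implementation:

```python
import numpy as np

def develop(region, gain):
    """Placeholder development process: apply a gain and clamp to [0, 255]."""
    return np.clip(region * gain, 0.0, 255.0)

image = np.full((400, 600), 100.0)   # stand-in for a much larger RAW image

# Process only a small reference region during parameter adjustment,
# rather than the whole image, to shorten each preview update.
y, x, size = 100, 150, 64
preview = develop(image[y:y + size, x:x + size], gain=1.5)
```

The cost of each update then scales with the reference-region size rather than with the full image size.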
However, with the technique disclosed in Japanese Patent Laid-Open No. 2004-040559, since the user decides the parameters for the development process with reference to a partial region of a photographic image (to be referred to as a reference region hereinafter), how the development process influences regions other than the reference region after the parameters are adjusted is unknown. In other words, when the development process is applied to the entire image after the parameters for the development process are adjusted, an image as the user intended can be obtained in the reference region, but an image as the user intended often cannot be obtained in regions other than the reference region.