1. Field of the Invention
The present invention relates to an image processing apparatus and an image processing method for appropriately correcting image data. More specifically, the present invention relates to an image processing apparatus, an image processing method, and a computer program for detecting a main subject both when photography is performed and from image data of an image obtained by photography, and for correcting the image data based on the detection results.
2. Description of the Related Art
In recent years, many digital cameras, and many printers having a high-quality photographic printing capability, execute a process of analyzing and correcting a photographic image when photography or printing is performed. In particular, a face detecting function and an organ (e.g., eye) detecting function have received attention as functions used to specify a person who is the main subject in a photographic scene or in an image. For example, a digital camera detects a face in the image and controls autofocus (AF) and exposure in accordance with the detected face, whereas a printer controls correction for printing based on data concerning the detected face.
These circumstances have created an environment in which input equipment such as a digital camera, output equipment such as a printer, and sophisticated PC applications all commonly include a face detecting function.
However, the term “face detecting function” actually covers various specifications and features.
For example, a digital camera detects a face during photography, and hence the face is detected in a moving (live) image. Therefore, the digital camera employs face detection algorithms and face detection parameters that must provide a real-time capability and a capability of tracking a face region. In particular, face detection is combined with exposure correction during photography. For example, in a backlit scene in which the face is dark, exposure is increased when detection processing is performed so that the face can be detected more easily; conversely, in a bright scene, exposure is decreased. In this way, the face is detected while the input image is dynamically changed.
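The exposure-adaptive detection described above can be sketched as follows. This is a hypothetical Python illustration only: the luminance thresholds, the EV steps, and the `detect` callback (standing in for a camera-side routine that re-meters with a given exposure compensation and returns face regions) are all assumptions, not part of any actual camera firmware.

```python
def detect_face_with_adaptive_exposure(mean_luminance, detect):
    """Choose an exposure compensation (EV shift) from scene brightness,
    then run the face detector on the re-exposed live image.

    `detect(ev)` is a hypothetical callback: it re-meters the live image
    with the given EV shift and returns a list of face boxes (possibly []).
    Thresholds assume 8-bit luminance (0-255) and are illustrative.
    """
    if mean_luminance < 64:        # backlit / dark face: brighten first
        ev = +1.0
    elif mean_luminance > 192:     # very bright scene: darken first
        ev = -1.0
    else:                          # mid-tone scene: detect as-is
        ev = 0.0
    return detect(ev)

# Toy usage: a stub detector that "finds" a face only after brightening,
# mimicking a backlit scene in which the face is initially too dark.
faces = detect_face_with_adaptive_exposure(
    40, lambda ev: [(120, 80, 64, 64)] if ev > 0 else [])
print(faces)
```

The point of the sketch is simply that the detector's input is changed dynamically (via exposure) rather than the detector itself being made more sensitive.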
Additionally, face detection is combined with an acceleration sensor or an angle sensor. As a result, the face-detecting direction can be limited to the upward direction in which the camera is pointed for photography, and hence detection time can be shortened. Additionally, the user frames and focuses the camera while viewing face-detection results shown on a viewfinder or on a liquid crystal display, and hence satisfactory performance can be maintained even with a detection technique that is not highly accurate. When photography is performed, the user can also determine whether a false detection has occurred, and hence, advantageously, the number of such false detections can ultimately be reduced. Still additionally, since information about the camera-to-subject distance, the focal point, etc. can be observed immediately, total face detection performance can be increased by feeding this information back into face detection.
On the other hand, unlike the digital camera, a printer generally performs face detection on still-image data when an image is output, and the printer is not required to have a real-time capability. Additionally, since the printer cannot directly use the information that a digital camera obtains from focusing and similar operations, much time is liable to be consumed for detection processing. Therefore, the printer controls the parameters of its face detector by using information from the Exif (Exchangeable image file format) tag, in which various pieces of control information recorded by the digital camera at the time of photography are written.
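The Exif-driven parameter control mentioned above can be sketched as follows. The tag names loosely follow real Exif fields (`SubjectDistance`, `Orientation`), but the dictionary interface, the thresholds, and the parameter names (`min_face_px`, `search_orientations`) are illustrative assumptions, not a real printer API.

```python
def detector_params_from_exif(exif):
    """Sketch: derive face-detector parameters from Exif hints written
    by the camera at photography time.

    `exif` is a plain dict standing in for parsed Exif tags; the keys
    and thresholds here are illustrative, not an exact Exif schema.
    """
    # Default: search all sizes and all four orientations exhaustively.
    params = {"min_face_px": 24, "search_orientations": [0, 90, 180, 270]}
    # A short subject distance suggests a large face; skip the smallest
    # window sizes to save detection time.
    if exif.get("SubjectDistance", float("inf")) < 1.5:  # metres
        params["min_face_px"] = 64
    # A recorded camera orientation lets the printer search one
    # direction only (Exif Orientation 1 = normal, top-left).
    if exif.get("Orientation") == 1:
        params["search_orientations"] = [0]
    return params

print(detector_params_from_exif({"SubjectDistance": 1.0, "Orientation": 1}))
```

The design point is that the camera's capture-time knowledge narrows the printer's otherwise expensive search space.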
A dominant feature of face detection performed by the printer is that, because the printer is not required to have a real-time capability, a face can be detected while the face detection direction is changed and the face size is changed little by little.
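Such an exhaustive search over directions and gradually changing sizes can be sketched as follows; the scale step, window stride, and orientation set are illustrative assumptions about a generic sliding-window detector, not a specific printer's algorithm.

```python
def exhaustive_scan(width, height, min_size=24, scale_step=1.25,
                    orientations=(0, 90, 180, 270)):
    """Enumerate the candidate face windows a non-real-time detector can
    afford to test: every orientation, with the window size grown little
    by little by `scale_step`, sliding at half-window strides."""
    windows = []
    size = min_size
    while size <= min(width, height):
        for angle in orientations:
            for y in range(0, height - size + 1, size // 2):
                for x in range(0, width - size + 1, size // 2):
                    windows.append((x, y, size, angle))
        size = int(size * scale_step)
    return windows

candidates = exhaustive_scan(160, 120)
print(len(candidates))  # far more windows than a real-time pass could try
```

Each `(x, y, size, angle)` tuple would be scored by the actual face classifier; a real-time camera must prune this enumeration aggressively, whereas a printer can walk it in full.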
Additionally, whereas the digital camera is merely required to detect the face position roughly, the printer can easily detect the face position, face size, face direction, etc., in more detail. Furthermore, if processing is performed by, for example, a high-performance PC (Personal Computer), the face can be detected with higher accuracy than in the digital camera.
Thus, an environment has developed in which various pieces of equipment, in particular both an input device (e.g., a digital camera) and an output device (e.g., a printer), include a function, such as a face detecting function, to detect a region (i.e., a part of a person) that specifies a person in image data, and in which these devices differ in their detection performance. Hereinafter, the “region (part of a person) that specifies a person” will be referred to as the “main subject.”
To finally obtain a desired image as an output result from an output device by applying optimum correction processing, it is important to detect more correctly the region that photographically identifies a person (for example, a main subject that is a face region). To achieve this, it is possible to increase the detection performance of the output device for the region by which a person is specified, such as its face detection performance. However, increasing the detection performance complicates processing and increases the processing load. In addition, there are cases in which a main subject can be detected only when the image is obtained by the input device. Therefore, a technique has been proposed that uses the results of a main-subject detecting function (e.g., a face detecting function) installed in both the input device and the output device. This technique makes it possible to ascertain the person-specifying region (i.e., the main subject) more accurately.
However, if there is a difference between the detection result of the main subject obtained by the input device and that obtained by the output device, problems occur in subsequent processing. For example, if a face region cannot be accurately ascertained, the application of inappropriate correction processing may produce an extremely dark or bright image or may destroy the color balance. Because of the risk of such great changes, strong correction cannot be performed, and hence, to reduce the influence of these disadvantages, conventional techniques have been forced to apply correction at a level lower than is desirable.
Additionally, as mentioned above, the input device and the output device differ from each other in detection characteristics and in the intended use of the detection results, and hence differences arise in the face detection rate, in the false detection rate, and in the detection performance for a face region. In other words, the detection rate of the region (main subject) that specifies a person, the detection performance obtained when this region is detected, and the false detection rate each depend on the particular device.
Therefore, if there is a difference between the detection result of the input device and that of the output device, problems arise as to whether priority should be given to the detection result of the input device or to that of the output device, and as to how both results should be blended so as to obtain a region (a main subject such as a face region) more suitable for use in correction.
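One plausible way to blend the two results, shown purely as an illustration (the overlap threshold, the blending weight, and the choice to fall back to the printer-side result are all assumptions, not the method claimed by the invention), is to merge the boxes when they agree and otherwise prefer one device:

```python
def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    ix = max(0, min(ax2, bx2) - max(a[0], b[0]))
    iy = max(0, min(ay2, by2) - max(a[1], b[1]))
    inter = ix * iy
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

def merge_detections(camera_box, printer_box, camera_weight=0.3):
    """If the two devices' face regions roughly agree (IoU >= 0.5), blend
    them coordinate-wise; otherwise trust the printer-side result, which
    is assumed here to be the more precise one. Weights are illustrative."""
    if iou(camera_box, printer_box) >= 0.5:
        w = camera_weight
        return tuple(round(w * c + (1 - w) * p)
                     for c, p in zip(camera_box, printer_box))
    return printer_box

# Nearby boxes are blended; disjoint boxes fall back to the printer's.
print(merge_detections((100, 100, 80, 80), (104, 98, 84, 84)))
```

A real system would also have to weigh each device's detection rate and false detection rate, which, as noted above, differ between devices.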
The present invention has been developed in consideration of these problems. It is therefore an object of the present invention to provide an image processing apparatus and an image processing method capable of appropriately performing correction processing by using the detection results of a main subject obtained by different devices.