Digital image processing devices, whether on multi-function devices such as smartphones or on dedicated digital cameras, use automatic features to increase the quality of an image, such as the preview screen on the digital camera as well as the recorded image and recorded video. These include features used to set parameters to capture images, such as the 3A features of automatic focus (AF) and automatic exposure control (AEC). They also include other features that modify image data during processing of the captured images, such as the 3A automatic white balance (AWB) feature, and other non-3A features or algorithms that use statistics of a frame, such as Local Tone Mapping and Global Tone Mapping (LTM and GTM) for dynamic range conversion, as well as Digital Video Stabilization (DVS).
The digital image processing devices use AWB in order to provide accurate colors for pictures reproduced from captured images. AWB is a process that finds or defines the color white in a picture, called the white point; the other colors in the picture are determined relative to the white point. The AWB adjusts the gains of the different color components (for example, red, green, and blue) with respect to each other in order to present white objects as white, despite differences in the color temperature of the image scenes or different sensitivities of the color components.
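The per-channel gain adjustment described above can be sketched as follows. This is a minimal illustration using the gray-world assumption (one common way to estimate the white point; actual devices may use other estimators), and the function names are hypothetical:

```python
import numpy as np

def gray_world_awb_gains(image):
    """Estimate per-channel white balance gains under the gray-world
    assumption: the average color of the scene is neutral, so gains are
    chosen to equalize the channel means (green as the reference).
    image: HxWx3 array of linear RGB values in [0, 1]."""
    means = image.reshape(-1, 3).mean(axis=0)  # per-channel mean
    return means[1] / means                    # normalize to green

def apply_awb(image, gains):
    # Multiply each channel by its gain; clip to the valid range.
    return np.clip(image * gains, 0.0, 1.0)
```

For example, a frame with channel means (0.8, 0.5, 0.25) yields gains (0.625, 1.0, 2.0), which pull all three channels toward the same neutral level.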
Automatic focus automatically adjusts the camera lens position to provide the proper focus of the objects in a scene being video recorded. In various implementations, digital cameras may use phase detection autofocus (PDAF, also referred to as phase autofocus or phase based autofocus), contrast based autofocus, or both. Phase detection autofocus separates left and right light rays through the camera's lens to sense left and right images. The left and right images are compared, and the difference in image positions on the camera sensors can be used to determine a shift of a camera lens for autofocus. Contrast autofocus measures the luminance contrast over a number of lens positions until a maximum contrast is reached. A difference in intensity between adjacent pixels of the camera's sensor naturally increases with correct image focus, so that a maximum contrast indicates the correct focus. Other methods for AF may be used as well.
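The contrast-based search described above can be sketched as follows. The adjacent-pixel focus metric and the `capture_at` callback are hypothetical illustrations of the technique, not any particular camera API:

```python
import numpy as np

def contrast_metric(image):
    """Simple sharpness measure: sum of squared differences between
    adjacent pixels. The value rises as the image comes into focus."""
    dx = np.diff(image, axis=1)
    dy = np.diff(image, axis=0)
    return float((dx ** 2).sum() + (dy ** 2).sum())

def contrast_autofocus(capture_at, lens_positions):
    """Sweep the lens positions, capture a luminance frame at each
    (capture_at is a hypothetical camera callback), and return the
    position that maximizes the contrast metric."""
    scores = {pos: contrast_metric(capture_at(pos)) for pos in lens_positions}
    return max(scores, key=scores.get)
```

In practice a camera would use a hill-climbing sweep rather than an exhaustive one, but the principle is the same: the lens position at peak contrast is taken as the in-focus position.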
Automatic exposure control is used to automatically compute and apply the correct exposure necessary to capture and generate a good quality image. Exposure is the amount of incident light captured by a sensor, and it may be adjusted by varying the camera's aperture size and shutter speed, as well as neutral density (ND) filter control (when present) and flash power, some of which may be electronic systems rather than mechanical devices. An ND filter is sometimes used with mechanical shutters when the mechanical shutter is not fast enough for the brightest illumination conditions. The AEC also may calculate an analog gain, and a digital gain when present, that amplify the raw image signal resulting from the exposure time used. Together, the exposure parameters determine a total exposure time, referred to herein as the total exposure. The gains affect the signal level, or brightness, of the RAW image that comes from the camera sensor. If the total exposure is too short, images will appear darker than the actual scene, which is called under-exposure, and detail can be lost entirely if the signal falls below a noise floor or is quantized to zero. On the other hand, if the total exposure is too long, the output images will appear brighter than the actual scene, which is called over-exposure. Both cases may result in a loss of detail and a poor quality image.
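The relationship between measured brightness, exposure time, and gain can be illustrated with a minimal sketch. The parameter names, the limits, and the 18% mid-gray target are illustrative assumptions rather than values from any particular AEC implementation:

```python
def auto_exposure_step(mean_luma, exposure_time, analog_gain,
                       target_luma=0.18, max_time=0.033, max_gain=16.0):
    """One AEC iteration (hypothetical parameters): scale the total
    exposure so the measured mean luminance moves toward the target,
    preferring exposure time over gain since gain also amplifies noise."""
    # Total exposure combines time and gain; brightness scales with it.
    total = exposure_time * analog_gain
    desired_total = total * (target_luma / max(mean_luma, 1e-6))
    # Extend the exposure time first, up to the frame-time limit...
    new_time = min(desired_total, max_time)
    # ...then cover the remainder with analog gain, within sensor limits.
    new_gain = min(max(desired_total / new_time, 1.0), max_gain)
    return new_time, new_gain
```

For instance, a frame metered at half the target luminance doubles the total exposure; once the frame-time limit is hit, further brightening comes from analog gain.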
In order to perform these 3A adjustments, as well as the processing modifications provided by other algorithms such as the LTM and GTM mentioned above, a camera system will capture raw image data for a frame and provide the raw image data to one or more processors, such as an image signal processor (ISP). The ISP then performs computations on the raw image data to generate 3A statistics of the captured image that generally indicate the range and location of luminance and/or chroma pixel values, or provide other statistical tables, maps, and/or graphs based on the luminance and chroma values. These 3A and other statistics are generated for a single frame of a video sequence, on a frame-by-frame basis. The statistics then can be used to generate new parameters to set the camera settings for the 3A and other controls for subsequent frames.
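The kind of per-frame statistics described above can be illustrated with a minimal sketch: a global luminance histogram plus a coarse grid of block means, of the sort consumed by AEC, AWB, or LTM-style algorithms. The grid size, bin count, and function name are illustrative assumptions:

```python
import numpy as np

def luma_statistics(raw_luma, grid=(4, 4), bins=16):
    """Per-frame statistics sketch: a global luminance histogram and a
    coarse grid of block mean lumas. raw_luma: 2-D array in [0, 1]."""
    hist, _ = np.histogram(raw_luma, bins=bins, range=(0.0, 1.0))
    h, w = raw_luma.shape
    gh, gw = grid
    # Crop to a multiple of the grid, then average within each block.
    block_means = (raw_luma[:h - h % gh, :w - w % gw]
                   .reshape(gh, h // gh, gw, w // gw)
                   .mean(axis=(1, 3)))
    return hist, block_means
```

Such summaries are far smaller than the raw frame, which is why control algorithms operate on statistics rather than on full image data.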
Also, in the conventional digital camera system, a certain amount of time is consumed to generate the 3A and other statistics, to transfer data back and forth from the memories holding the image data and/or the statistics, such as dynamic random-access memory (DRAM), and to compute the new 3A parameters to apply to the camera settings for capture of a next image, or to modify the image data for display of the image. This usually takes so long that a latency arises in applying the new 3A parameters: instead of the new parameters being applied to the next frame after the current frame being analyzed, the new parameters are not applied until at least two or three frames after the current analyzed frame. This delay may result in lower quality images noticeable to a user. Some “on-the-fly” systems attempt to resolve this by reducing memory transactions and applying new parameters to the next frame whether or not the analysis is complete. This, however, also can result in noticeable artifacts and lower quality images.
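The latency described above can be modeled with a toy sketch: parameters computed from frame N only take effect some number of frames later, so each frame is captured with stale settings. The queue model and the two-frame latency below are illustrative assumptions:

```python
from collections import deque

def simulate_pipeline(num_frames, latency):
    """Toy model of 3A parameter latency: the parameters applied to a
    frame are those computed `latency` frames earlier. A frame's
    parameters are labeled by the frame number they were computed from;
    -1 marks the default startup settings."""
    pending = deque([-1] * latency)  # parameters still in flight
    applied = []
    for n in range(num_frames):
        applied.append(pending.popleft())  # settings actually used
        pending.append(n)                  # params computed from frame n
    return applied
```

With a latency of two, frame 3 is captured with parameters derived from frame 1, illustrating why a scene change is only reflected in the camera settings several frames later.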