An electronic imaging system depends on a lens system to form an image on an image sensor to create an electronic representation of a visual image. Examples of such image sensors include charge coupled device (CCD) image sensors and active pixel sensor (APS) devices. (APS devices are often referred to as CMOS sensors because of the ability to fabricate them in a Complementary Metal Oxide Semiconductor process.) A sensor includes a two-dimensional array of individual picture element sensors, or “pixels.” For color imaging systems, each pixel is typically provided with either a red, green, or blue filter, as described, for example, by Bayer in commonly-assigned U.S. Pat. No. 3,971,065, so that a full color image can be produced. Regardless of the type of image sensor employed (e.g., CCD or CMOS), the pixel acts as a bucket in which photo-generated charge is accumulated in direct proportion to the amount of light that strikes the pixel during the capture of an image by the electronic imaging system.
The image sensor gathers light for an interval of time called the exposure time or integration time to make a correct exposure during image capture. Based on brightness measurements of the scene, an exposure control system is used to determine an exposure time that will yield an image with suitable brightness and an acceptable signal-to-noise ratio. The exposure control system may also determine other settings such as a lens aperture setting and an exposure index setting. Generally, the dimmer the scene, the longer the exposure time the electronic imaging system must use to gather enough light to make a correct exposure.
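The relationship between scene brightness and exposure time described above can be illustrated with the standard reflected-light exposure equation, N²/t = L·S/K, where N is the f-number, t the exposure time, L the scene luminance, S the exposure index (ISO), and K a meter calibration constant (12.5 is a commonly used value). The sketch below is illustrative only; the function name and the example luminance values are assumptions, not part of any particular camera's firmware.

```python
K = 12.5  # reflected-light meter calibration constant (typical value)

def exposure_time(scene_luminance_cd_m2, f_number, iso):
    """Exposure time t solved from the standard reflected-light
    exposure equation N^2 / t = L * S / K."""
    return K * f_number**2 / (scene_luminance_cd_m2 * iso)

# Dim indoor scene (~10 cd/m^2) at f/2.8 and ISO 400:
t = exposure_time(10.0, 2.8, 400)      # -> 0.0245 s

# The same scene ten times dimmer needs ten times the exposure time,
# which is why motion blur worsens in low light:
t_dim = exposure_time(1.0, 2.8, 400)   # -> 0.245 s
```

Note how the exposure time scales inversely with luminance: each halving of scene brightness doubles the time during which any camera or subject motion can smear the image.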
FIG. 1 shows a flow chart of a typical exposure control system 200 for a digital camera. In assess scene brightness step 210, the camera assesses the scene brightness either with a scene brightness sensor or with an analysis of a preview image. In the typical camera control system shown in FIG. 1, motion is neither measured nor taken into account. In determine capture mode step 220, a capture mode setting 225 is determined based on the measured scene brightness and any operator-selected user interface settings. In determine exposure index step 230, an exposure index setting 235 (S) is determined in accordance with the measured scene brightness and the capture mode setting 225. In determine aperture step 240, an aperture setting 245 is determined to control the F/# of the camera lens in accordance with the measured scene brightness, the capture mode setting 225 and the exposure index setting 235. An exposure time setting 255 (TE) is then determined in determine exposure time step 250 in accordance with the measured scene brightness, the capture mode setting 225, the exposure index setting 235 and the aperture setting 245. It should be noted that these steps are not necessarily performed in the order shown in FIG. 1. After the various settings have been determined, a capture digital image step 260 is used to capture and store a digital image 265.
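The control flow of FIG. 1 (steps 220 through 250) can be sketched as follows. All of the decision rules below are hypothetical stand-ins chosen purely for illustration; an actual camera would use far more elaborate logic in each step, and the brightness threshold, ISO values, and apertures shown here are assumptions.

```python
def determine_capture_mode(brightness, user_mode="auto"):
    # Step 220: toy rule, low-light mode below 50 cd/m^2 (assumed threshold)
    if user_mode != "auto":
        return user_mode
    return "low_light" if brightness < 50 else "normal"

def determine_exposure_index(brightness, mode):
    # Step 230: raise the exposure index S as the scene dims (toy values)
    return 800 if mode == "low_light" else 100

def determine_aperture(brightness, mode, iso):
    # Step 240: open up the lens in low light (toy values)
    return 2.8 if mode == "low_light" else 5.6

def determine_exposure_time(brightness, mode, iso, f_number):
    # Step 250: standard exposure equation N^2/t = L*S/K with K ~ 12.5
    return 12.5 * f_number**2 / (brightness * iso)

def exposure_control(brightness, user_mode="auto"):
    """Sketch of the FIG. 1 pipeline; note that motion is not an input."""
    mode = determine_capture_mode(brightness, user_mode)            # step 220
    iso = determine_exposure_index(brightness, mode)                # step 230 (S)
    f_number = determine_aperture(brightness, mode, iso)            # step 240
    t_e = determine_exposure_time(brightness, mode, iso, f_number)  # step 250 (TE)
    return mode, iso, f_number, t_e
```

The key point of the sketch is structural: scene brightness and user settings flow into every downstream decision, but no motion estimate appears anywhere in the chain.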
If motion of the image capture device or the scene occurs during image capture, motion blur can result in the captured image; the blur becomes more severe as the magnitude of the motion increases relative to the exposure time. There are two types of motion blur: global motion blur and local motion blur. Global motion blur is produced when the image capture device is moving relative to the scene during capture, resulting in the entire image being blurred. Local motion blur is produced when the image capture device is stationary, but one or more objects in the scene are moving. In this case, only the moving objects are blurred. Motion blur problems are generally more severe in low light level photography environments because longer exposure times are typically required.
A number of methods to reduce global motion blur are known to those in the field. One method is to use an image stabilization system. Such methods typically use an inertial measurement device (e.g., a gyroscope or an accelerometer) to measure the motion of the image capture device during capture and then use a special lens with a lens element that can be moved laterally to cause the image formed by the lens on the image sensor to move in a direction that compensates for the image capture device motion. In other embodiments, the image sensor itself can be moved laterally to compensate for the image capture device motion.
A method that can be used to correct for motion during the capture of video images is described in U.S. Patent Application Publication 2006/0274156, to Rabbani et al., entitled “Image sequence stabilization method and camera having dual path image sequence stabilization.” This approach is based on a digital shifting of individual frames in a captured video sequence to compensate for movement of the digital camera. While this method cannot reduce motion blur within a single frame, it is effective at stabilizing a sequence of captured video images to reduce the effect of camera shake.
None of the above-described methods is effective at reducing local motion blur. One method to reduce local motion blur is to use an exposure time shorter than the value determined by the exposure control system. The resulting images will be darker and have a lower signal-to-noise ratio. An analog or digital gain can then be applied to the pixel values to brighten the darker images, but those skilled in the art will recognize that this will result in noisier images.
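The noise penalty of the shortened-exposure-plus-gain approach can be quantified with a simple shot-noise model. Assuming photon shot noise dominates (so the noise equals the square root of the collected signal), applying gain scales signal and noise equally, and the signal-to-noise loss comes entirely from collecting fewer photons. The photon counts below are illustrative assumptions, not measured sensor data.

```python
import math

def shot_noise_snr(photons):
    """Shot-noise-limited SNR: signal / sqrt(signal) = sqrt(signal)."""
    return math.sqrt(photons)

full = 10000          # photons collected at the metered exposure time (assumed)
short = full // 4     # 1/4 the exposure time -> 1/4 the photons

gain = full / short                          # 4x gain restores mean brightness
brightened_signal = gain * short             # back to 10000
brightened_noise = gain * math.sqrt(short)   # gain amplifies the noise too

snr_full = shot_noise_snr(full)                   # 100.0
snr_short = brightened_signal / brightened_noise  # 50.0
```

Cutting the exposure to one quarter halves the SNR regardless of how much gain is applied afterward, which is exactly the darker-but-noisier trade-off described above.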
Another method to reduce motion blur is to gather more light by using either a lens with a larger aperture or an image sensor with larger pixels, thereby enabling the use of a shorter exposure time. This approach can produce images with reduced motion blur and acceptable noise levels. However, the current industry trend in electronic imaging systems is to make image capture devices more compact and less expensive. High-grade optical elements with large apertures and image sensors with larger pixels are substantially more expensive, and are therefore not practical for many applications.
Another method to reduce motion blur is to supplement the available light with a photographic flash in order to reduce the effective exposure time. A photographic flash produces a strong light flux that is sustained for a small fraction of a second. The actual exposure time can be set to a short value which is marginally longer than the flash duration. Generally, the flash will be the dominant light source, and therefore the flash duration will define the effective exposure time. Therefore, the motion blur caused by either global or local motion during the exposure can be significantly reduced. However, flash photography is typically only useful if the distance between the flash and the scene being photographed is relatively small. Flash photography also tends to produce artifacts such as red eyes, shadows, and very bright areas or dark areas, which many people find objectionable.
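The blur reduction from a dominant flash can be illustrated with a simple model in which the blur extent is the image-plane velocity multiplied by the effective exposure time; when the flash supplies nearly all of the light, the effective exposure time collapses to the flash duration. All of the numbers below (subject speed, shutter time, flash duration) are hypothetical.

```python
def blur_extent_pixels(velocity_px_per_s, effective_exposure_s):
    """Blur extent as image-plane velocity times the time over which
    most of the light is gathered (illustrative first-order model)."""
    return velocity_px_per_s * effective_exposure_s

subject_speed = 500.0      # px/s across the sensor (hypothetical)
ambient_exposure = 1 / 30  # ~33 ms shutter without flash (hypothetical)
flash_duration = 1 / 1000  # ~1 ms flash pulse dominates the light

blur_ambient = blur_extent_pixels(subject_speed, ambient_exposure)  # ~16.7 px
blur_flash = blur_extent_pixels(subject_speed, flash_duration)      # 0.5 px
```

Under these assumptions the flash shrinks the blur extent by the ratio of shutter time to flash duration, here roughly a factor of 33, for both global and local motion.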
U.S. Patent Application Publication 2007/0237514 to Pullman, entitled “Varying camera self-determination based on subject motion,” teaches a method for capturing digital images where motion in the scene is measured prior to image capture. The camera settings are adjusted responsive to the determined scene motion.
In U.S. Patent Application Publication 2007/0237506 to Minema et al., entitled “Image blurring reduction,” a camera is described wherein an image is captured at a slower shutter speed if no camera motion is detected. If camera motion is detected, then an image is captured at a faster shutter speed. While this method does reduce motion blur, it does not address the combined effect of motion blur and noise on perceived image quality when selecting capture conditions such as exposure time and ISO.
U.S. Patent Application Publication 2009/0040364 to Rubner, entitled “Adaptive Exposure Control,” teaches using a multiple image capture process to reduce image quality artifacts including motion blur. With this method, a first image is captured using exposure conditions defined by a conventional exposure control system. The first image is then analyzed for aspects of image quality such as overexposure or underexposure, motion blur, dynamic range, or depth of field to determine which aspects have been met and where deficiencies remain. If deficiencies are identified, the process determines new exposure parameters and captures an additional image. This process repeats until all the aspects of image quality have been met amongst the multiple images that have been captured. A final image is then constructed by combining portions of the multiple images. This method does not address motion-related image quality issues in applications that require capturing only a single digital image.
U.S. Pat. No. 5,598,237 to McIntyre, entitled “Image capture apparatus,” describes an image capture apparatus operable in a hand-held condition and in a stabilized non-hand-held condition. Different exposure parameters are selected depending on whether the camera is being used in the hand-held condition.
U.S. Pat. No. 6,384,976 to Ishijima et al., entitled “Image stabilizing apparatus,” and related U.S. Patent Application Publication 2002/0093739 to Ishijima et al., entitled “Image stabilizing apparatus,” disclose an image stabilization apparatus in which a vibration reduction mode and a panning/tilting mode are selected automatically.
U.S. Pat. No. 7,164,531 to Yamamoto, entitled “Image stabilization apparatus,” describes an image stabilization apparatus comprising an optical system where a portion of the optical elements are controlled to stabilize the optical image while the remaining optical elements are held in a predetermined position.
While image stabilization systems that adjust the position of optical elements or the sensor can substantially reduce the level of global motion blur in a digital image, their use has a number of disadvantages. One disadvantage is that the image stabilization system uses power and therefore drains the battery faster than non-stabilized lens systems. Another disadvantage is that image stabilization systems have moving parts that can wear out over time, thereby decreasing the lifetime of the camera. Some cameras have a switch that can be used to turn the image stabilization system off when it is not needed, but this requires a manual user action and requires the user to understand which photography conditions would benefit from image stabilization. It also makes it likely that the user will forget to engage the image stabilization system in some situations where it would be beneficial, and will consequently capture images with significant image quality degradation.
There remains a need for a digital camera having reduced susceptibility to motion blur that does not have the disadvantages of cameras having image stabilization systems that are constantly operating or must be manually activated.