Since the beginning of photography, image stabilization has been a persistent challenge. The problem of ensuring that a photograph is not blurred by camera movement has remained even as camera technology has progressed to the present day. In digital cameras, the problem of image stabilization stems from the fact that any known image sensor needs to have the image projected onto it for a period of time, referred to herein as the integration time. Any motion of the camera during this time causes a shift of the image projected on the sensor, resulting in a degradation of the final image. This degradation is referred to herein as motion blur.
One of the principal difficulties in restoring motion blurred images involves the fact that the motion blur is different in each degraded image. The level of motion blur depends upon the camera motion that takes place during the exposure time.
The ongoing development and miniaturization of consumer devices that have image acquisition capabilities increases the need for robust and efficient image stabilization solutions. The need is driven by two main factors. The first factor is the inherent difficulty in avoiding unwanted motion during the integration time when using a small hand-held device, such as a camera telephone. The second factor is the need for longer integration times due to the small pixel area that results from the miniaturization of the image sensors in conjunction with the increase in image resolution. The smaller the pixel area, the fewer photons per second the pixel can capture. Therefore, a longer integration time is needed for satisfactory results.
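The trade-off between pixel area and integration time can be sketched as follows. This is a simplified illustration, not part of the original text: the photon flux, pixel areas, and target photon count are assumed values chosen only to show that the required exposure scales inversely with pixel area.

```python
# Simplified model: photons collected = flux * pixel_area * time, so the
# integration time needed for a fixed photon count scales as 1 / pixel_area.
# All numeric values below are illustrative assumptions.

def required_integration_time(target_photons, flux_per_um2_per_ms, pixel_area_um2):
    """Integration time (ms) needed to collect target_photons on one pixel."""
    return target_photons / (flux_per_um2_per_ms * pixel_area_um2)

t_large = required_integration_time(10000, 5.0, 4.0)  # 4 um^2 pixel
t_small = required_integration_time(10000, 5.0, 1.0)  # 1 um^2 pixel after miniaturization
# Shrinking the pixel area by 4x requires a 4x longer exposure for the
# same photon count, increasing the exposure to camera motion.
```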
Currently, there are two categories of conventional solutions for addressing image stabilization. These solutions are referred to as single-shot and multi-shot solutions. Single-shot solutions are based upon capturing a single image shot during a long exposure time. This is the classical system for image capturing, where the acquired image is typically corrupted by motion blur caused by the motion that took place during the exposure time. In order to restore the image, it is necessary to have very accurate knowledge about the motion that took place during the exposure time. Consequently, this approach may require expensive motion sensors (e.g., gyroscopes), which are also large in size and therefore difficult to include in small devices. In addition, if the exposure time is long, then the position information derived from the motion sensor output exhibits a bias drift error with respect to the true value. The bias drift error accumulates over time, such that the derived position information can become significantly corrupted.
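The bias drift effect described above can be sketched numerically. This is a hypothetical simulation, not the original system: a constant sensor bias (an assumed value) is integrated over the exposure, and the resulting position error grows in proportion to the exposure time.

```python
# Hypothetical sketch: integrating gyroscope readings that contain a
# constant bias yields an angle estimate whose error accumulates over
# time. The bias, rate, and sample period are assumed values.

def integrated_angle(true_rate, bias, dt, n_samples):
    """Integrate (true rate + constant bias) readings over n_samples steps."""
    angle = 0.0
    for _ in range(n_samples):
        angle += (true_rate + bias) * dt
    return angle

true_rate = 0.0   # camera actually held still (deg/s)
bias = 0.05       # small constant sensor bias (deg/s)
dt = 0.001        # 1 ms sample period
err_short = integrated_angle(true_rate, bias, dt, 100)   # 0.1 s exposure
err_long = integrated_angle(true_rate, bias, dt, 1000)   # 1.0 s exposure
# The estimated angle error grows linearly with exposure time, which is
# why long exposures make sensor-based restoration unreliable.
```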
Several entities have implemented a particular type of single-shot solution in high-end cameras. This approach involves compensating for the motion by moving the optics or the sensor in order to keep the image projected onto the same position of the sensor during the exposure time. However, this solution also suffers from system drift error and is therefore not practical for long exposure times.
In contrast, multi-shot solutions are based upon dividing a long exposure time into several shorter intervals by capturing several image shots of the same scene. The exposure time for each shot is small in order to reduce the motion blur degradation of the individual shots. After capturing all these shots, the final image is calculated in two steps. The first step involves registering all image shots with respect to the first image shot. This is referred to as the registration step. The second step, referred to as pixel fusion, involves calculating the value of each pixel in the final image based upon its values in each of the individual shots. One simple method of pixel fusion involves calculating the final value of each pixel as the average of its values in the individual shots.
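The simple averaging method of pixel fusion described above can be sketched as follows, assuming the registration step has already aligned the shots. The frame contents and sizes are illustrative assumptions, not taken from the original text.

```python
# Minimal sketch of pixel fusion by averaging: each pixel of the final
# image is the mean of its values across the registered shots. Frames
# are represented as nested lists of grayscale values for illustration.

def fuse_by_averaging(shots):
    """Average a list of equally sized, already-registered grayscale frames."""
    n = len(shots)
    rows, cols = len(shots[0]), len(shots[0][0])
    return [[sum(shot[r][c] for shot in shots) / n for c in range(cols)]
            for r in range(rows)]

# Three registered 2x2 "shots" of the same scene with slightly
# different noise in each shot (assumed values).
shots = [
    [[10, 20], [30, 40]],
    [[12, 18], [28, 42]],
    [[11, 19], [29, 41]],
]
fused = fuse_by_averaging(shots)  # noise is reduced by the averaging
```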
Although multi-shot solutions resolve some of the issues discussed above, they require a large amount of computational resources in order to capture several high resolution frames during a short interval of time. In addition, these methods also require a large amount of memory in order to store the captured image shots before the pixel fusion step. This can be especially expensive to implement in smaller devices where memory resources may be quite limited.