The problem of image stabilization dates back to the beginning of photography, and it stems from the fact that an image sensor needs a sufficient exposure time to form a reasonably good image. Any motion of the camera during the exposure time causes a shift of the image projected on the image sensor, resulting in a degradation of the formed image. This motion-related degradation is called motion blur. When holding a camera with one or both hands while taking a picture, it is almost impossible to avoid unwanted camera motion during a reasonably long exposure or integration time. Motion blur is particularly likely to occur when the camera is set at a high zoom ratio, where even a small motion can significantly degrade the quality of the acquired image. One of the main difficulties in restoring motion-blurred images is that the motion blur differs from one image to another, depending on the actual camera motion that took place during the exposure time.
The ongoing development and miniaturization of consumer devices with image acquisition capabilities increase the need for robust and efficient image stabilization solutions. The need is driven by two main factors:
1. The difficulty of avoiding unwanted motion during the integration time when using a small hand-held device (such as a camera phone).
2. The need for longer integration times due to the small pixel area resulting from the miniaturization of image sensors in conjunction with the increase in image resolution. The smaller the pixel area, the fewer photons per unit time can be captured by the pixel, so a longer integration time is needed for good results.
Image stabilization is usually carried out using a technique called the single-frame solution. The single-frame solution is based on capturing a single image frame during a long exposure time. This is the classical case of image capture, in which the acquired image is typically corrupted by motion blur caused by the motion that took place during the exposure time. In order to restore the image, it is necessary to have very accurate knowledge of the motion that took place during the exposure time. Consequently, this approach may require quite expensive motion sensors (gyroscopes), which, apart from their cost, are also large in size and hence difficult to include in small devices. In addition, if the exposure time is long, the position information derived from the motion sensor output exhibits a bias drift error with respect to the true value. This error accumulates over time, such that at some point it may significantly affect the outcome of the system.
In the single-frame solution, a number of methods have been used to reduce or eliminate the motion blur. Optical image stabilization generally involves laterally shifting the image projected on the image sensor to compensate for the camera motion. Shifting of the image can be achieved by one of the following four general techniques:
Lens shift—this optical image stabilization method involves moving one or more lens elements of the optical system in a direction substantially perpendicular to the optical axis of the system;
Image sensor shift—this optical image stabilization method involves moving the image sensor in a direction substantially perpendicular to the optical axis of the optical system;
Liquid prism—this method involves changing a layer of liquid sealed between two parallel plates into a wedge in order to change the optical axis of the system by refraction; and
Camera module tilt—this method keeps all the components in the optical system unchanged while tilting the entire module so as to shift the optical axis in relation to a scene.
In any one of the above-mentioned image stabilization techniques, an actuator mechanism is required to effect the change in the optical axis or the shift of the image sensor. Actuator mechanisms are generally complex, which means that they are expensive and large in size.

Another approach to image stabilization is the multi-frame method. This method is based on dividing a long exposure time into several shorter intervals and capturing several image frames of the same scene in those shorter intervals. The exposure time for each frame is small in order to reduce the motion blur degradation of the individual frames. After capturing all these frames, the final image is calculated in two steps:
1. Registration step: register all image frames with respect to one of the images, chosen as the reference; and
2. Pixel fusion step: calculate the value of each pixel in the final image based on the corresponding values in all of the individual frames. One simple method of pixel fusion is to calculate the final value of each pixel as the average of its values across the individual frames.
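The two steps above can be sketched in code. The following is a minimal illustration, not any particular implementation: it assumes purely translational, integer-pixel camera motion, estimates each frame's shift by brute-force search over a small window of candidate translations, and fuses by per-pixel averaging. The function names and the use of circular shifts (`np.roll`) for alignment are simplifications chosen for brevity.

```python
import numpy as np

def register_translation(ref, frame, max_shift=8):
    """Estimate the integer (dy, dx) shift that best aligns `frame`
    to `ref`, by exhaustive search minimizing the sum of squared
    differences over a (2*max_shift+1)^2 window of candidates."""
    best_err, best_shift = None, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(frame, dy, axis=0), dx, axis=1)
            err = np.sum((ref - shifted) ** 2)
            if best_err is None or err < best_err:
                best_err, best_shift = err, (dy, dx)
    return best_shift

def stabilize(frames):
    """Multi-frame stabilization: register every frame to the first
    (the reference), then fuse by per-pixel averaging."""
    ref = frames[0].astype(np.float64)
    acc = ref.copy()
    for frame in frames[1:]:
        frame = frame.astype(np.float64)
        dy, dx = register_translation(ref, frame)
        aligned = np.roll(np.roll(frame, dy, axis=0), dx, axis=1)
        acc += aligned
    return acc / len(frames)
```

With noisy short-exposure frames, the averaging step reduces noise roughly as the square root of the number of frames, which is what allows each individual exposure to be kept short. A practical system would use sub-pixel registration and handle border pixels explicitly rather than wrapping them around.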
The main problems in a typical multi-frame image stabilization solution include:
1. Complex computation in image registration; and
2. Moving objects in the scene: if there are objects in the scene that move while the image frames are acquired, those objects are distorted in the final image. The distortion consists in pasting together multiple instances of the objects.
It is desirable to provide a simpler method and system for image stabilization.