When capturing images, the illumination conditions of a scene often span a wide range. For example, when capturing a room with a window, areas within the room are relatively dark, while the view through the window is much brighter. Applying the same exposure to all areas of the scene will result in underexposure of the room, overexposure of the outdoor areas, which may lead to saturation, or both.
When using electronic imaging equipment, the exposure time determines the length of time between the start of exposing one or more sensors and the time at which the sensor is read. If the combination of exposure time and light amount is too high, the sensor will saturate. However, if the combination is too low, the image, or a part thereof, will be underexposed, and therefore too dark, and will exhibit a poor signal-to-noise ratio. If an intermediate exposure time is used, the dark areas will be underexposed and exhibit a poor signal-to-noise ratio, while the brighter areas will be saturated.
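The tradeoff described above can be illustrated with a minimal sketch, assuming a hypothetical linear sensor with a 12-bit full-well capacity and additive read noise; the units, constants, and function names below are illustrative only and are not part of any described apparatus:

```python
import numpy as np

FULL_WELL = 4095  # hypothetical saturation level of a 12-bit sensor


def expose(radiance, exposure_time, read_noise=4.0):
    """Simulate a linear sensor: the signal grows with exposure time
    until it clips at the full-well (saturation) level.

    `radiance` is scene intensity in electrons per millisecond
    (hypothetical units); `exposure_time` is in milliseconds.
    """
    rng = np.random.default_rng(0)
    signal = np.asarray(radiance, dtype=float) * exposure_time
    noisy = signal + rng.normal(0.0, read_noise, size=np.shape(signal))
    return np.clip(noisy, 0.0, FULL_WELL)


# A scene with a dark indoor region and a bright window region.
scene = np.array([5.0, 2000.0])  # electrons per millisecond

short = expose(scene, 1.0)    # window region OK; room buried in read noise
long_ = expose(scene, 100.0)  # room region OK; window clipped at FULL_WELL
```

With the short exposure the dark pixel's signal is comparable to the read noise, while with the long exposure the bright pixel reaches the clipping level, reproducing the dilemma described above.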
A prior art solution to the problem includes a method in which the sensors are exposed for two or more different exposure times, and the resultant images are then combined in various ways. Such a solution suffers from two problems. First, in order to process two frames taken with different exposures, at least the image taken with the first exposure must be saved in a memory device, which is expensive and consumes space. Second, transferring the image from the sensor to the memory takes time. The transfer duration enforces a time difference between the end of the first exposure and the beginning of the second exposure. During this period the imaging apparatus or the subject may move, lowering the correlation between the exposures. Moreover, the time required to read the data from the sensor increases the effect of what is known as shutter lag, i.e., the time difference between the beginning of the exposure at the first line and the beginning of the exposure at the last line. If there is relative movement between the object and the imaging apparatus during the shutter lag, the image will appear distorted, since the areas depicted in the first line will not correspond to the areas depicted in the last line; this is called the “shutter lag effect”. Another prior art method exposes alternate rows of sensitive areas for long and short periods. However, such a method provides poor vertical resolution, since for each area only half of the pixels are sampled.
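One way the prior-art combination of two exposures might be sketched is shown below. This is a simplified illustration, not the method of any particular reference: it assumes a linear sensor with a hypothetical 12-bit saturation level, and merges by taking the long exposure wherever it is valid and substituting the rescaled short exposure wherever the long exposure is saturated. The names `FULL_WELL`, `SAT_THRESHOLD`, and `merge_exposures` are illustrative assumptions:

```python
import numpy as np

FULL_WELL = 4095                   # hypothetical 12-bit saturation level
SAT_THRESHOLD = 0.95 * FULL_WELL   # treat near-full pixels as saturated


def merge_exposures(short_img, long_img, ratio):
    """Combine a short and a long exposure of the same scene.

    `ratio` is long_exposure_time / short_exposure_time. Where the long
    exposure is saturated, fall back to the short exposure scaled up to
    the long exposure's radiometric scale; elsewhere keep the long
    exposure, which has the better signal-to-noise ratio.
    """
    short_img = np.asarray(short_img, dtype=float)
    long_img = np.asarray(long_img, dtype=float)
    saturated = long_img >= SAT_THRESHOLD
    return np.where(saturated, short_img * ratio, long_img)


# A dark pixel (well exposed in the long frame) and a bright pixel
# (saturated in the long frame), with an exposure ratio of 100.
merged = merge_exposures([20.0, 4000.0], [2000.0, 4095.0], 100.0)
```

Note that the merge requires both frames to be simultaneously available, which is exactly why the full first frame must be buffered in memory, and why any motion between the two exposures corrupts the per-pixel correspondence the merge relies on.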
There is thus a need in the art for a method and apparatus that will enable imaging of scenes having a wide intensity range. The method and apparatus should not require a significant memory addition to an imaging device, and should enable the imaging device to function ordinarily when capturing images, or parts thereof, having relatively small intensity ranges.