Digital image sensors based on semiconductor technology, such as matrix sensors based on charge coupled devices (CCD) and complementary metal oxide semiconductor (CMOS) technology, now make it possible to provide a large variety of devices with built-in imaging functions. Such image sensors are used e.g. in digital video and still cameras intended for consumers, as well as in various camera devices connected to computers, such as so-called network cameras. Thanks to their high degree of integration, compact size and low power consumption, particularly CMOS sensors are also very suitable for use in small-size portable devices, such as mobile stations and so-called personal digital assistants (PDA).
Particularly in a situation in which a digital camera, i.e. a digital image sensor, is implemented in a device which is small in size (portable) and/or has a low sale price, it is important to implement the camera function with structures which are as simple as possible. The aim is thus to minimize the space taken by the camera in the device as well as the power consumption and the total manufacturing costs of the device.
One way to simplify the structure of the digital camera is to eliminate a separate, typically mechanically operated shutter which is used in front of the sensor matrix to control the exposure time. In such shutter-free digital cameras, the exposure time is controlled electronically by controlling the functions of the sensor matrix.
In the following, we shall briefly describe the operation of a CMOS sensor based on the use of an electronic shutter, as well as the problems caused by the use of the electronic shutter in practice.
To put it simply, the CMOS image sensor consists of a matrix of photo-sensitive pixels. When light strikes a single pixel, the pixel is charged with an electric charge which is proportional to the amount of incoming light and is further stored in or in connection with said pixel. To read the pixel value, the charge is converted by means of a charge amplifier to a voltage which is further converted by analog-to-digital (AD) conversion and conducted out of the image sensor.
The exposure time of a single pixel in the CMOS sensor consists of the time during which the pixel is allowed to integrate the electric charge formed by the incoming light. The integration or exposure time starts at the point of time when the previous charge contained in the pixel is first adjusted to zero by a reset function, and ends when the pixel charge is read by a sample function.
To achieve the best possible image quality, all the pixels of the image sensor should be exposed precisely at the same time. In other words, the above-described operations of resetting, integration and reading should be performed simultaneously for all the pixels in the matrix. However, this would result in a very complex sensor structure. Furthermore, the transfer of image information in serial digital form out of the sensor circuit would require a considerably large bandwidth.
Thus, for the above-mentioned reasons, the solution commonly used is to process a CMOS matrix sensor row by row, i.e. to perform the operations of resetting, integration and reading for one pixel row of the matrix at a time. This makes the sensor structure considerably simpler, and the transfer of the image information out of the sensor circuit will thus also take place naturally in serial form, row by row, whereby the requirements set for the image information transfer rate are easier to meet.
However, the row-by-row processing has the drawback that the different rows of the matrix sensor are now exposed at slightly different times. FIG. 1 shows, in principle, the row-by-row processing of the image sensor and its effect on the exposure of the different rows of the sensor.
To start the integration or exposure time of the rows, the rows are reset one by one with a reset function (R). To keep the exposure times of the rows mutually equal, the resetting (R) of successive rows takes place at the same rate at which the rows will be read with the sample function (S) at a later stage. The period of time between the resetting operations (R) of two successive rows is called row processing time (RP). The minimum value for the row processing time (RP) is determined by the rate at which the sensor circuit can transfer image information out of the circuit. Consequently, the row processing time (RP) also indicates the time which is taken between the sample operations (S) of two successive rows.
The row integration or exposure time (RI) can now be formed of suitable multiples of the row processing time (RP) in such a way that the exposure time (RI) is kept equal for all the rows. In the example of FIG. 1, the exposure time (RI) selected is 10× the row processing time (RP). After the first row (row 0) has integrated light for said exposure time (RI), the system contained in the circuit will read and convert the image information of the pixels of said row into digital format and output it from the circuit. After this, the image information contained in the subsequent rows will be read and outputted in a corresponding way.
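The row timing described above can be sketched as a simple model. This is a hypothetical illustration only; the RP value is assumed, and the 10× multiple is taken from the FIG. 1 example rather than from any particular sensor:

```python
# Sketch of rolling-shutter row timing. RP_US is an assumed row processing
# time; RI is an integer multiple of RP so that every row integrates for
# the same duration, as in the FIG. 1 example.

RP_US = 50               # assumed row processing time (microseconds)
RI_US = 10 * RP_US       # row integration time, 10x RP as in FIG. 1

def reset_time(row):
    """Point of time at which a row is reset (R), starting its exposure."""
    return row * RP_US

def sample_time(row):
    """Point of time at which a row is sampled (S), ending its exposure."""
    return reset_time(row) + RI_US

# Every row integrates for exactly RI_US...
assert all(sample_time(r) - reset_time(r) == RI_US for r in range(480))
# ...but successive rows start and end their exposure RP_US apart.
assert reset_time(1) - reset_time(0) == RP_US
```

In this model, the first image information leaves the circuit at `sample_time(0)`, corresponding to the point of time T2 in FIG. 1.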
As shown in FIG. 1, image information will not be outputted from the sensor circuit before the point of time T2, because the integration time (RI) of the first row 0 has not expired yet. It can be seen from FIG. 1 that during the period between the points of time T1 and T2, there is a period which is common to the exposure of the rows 0 to 4 but is still shorter than the total exposure time (RI) of a single row. However, for example row 0 and row 11 are exposed at totally different times. Consequently, the difference in the time of exposure is greatest between the first row 0 and the last row n−1 of the sensor.
Consequently, the exposure/integration times (RI) of adjacent rows of the image sensor, to be processed one after the other, are partly overlapping, but the exposure/integration takes place clearly at different times in rows which are far from each other, for example at the upper and lower edges of the sensor.
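The partial overlap of the exposure windows can be quantified with the same assumed timing; the default `rp` and `ri` values below are illustrative only:

```python
def exposure_overlap(row_a, row_b, rp=50, ri=500):
    """Length of the interval during which both rows integrate light,
    assuming row i is exposed over [i*rp, i*rp + ri] (times in microseconds)."""
    start = max(row_a * rp, row_b * rp)
    end = min(row_a * rp + ri, row_b * rp + ri)
    return max(0, end - start)

# Adjacent rows share all but one row processing time of their exposure...
assert exposure_overlap(0, 1) == 450
# ...while e.g. rows 0 and 11 are exposed at entirely disjoint times.
assert exposure_overlap(0, 11) == 0
```

With RI = 10 × RP, any row overlaps the exposure of only its ten nearest neighbours in each direction; rows further apart, such as those at the upper and lower edges of the sensor, share no exposure time at all.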
Row-by-row processing of the image area in the above-described manner is known from prior art as a rolling electronic shutter or as a rolling window shutter.
When the object to be imaged is substantially immovable in relation to the camera or in a slow motion in relation to the processing time of the whole image area (all the rows), and when the lighting is constant with respect to time, the rolling electronic shutter will not cause considerable harm to the imaging and to the image quality.
However, in a situation in which an electronic flash unit (flash unit) is used for illuminating the object during the imaging, considerable problems will be caused to the image quality by the exposure of the sensor rows at different times. The reason for this is that, because of the short flash time specific to the flash unit, the illumination produced by the flash unit will change significantly during the time in which the whole image area of the sensor is exposed/processed.
For example, the duration of a flash in flash units based on a discharge tube, used in pocket cameras or the like, typically varies from some tens of microseconds to some hundreds of microseconds. Correspondingly, in a CMOS sensor with VGA resolution (640×480 pixels), to be processed row by row, the processing of the whole image area typically takes several tens of milliseconds when a rolling electronic shutter is used. Now, as the flash of the flash unit is considerably shorter than the processing of the whole image area, this causes the different rows of the sensor to be exposed in significantly different ways when the flash unit is used, and therefore the quality of the images taken with the flash unit is impaired.
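A rough comparison of these orders of magnitude shows why the flash cannot expose all rows equally. All figures below are assumptions chosen merely to match the magnitudes quoted above:

```python
ROWS = 480          # VGA sensor height, processed row by row
READOUT_MS = 33.0   # assumed processing time of the whole image area (tens of ms)
FLASH_MS = 0.1      # assumed flash duration (~hundred microseconds)
RI_MULTIPLE = 10    # exposure time as a multiple of RP, as in FIG. 1

rp_ms = READOUT_MS / ROWS   # row processing time

# The flash is orders of magnitude shorter than the whole-image processing time.
assert FLASH_MS < READOUT_MS / 100

# At any instant, only about RI/RP rows are integrating simultaneously, so a
# flash much shorter than the readout sweep illuminates only a narrow band:
# the rows whose exposure window intersects the flash interval.
rows_seeing_flash = RI_MULTIPLE + FLASH_MS / rp_ms
print(f"roughly {rows_seeing_flash:.0f} of {ROWS} rows receive the flash light")
```

Under these assumed numbers, only on the order of a dozen rows out of 480 receive any flash light at all; the remaining rows are exposed by ambient light only, which is the unequal row exposure described above.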