The present invention relates to an image sensor device.
An image sensor device is understood to mean any device capable of capturing views of real objects. For example, it may be a camera, a video camera, a cellular phone equipped with a camera, etc.
To capture images, a digital device uses an array of photosites (also referred to herein as “pixels”) which, when exposed to light, generate a current (or voltage) that is then converted into a digital value by an analog-to-digital converter. Photosites can include, for example, photodiodes, transistors, diodes, capacitors, resistors, etc.
The path traveled by the information contained in the image runs from the entry of the light beam into the device, through the analog or digital processing, to the storage of the digital data of the image. The device therefore receives the light data before that data is processed electronically.
To adjust the sensitivity of a photosite to the amount of light in the scene to be captured, the light integration time is adjusted across the entire sensor.
However, most scenes contain light and dark areas which will not be correctly rendered if the general sensor sensitivity is set based on the average brightness of the scene. To improve image rendering, the dynamic range of the images can be increased, for example from 8 to 16 bits, allowing a greater number of distinct luminance levels to be encoded in an image.
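As a purely illustrative aside (not part of the device described here), the gain from increasing the bit depth can be made concrete: each added bit doubles the number of distinct luminance levels that can be encoded.

```python
# Number of distinct luminance levels encodable at a given bit depth.
def luminance_levels(bits: int) -> int:
    return 2 ** bits

print(luminance_levels(8))   # 256 levels at 8 bits
print(luminance_levels(16))  # 65536 levels at 16 bits
```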
However, this method is complicated because it requires a high-precision converter and a large digital file format. It also requires significant computing and memory resources.
To capture an image of a scene in the visible spectrum, cameras and video cameras adjust their sensitivity to predefined conditions (called “digital ISO” for such devices). However, a scene often contains areas of very different brightnesses. In this case, and depending on the setting selected, brightly lit areas may be saturated and appear white and/or dimly lit areas may appear black.
To improve the rendering of scenes whose brightness varies widely from one area to another, several images are usually captured at a low dynamic range (typically 8-10 bits) with various exposure factors, and then combined to obtain a high dynamic range image (32 bits).
A tone mapping operation is then typically applied to encode this high dynamic range image for display formats (8 bits). This technique therefore requires computing means as well as memory, and poses problems when there is movement in the scene between captures.
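The exposure-bracketing approach described above can be sketched as follows. This is a minimal digital illustration only, assuming a linear sensor response and a simple global Reinhard-style tone curve; real pipelines calibrate the response curve, weight samples by reliability, and handle saturation and motion.

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Merge low-dynamic-range captures into one high-dynamic-range
    radiance map, assuming a linear sensor response (a simplification)."""
    acc = np.zeros(images[0].shape, dtype=np.float64)
    for img, t in zip(images, exposure_times):
        acc += img.astype(np.float64) / t  # radiance estimate per capture
    return acc / len(images)

def tone_map(hdr):
    """Global Reinhard-style operator compressing HDR radiance to 8 bits."""
    mapped = hdr / (1.0 + hdr)             # compress [0, inf) into [0, 1)
    return (mapped * 255.0).astype(np.uint8)

# Two simulated 8-bit captures of the same scene at different exposures.
short = np.array([[10, 200], [5, 250]], dtype=np.uint8)   # 1 ms exposure
long_ = np.array([[40, 255], [20, 255]], dtype=np.uint8)  # 4 ms (saturates)
hdr = merge_exposures([short, long_], [1.0, 4.0])
ldr = tone_map(hdr)
```

The motion problem mentioned above arises because `merge_exposures` assumes the two captures are pixel-aligned: any object displacement between them produces ghosting in the merged map.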
A high dynamic range sensor (up to 14 bits) can also be used, but the required analog-to-digital converter is expensive, and tone mapping must still be performed.
Local adaptation has been suggested, which allows processing the tone mapping on the fly according to local exposure conditions. In particular, an adaptive and local analog gain control has been suggested. This control is achieved, for example, by weighting the signal received from each photosite by the average of the signals received from the photosites within a local area. The local regulation steps are then carried out before quantization. The regulation is therefore performed on continuous signals and does not entail any loss of information due to quantization. Satisfactory results have been obtained with such a solution, in particular a high level of detail in dark areas, which usually exhibit a high level of noise.
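The weighting principle can be modeled digitally as follows. This is only an illustration of the idea: the control discussed above operates on continuous analog signals before quantization, whereas this sketch works on already-sampled values, and the block size and normalization are assumptions.

```python
import numpy as np

def local_gain_control(signal, block=2):
    """Simulate the local gain regulation: each photosite signal is
    weighted by the average over its local area, so dark areas receive
    more gain and bright areas less, compressing local dynamics.
    (Illustrative digital model of an analog-domain operation.)"""
    h, w = signal.shape
    out = np.empty((h, w), dtype=np.float64)
    for i in range(0, h, block):
        for j in range(0, w, block):
            area = signal[i:i + block, j:j + block].astype(np.float64)
            mean = area.mean()
            out[i:i + block, j:j + block] = area / (mean + 1e-9)
    return out
```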
However, such control requires incorporating analog components to perform the regulation calculations.
Physically, a sensor conventionally includes one integrator circuit for each photosite value read at a given instant. To avoid providing as many integrator circuits as there are photosites, the photosite values are read sequentially, row by row, across all the columns. For example, for a photosite array of K rows and L columns, the L values of the first row are read first, then the L values of the second row, and so on. The number of integrator circuits is thus reduced to L, one per column.
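The conventional sequential readout can be sketched as follows; the L column integrators are reused for each of the K rows in turn (a behavioral model only, not a circuit description):

```python
def read_array(photosites):
    """Sequential row-by-row readout of a K x L photosite array using
    only L integrator circuits (one per column), reused for every row."""
    K = len(photosites)
    L = len(photosites[0])
    integrators = [0.0] * L              # L column integrators, reused
    samples = []
    for row in range(K):
        for col in range(L):
            integrators[col] = photosites[row][col]  # hold this row's value
        samples.append(list(integrators))            # read out the row
    return samples
```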
In particular, the analog calculation of an average value over a local area requires having all the photosite values of that area available simultaneously. It is then no longer possible to have only as many integrator circuits as there are columns. For example, to hold the photosite values of a square of four photosites simultaneously, four integrator circuits are needed: two for the first row and two for the second row. At the scale of the photosite array, twice as many integrator circuits are therefore needed as for conventional sensors. It is therefore necessary to provide additional locations for these additional components in the design drawings (the “layout”).
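The integrator count above follows from a simple relation, illustrated here for a hypothetical 640-column array (the column count is an assumption for the example only):

```python
def integrators_needed(columns: int, area_rows: int) -> int:
    """Integrator circuits needed when a local area spans area_rows rows:
    one per column, per row that must be held simultaneously."""
    return columns * area_rows

print(integrators_needed(columns=640, area_rows=1))  # 640: conventional readout
print(integrators_needed(columns=640, area_rows=2))  # 1280: twice as many for 2x2 areas
```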
To find available room for these additional components, it has been suggested to include these components within the photosite array itself. For example, it has been suggested to calculate the local average over a “blind” photosite of the photosite array. Other implementations where the regulation means are directly included in the photosite array have also been proposed.
However, the design of photosite arrays is extremely specific and optimized, in particular in relation to the resolution of the captured images. It is therefore very complex and expensive to place, directly on the sensor, a blind photosite or regulation means for each area to be regulated.
Furthermore, the addition of the blind photosite or regulation means decreases the fill factor of the photosite (ratio between the photosensitive surface area of the photosite and the total surface area of the photosite). This results in a decrease in the electro-optical performance of this photosite, and thus impacts the overall performance of the sensor.