Known electronic cameras typically use an image sensor in CMOS or CCD technology which includes a plurality of light-sensitive elements—so-called pixels—which are in particular arranged in rows and columns and which convert light incident through an objective of the camera into electrical signals. These signals can, for example, be charges, currents or voltages—in particular in dependence on the technology used and/or on the processing stage on the image sensor. A respective signal is in this respect proportional to a charge of the respective pixel collected by an exposure.
A read-out circuit, which is usually arranged at the edge of the image field of the image sensor formed by the pixels, receives the signals of the pixels for further processing. In the further processing, the signals of the pixels are usually converted into electrical voltages and subsequently amplified. The amplified signals can then be output via one or more outputs of the image sensor in analog form or digitized using one or more internal analog-to-digital converters and output in digital form.
Known image sensors, however, have a comparatively small dynamic range. Electronic cameras thus typically only have an intrascene dynamic range of 1:1000, whereas a chemical film can have an intrascene dynamic range of 1:50,000 or more.
To increase the intrascene dynamic range of electronic image sensors, it is known to take a plurality of single images with different exposure times for each image and subsequently to combine the single images. Since the plurality of single images are taken sequentially in this process, and thus at different times, and since the sensor has to be read out between them in each case, spatial falsification effects can arise with moving motifs. For example, reflections of a light source frequently occur in a person's eye. Due to the movement of the person and to the different taking times, the position of such a reflection then shifts and can, in an extreme case, even lie outside the eye. Such an image fault is very irritating for a human viewer since the shape of objects is estimated with reference to the position of such highlights.
In addition, it is also known for the increase of the dynamic range of an image sensor to couple the pixels to a read-out circuit having at least one amplifier, said read-out circuit being configured to amplify the at least one signal of a respective pixel with different amplification factors to generate differently amplified signals for the at least one signal of a respective pixel. This is achieved in the prior art in that the read-out circuit for a respective pixel, in particular for a respective column, includes two channels which are separate from one another and which have a respective amplifier each to amplify the at least one signal of a respective pixel, with the two channels associated with a respective pixel having different amplifications. One of the two channels can be optimized for high input signals. This can be achieved, for example, by specially modified pixels with capacitances which can be added, such as are generally described in US 2005/0052554 A1, or by pixels with overflow capacitances, such as are generally described in EP 1 681 850 A1. Additionally or alternatively, the other or one of the two channels can be optimized for a high sensitivity or for a low noise. This can be achieved, for example, by a fixed preamplification. In both cases, a plurality of signals are generated from a charge generated during a single exposure process in a respective pixel, so that the different amplifications in the two channels already act on signals of the respective pixel which differ in magnitude but are based on one and the same charge signal. It is, however, generally sufficient that two channels with different amplifications are present.
The two channels can then be read out and combined independently of one another, with an image with a higher dynamic range arising overall.
This is shown in FIGS. 1a and 1b. The amplified signals of two channels, of which one channel 101 has a high amplification and one channel 103 has a low amplification, are combined such that, with a short exposure, the amplified signal of the channel 101 with the high amplification underlies the output value 105 for the respective picture element associated with the two channels 101, 103, while, with a long exposure, the amplified signal of the channel 103 with the low amplification underlies this output value 105. The combination of the amplified signals preferably takes place after a digitizing of the amplified signals. Further preferably, the respective pixel and/or each of the two channels 101, 103 has an at least substantially linear exposure-signal characteristic, as is shown in FIG. 1a. To the extent that the analog signals of the pixels do not vary linearly with the exposure, this can be compensated by a corresponding calibration in the digitizing.
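The principle of this two-channel combination can be illustrated by the following minimal sketch, in which the gain values, the ADC full-scale value and the ideal linear digitization are illustrative assumptions, not values from the prior art documents:

```python
# Sketch of a dual-gain read-out: one pixel charge is amplified by two
# hypothetical gains; the combined output uses the high-gain channel
# until it saturates and otherwise falls back to the low-gain channel.

FULL_SCALE = 4095      # assumed 12-bit ADC full-scale value
GAIN_HIGH = 16.0       # hypothetical high amplification (channel 101)
GAIN_LOW = 1.0         # hypothetical low amplification (channel 103)

def digitize(signal: float) -> int:
    """Clip to the ADC range and quantize (ideal linear converter)."""
    return min(int(signal), FULL_SCALE)

def combine(charge: float) -> float:
    """Hard switchover between the two channels (no cross-fade yet)."""
    high = digitize(charge * GAIN_HIGH)
    low = digitize(charge * GAIN_LOW)
    if high < FULL_SCALE:
        # High-gain channel not saturated: use it, rescaled to a
        # common output scale.
        return high / GAIN_HIGH
    # Otherwise fall back to the low-gain channel.
    return low / GAIN_LOW
```

A small charge (short exposure) is thus resolved by the high-gain channel 101, a large charge (long exposure) by the low-gain channel 103, both rescaled to a common output scale.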
At the transition between short and long exposure, a simple switchover between the two channels 101, 103 is as a rule not sufficient since, due to the usually unavoidable occurrence of offset voltages, of manufacturing-induced deviations from the desired amplifications and/or of drifts, a jump 107 in the exposure-output value characteristic would occur at the transition, such as is shown in FIG. 1b. This results in visible image interference in areas of the image with constantly increasing brightness, for example under a cloudless sky, if the transition actually falls within such an area.
A cross-fading therefore usually takes place in a transition region 109 around the transition, in which both the amplified signal of the channel 101 with the high amplification and the amplified signal of the channel 103 with the low amplification are taken into account, with the two amplified signals being offset against one another such that a gentle transition arises, as is shown in the enlarged representation of the transition in FIG. 1b.
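Such a cross-fade can be sketched as a linear blend of the two rescaled channel values over the transition region; the gain values and the bounds of the transition region below are illustrative assumptions:

```python
# Sketch of cross-fading between the two digitized channel values in a
# transition region around the switchover point. The blend weight moves
# linearly from the high-gain to the low-gain channel, so small
# channel-to-channel offsets produce a gentle ramp instead of a jump.

GAIN_HIGH = 16.0   # hypothetical high amplification (channel 101)
GAIN_LOW = 1.0     # hypothetical low amplification (channel 103)
# Illustrative transition region, in units of the common output scale,
# just below the saturation point of the high-gain channel.
T_LOW = 200.0
T_HIGH = 250.0

def crossfade(high_code: int, low_code: int) -> float:
    """Combine the two digitized channel values with a linear blend."""
    high_val = high_code / GAIN_HIGH   # rescale to common output scale
    low_val = low_code / GAIN_LOW
    if high_val <= T_LOW:
        return high_val                # purely high-gain channel
    if high_val >= T_HIGH:
        return low_val                 # purely low-gain channel
    # Inside the transition region: weight runs from 0 to 1.
    w = (high_val - T_LOW) / (T_HIGH - T_LOW)
    return (1.0 - w) * high_val + w * low_val
```

Even if the two channels disagree slightly because of offsets or gain deviations, the output now ramps smoothly from one channel to the other instead of jumping at a single switchover point.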
To further reduce the noise in the channel 101 with the high amplification, and thus to further increase the dynamic range of the image sensor, the amplification of this channel could be further increased. However, the transition or the transition region 109, at which a switch or cross-fade is made from the channel 101, driven at at least almost full level there, to the channel 103, driven at only a low level there, would then be displaced to even shorter exposures. At these even shorter exposures, the signal quality of the channel 103 with the low amplification is reduced since this channel only has a very small signal there which is only slightly above the noise level of the respective pixel. The image quality would suffer accordingly.
To solve this problem, a third channel would therefore have to be provided which has a medium amplification and which, when the channel 101 with the high amplification is driven at full level, still delivers a sufficiently high signal quality.
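The selection logic then extends naturally to a cascade of channels, as the following sketch with purely hypothetical gain values illustrates: the most strongly amplified channel that is not yet saturated determines the output.

```python
# Sketch of a three-channel read-out with high, medium and low
# amplification (all values hypothetical). The most amplified channel
# that has not saturated determines the output value.

FULL_SCALE = 4095                # assumed 12-bit ADC full-scale value
GAINS = [64.0, 8.0, 1.0]         # hypothetical high, medium, low gains

def combine3(charge: float) -> float:
    """Pick the highest non-saturated channel, on a common scale."""
    for gain in GAINS:
        code = min(int(charge * gain), FULL_SCALE)
        if code < FULL_SCALE:
            return code / gain
    # All channels saturated: return the low-gain full-scale value.
    return FULL_SCALE / GAINS[-1]
```

The medium-gain channel thus takes over in the range where the high-gain channel is already saturated but the low-gain channel would still deliver only a small signal close to the noise floor.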
The provision of such a third channel, however, leads to a substantially increased construction effort and to higher costs since corresponding means then also become necessary for the evaluation, analog-to-digital conversion, calibration and/or cross-fading of the third channel on and/or outside the image sensor.