Conventional CMOS image sensors are generally of the type referred to as “active pixel” sensors (APS), wherein each pixel comprises a photodetector, an integration capacitor and a readout circuit. The integration capacitor is charged to an initialization voltage, then discharged through the photodetector at a rate that depends on the light intensity incident on the photodetector (equivalently, the capacitor may be considered to integrate the photogenerated charge). The voltage across the terminals of the integration capacitor is read out after a determined time, allowing said light intensity to be determined, and the pixel is then reset. In “digital pixel” architectures, each pixel further comprises an analog-to-digital converter and a memory, and therefore delivers, as output, a digital signal representative of the light intensity to which it has been exposed. In these architectures, all of the pixels are read out at a predefined image acquisition rate. This results in high energy consumption, which is undesirable for certain applications, for example in the case of energy-autonomous sensors.
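The synchronous “digital pixel” readout described above can be sketched as follows. This is a minimal illustrative model, not taken from any cited document: the voltage names, the linear discharge constant K and the 8-bit converter are all assumptions made for the example.

```python
# Illustrative model (assumed constants): synchronous "digital pixel" readout,
# where every pixel integrates for a fixed time and is then digitized,
# regardless of whether its content has changed.

V_INIT = 3.3        # initialization voltage of the integration capacitor (V)
T_INT = 0.01        # fixed integration time (s)
K = 50.0            # assumed discharge rate per unit intensity (V/s per a.u.)
ADC_BITS = 8        # resolution of the per-pixel ADC

def read_pixel(intensity):
    """Voltage left on the capacitor after the integration time,
    quantized by the per-pixel analog-to-digital converter."""
    v = max(V_INIT - K * intensity * T_INT, 0.0)
    return round((V_INIT - v) / V_INIT * (2**ADC_BITS - 1))

# Every pixel is read out at the frame rate, even static ones,
# which is the source of the high energy consumption mentioned above.
frame = [read_pixel(i) for i in (0.5, 2.0, 6.0)]
```

The point of the sketch is that the digital code grows with the voltage drop accumulated during the fixed integration window, and that the whole array is digitized at every frame.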
A different approach is that of sensors with temporal coding (the term “event-based” sensor is also used). In this type of imager, each pixel integrates the charge generated by its photodetector until its voltage reaches a threshold; once this condition has been met, it sends a signal allowing it to be identified (its address); it is generally said to “raise a flag”. The light intensity incident on the pixel is determined by measuring the time that has elapsed between the initialization of the integration capacitor and the time at which the flag was raised. In other types of sensors with temporal coding, the condition determining whether the flag is raised may be an event other than the crossing of a threshold, for example a difference with respect to the voltage value of another pixel or to a previously measured voltage value. In all cases, the pixels are read out asynchronously. By way of example, document US 2005/273661 discloses a matrix-array sensor with temporal coding.
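The threshold-crossing variant of temporal coding can be sketched as a time-to-first-spike mapping: brighter pixels cross the threshold sooner, so the elapsed time encodes the intensity. The constants and function names below are illustrative assumptions, and a linear discharge is assumed for simplicity.

```python
# Hedged sketch (assumed constants, assumed linear discharge): time-to-first-
# spike coding. A pixel "raises its flag" when its integrator crosses the
# threshold; the elapsed time since reset encodes the light intensity.

V_INIT = 3.3        # initialization voltage (V)
V_TH = 0.3          # flag raised once the capacitor voltage falls to V_TH
K = 50.0            # assumed discharge rate per unit intensity (V/s per a.u.)

def time_to_flag(intensity):
    """Time from reset to the threshold crossing; brighter pixels fire earlier."""
    return (V_INIT - V_TH) / (K * intensity)

def intensity_from_time(t_flag):
    """Inverse mapping used at readout: recover intensity from the spike time."""
    return (V_INIT - V_TH) / (K * t_flag)
```

The inverse relation between intensity and spike time is what makes the readout inherently asynchronous: each pixel announces itself at its own moment rather than at a fixed frame rate.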
In sensors with temporal coding, the asynchronous readout of pixels entails a risk of conflict, since several pixels may attempt to transmit their address at the same time over one and the same communication bus. Typically, this problem is solved by means of an arbitration mechanism, for example one implementing the AER (address event representation) protocol, in which events are represented by their address and the time at which that address was transmitted; this requires handshaking. The use of an arbiter makes the structure of the sensor more complex, which has a cost in terms of occupied silicon area, and substantially increases its energy consumption.
Document FR 3 035 759 describes an image sensor with temporal coding in which redundancy is avoided by virtue of a mechanism for inhibiting the pixels close to a pixel that has raised its flag, and conflicts between the remaining events are prevented by means of the AER protocol.
The article by J. A. Leñero-Bardallo, R. Carmona-Galán and Á. Rodriguez-Vázquez “A high dynamic range image sensor with linear response based on asynchronous event detection”, 2015 European Conference on Circuit Theory and Design, describes an image sensor combining event-based, asynchronous readout and conventional analog readout. The light intensity associated with a pixel is determined from the number of events generated within a predetermined acquisition time and from the “residual” representing the voltage across the terminals of the integrator at the end of this acquisition time. This allows the light intensity dynamic range of the sensor to be increased, but an arbiter remains necessary.
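The count-plus-residual readout described in that article can be sketched as follows. This is a simplified linear model with assumed constants, not the authors' circuit: within a fixed acquisition time the integrator wraps around N times, each wrap emitting an event, and the leftover “residual” voltage refines the estimate.

```python
# Sketch of a count-plus-residual readout (assumed constants): the event count
# covers the coarse range, the residual voltage at the end of the acquisition
# time refines it, extending the dynamic range while keeping a linear response.

V_SWING = 3.0       # voltage swing of one integration ramp (V)
K = 50.0            # assumed discharge rate per unit intensity (V/s per a.u.)
T_ACQ = 0.01        # predetermined acquisition time (s)

def acquire(intensity):
    """Return (event count, residual voltage) after the acquisition time."""
    total_drop = K * intensity * T_ACQ
    n_events = int(total_drop // V_SWING)
    residual = total_drop - n_events * V_SWING
    return n_events, residual

def estimate(n_events, residual):
    """Linear reconstruction from the event count and the residual."""
    return (n_events * V_SWING + residual) / (K * T_ACQ)
```

A very bright pixel simply wraps around more often instead of saturating, which is why this scheme increases the light intensity dynamic range.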
The article by Z. Kalayjian and A. G. Andreou “Asynchronous Communication of 2D Motion Information Using Winner-Takes-All Arbitration”, Analog Integrated Circuits and Signal Processing, 13, 103-109 (1997) describes an image sensor with temporal coding using WTA (winner-takes-all) arbitration instead of an AER protocol. See also the article by N. Massari, S. Arsalan Jawed and M. Gottardi “A Collision-Free Time-to-First Spike Camera Architecture Based on a Winner-Take-All Network”, 18th European Conference on Circuit Theory and Design, 2007 (ECCTD 2007).
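A winner-takes-all arbitration of the kind used in those articles can be sketched at a behavioral level. The model below is hypothetical and much simpler than the analog WTA networks described there: among all pixels flagging in the same cycle, the contender with the largest drive wins the shared bus, and the losers keep their flags raised and retry.

```python
# Minimal behavioral model (hypothetical) of winner-takes-all arbitration:
# of the pixels flagging simultaneously, only the strongest contender drives
# the shared bus; the others wait and contend again in the next cycle.

def wta_serialize(flags):
    """flags: dict mapping a pixel address to its drive strength.
    Returns the order in which addresses win access to the shared bus."""
    pending = dict(flags)
    order = []
    while pending:
        winner = max(pending, key=pending.get)  # strongest contender wins
        order.append(winner)
        del pending[winner]                     # winner released; rest retry
    return order
```

The serialization avoids collisions on the bus without the request/acknowledge handshaking of the AER protocol, which is the appeal of WTA arbitration.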
In all cases, arbitration is required, as otherwise the quality of the acquired images risks being severely degraded due to conflicts between pixels attempting to transmit their addresses at the same time.