An image may be considered as a two-dimensional representation of a scene. Traditionally, imaging devices such as cameras captured images on film. More recently, digital cameras have enjoyed increasing popularity and demand. A digital camera uses a solid-state device to capture a scene from the light coming in through the lens. The solid-state device may be referred to as a sensor. There are different types of sensor, for example complementary metal oxide semiconductor (CMOS) sensors or charge coupled device (CCD) sensors. The sensor comprises a number of pixels, and each pixel may comprise a photodetector (e.g. a photodiode). The pixels may be arranged in an array. When light is incident on the sensor, each photodiode may release a number of electrons in proportion to the photon flux density incident on that photodiode. The electrons released for each photodiode may subsequently be converted into a voltage (e.g. in the case of CMOS sensors) or transferred as a charge (e.g. in the case of CCD sensors) associated with each pixel, which can then be processed to form a digital representation of the captured scene.
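The pixel readout described above can be sketched as a simple numerical model: photons arriving at a photodiode free electrons, and the accumulated charge is converted to a voltage. The following is a minimal illustrative sketch; the quantum efficiency, conversion gain, pixel area and flux values are assumed example figures, not properties of any particular sensor.

```python
# Illustrative model of pixel readout: a photodiode releases electrons
# in proportion to the incident photon flux density, and the resulting
# charge is converted to a voltage (as in a CMOS sensor).
# All numeric values below are assumed for illustration.

def electrons_released(photon_flux_density, pixel_area, exposure_time,
                       quantum_efficiency=0.5):
    """Electrons released by one photodiode during an exposure.

    photon_flux_density: photons per m^2 per second
    pixel_area: photosensitive area in m^2
    quantum_efficiency: assumed fraction of photons freeing an electron
    """
    photons = photon_flux_density * pixel_area * exposure_time
    return photons * quantum_efficiency

def pixel_voltage(num_electrons, conversion_gain=50e-6):
    """Convert accumulated electrons to a voltage.

    conversion_gain: volts per electron (illustrative value)
    """
    return num_electrons * conversion_gain

n = electrons_released(1e17, 4e-12, 0.01)  # 2000.0 electrons
v = pixel_voltage(n)                       # 0.1 V
```

The proportionality between photon flux and released electrons is the key relationship; the conversion-gain step stands in for the per-pixel charge-to-voltage circuitry mentioned above.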
A schematic illustration of an example imaging device capturing a scene is shown in FIG. 1. In this example the imaging device is denoted generally at 100 and comprises a lens 104 and an image sensor 106. Light from a scene to be captured 102 passes through lens 104. The lens focuses the light that passes through it onto the sensor 106. The sensor comprises a plurality of pixels arranged as an array indicated at 108. The amount of light that passes through the lens may be controlled by an aperture of the imaging device (not shown). A single lens is shown here for the purposes of illustration, but it will be appreciated that there may be an arrangement of lenses between the aperture and the sensor 106.
There are a number of factors that govern the performance of the sensor 106. One factor may be the resolution of the sensor, which is dependent on the number of pixels in the sensor array. For example, a greater number of pixels may enable more detail of the scene to be captured. Another factor may be the size of the pixels themselves. As well as a photodiode, each pixel may contain other components such as one or more transistors. The transistors may for example be used to convert the charge of the released electrons to a voltage, to reset the photodiode to allow a new image of the scene to be captured and/or to transfer the voltage to other circuitry for processing. Consequently, a portion of the photodiode area of each pixel may be blocked by these other components. The proportion of the photodiode area of each pixel that is unobstructed (and thus capable of absorbing photons) may be referred to as the fill factor of the sensor. A lower fill factor may result in each pixel releasing a reduced number of electrons for a given intensity of light, which may lead to a corresponding decrease in the signal-to-noise ratio for the sensor.
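The link between fill factor and signal-to-noise ratio can be illustrated numerically. The sketch below assumes photon shot noise dominates (noise equal to the square root of the signal electron count), so SNR scales with the square root of the collected signal; the electron count used is an assumed example value.

```python
import math

# Illustrative sketch: effect of fill factor on signal-to-noise ratio,
# assuming photon shot noise dominates (noise = sqrt(signal electrons)).
# The electron count is an assumed example value.

def snr(electrons_at_full_fill, fill_factor):
    """Shot-noise-limited SNR for a pixel with the given fill factor."""
    signal = electrons_at_full_fill * fill_factor
    noise = math.sqrt(signal)   # photon shot noise
    return signal / noise       # equals sqrt(signal)

print(snr(10000, 1.0))   # 100.0
print(snr(10000, 0.25))  # 50.0
```

Under this shot-noise assumption, reducing the fill factor to a quarter halves the SNR, consistent with the qualitative statement above that a lower fill factor decreases the signal-to-noise ratio.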
It may be the case that there are limitations on the physical size of the sensor to be employed in an imaging device. These limitations may be set by power requirements, cost, etc. Limiting the size of the sensor means that, to optimise the performance of the sensor, a balancing act may be required between the number of pixels in the array and the individual pixel size. For example, if the number of pixels in the array is increased at the expense of individual pixel size, the signal-to-noise ratio of the captured image may be unacceptably low. Conversely, if the size of the pixels is increased at the expense of the number of pixels in the array, the level of detail of the captured image may be unacceptably low.
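The trade-off described above can be made concrete with some simple arithmetic: for a fixed sensor area, raising the pixel count shrinks each pixel, which reduces the light collected per pixel and hence the per-pixel SNR. The sketch below again assumes shot-noise-limited operation; the sensor area and pixel counts are assumed example values.

```python
import math

# Illustrative arithmetic for the resolution/pixel-size trade-off:
# for a fixed sensor area, more pixels means smaller pixels, so less
# light (and lower shot-noise-limited SNR) per pixel.
# Sensor area and pixel counts are assumed example values.

SENSOR_AREA = 25e-6  # m^2 (e.g. a 5 mm x 5 mm sensor, assumed)

def pixel_area(num_pixels):
    """Area available to each pixel for a given array size."""
    return SENSOR_AREA / num_pixels

def relative_snr(num_pixels, reference_pixels=1_000_000):
    """SNR relative to a reference pixel count, shot-noise limited.

    Signal scales with pixel area and SNR with sqrt(signal), so
    quartering the pixel area halves the SNR.
    """
    return math.sqrt(pixel_area(num_pixels) / pixel_area(reference_pixels))

print(relative_snr(4_000_000))  # 0.5: 4x the pixels -> half the SNR
```

This captures both arms of the trade-off: increasing the pixel count lowers the relative SNR, while decreasing it raises the SNR at the cost of resolution.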