As solid-state imaging devices (image sensors) using photoelectric conversion elements detecting light and generating a charge, CMOS (complementary metal oxide semiconductor) image sensors have been put into practical use. CMOS image sensors have been widely applied as parts of digital cameras, video cameras, monitoring cameras, medical endoscopes, personal computers (PC), mobile phones and other portable terminals (mobile devices), and other various types of electronic apparatuses.
A CMOS image sensor, for each pixel, has an FD amplifier having a photodiode (photoelectric conversion element) and a floating diffusion layer (FD). The mainstream readout operation is a column-parallel output type that selects a certain row in a pixel part (pixel array part) having pixels arranged therein and simultaneously reads them out in the column output direction.
A column output type CMOS image sensor basically has a pixel part (pixel array part) having a plurality of pixels arranged in a two-dimensional matrix, a row driver (vertical scanning circuit) driving a certain one row so as to read out the pixel signals of the address-designated row in the pixel part to the column output direction simultaneously and in parallel, a column readout circuit system (column signal chains) applying predetermined signal processing to the read out signals, and a data output circuit. In the column readout circuit, an ADC (analog-to-digital converter) and other column signal processing circuits are arranged for each column, each column signal processing circuit being arranged corresponding to a column output of the pixel part.
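The column-parallel readout described above can be illustrated with a minimal behavioral model. This is a sketch, not an implementation from the source: the function names (`column_adc`, `read_frame`), the 10-bit resolution, and the use of NumPy arrays as the "pixel part" are assumptions made here purely for illustration.

```python
import numpy as np

def column_adc(analog_column, bits=10, full_scale=1.0):
    """Hypothetical per-column ADC: quantize one row's analog samples
    (all columns at once) to digital codes."""
    levels = 2 ** bits - 1
    codes = np.clip(np.round(analog_column / full_scale * levels), 0, levels)
    return codes.astype(int)

def read_frame(pixel_array, bits=10):
    """Model of column-parallel readout: the row driver address-designates
    one row at a time, and every column of that row is converted
    simultaneously by the column signal chains."""
    rows, cols = pixel_array.shape
    frame = np.empty((rows, cols), dtype=int)
    for r in range(rows):                 # vertical scanning, row by row
        analog_row = pixel_array[r, :]    # all columns read out in parallel
        frame[r, :] = column_adc(analog_row, bits=bits)
    return frame

# Example: a 4x6 "pixel part" with analog values in [0, 1]
rng = np.random.default_rng(0)
pixels = rng.random((4, 6))
digital = read_frame(pixels)
```

The key structural point the model captures is that conversion parallelism is per column while scanning is per row, which is why the column signal chains dominate the peripheral circuit area.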
Such a CMOS image sensor can be roughly divided into a pixel array part (pixel part) and a peripheral circuit part including a row driver, column readout circuit, etc. Conventionally, the pixel array part and the peripheral circuit part were mounted on the same chip, that is, the peripheral circuits were mounted on the focal plane. As a result, the chip area (projection area) of the CMOS image sensor ended up becoming larger than the originally necessary pixel array part, therefore the problem arose that a small-sized lens or lens holder could not be used and the camera substrate could not be miniaturized to the utmost limit.
Therefore, in order to solve this type of problem, various chip stacking techniques have been proposed. The chip stacking technique stacks two or more substrates (dies), irrespective of whether they are the same type or different types, to enable physical connection and electrical connection between the substrates (dies).
In a case study of stacking of a CMOS image sensor shown in NPL 1, a pixel array part, column level and row level TSVs, and I/O pads are mounted on a first substrate (CIS die) on which light is incident, while column signal chains forming a column readout circuit, a row driver, and other peripheral circuits are mounted on a second substrate (ASIC die) on the lower side in the stacking direction.
Further, in the case study of stacking of a CMOS image sensor shown in NPL 2, two groups of column signal chains having a finer pitch than the pixels, i.e., upper and lower groups, are mounted at the top and bottom of the second substrate (ASIC die), thereby raising the speed and suppressing increase of the vertical and horizontal sizes.
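The dual-bank arrangement above can be sketched in a few lines. Note the even/odd column partition below is an assumption made for illustration only; NPL 2 is not quoted here and the actual column-to-bank mapping may differ.

```python
import numpy as np

def split_readout(analog_row):
    """Assumed partition: even columns go to the upper group of column
    signal chains, odd columns to the lower group, so the two banks
    convert halves of the row in parallel."""
    upper = analog_row[0::2]   # even columns -> upper bank
    lower = analog_row[1::2]   # odd columns -> lower bank
    return upper, lower

def merge_banks(upper, lower, n_cols):
    """Reassemble one row of converted data from the two banks."""
    row = np.empty(n_cols)
    row[0::2] = upper
    row[1::2] = lower
    return row

row = np.arange(8, dtype=float)         # one 8-column row of samples
u, l = split_readout(row)               # each bank handles 4 columns
restored = merge_banks(u, l, 8)
```

Because each bank handles only half the columns, each column signal chain can occupy twice the column pitch in its bank, which is how a finer-than-pixel effective pitch is accommodated without widening the die.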
On the other hand, in the case of the CMOS image sensor shown in NPL 3, it is seen that even if the stacking technique is not used, most of the focal plane is occupied by the pixel array, therefore the ratio of the peripheral circuits is very small. In such a configuration, the optical center, the center of the pixel array, and the center of the chip are positioned at substantially the same coordinates, therefore excessive space for matching optical axes becomes unnecessary and use of the smallest lens holder becomes possible.