1. Field of the Disclosure
This invention is related to image sensors. In particular, embodiments of the present invention are related to three dimensional image sensors.
2. Background
Interest in three dimensional (3D) cameras is increasing as the popularity of 3D applications continues to grow in fields such as imaging, movies, games, computers, user interfaces, and the like. A typical passive way to create 3D images is to use multiple cameras to capture stereo or multiple images. Using the stereo images, objects in the images can be triangulated to create the 3D image. One disadvantage of this triangulation technique is that it is difficult to create 3D images using small devices because there must be a minimum separation distance between the cameras in order to create the three dimensional images. In addition, this technique is complex and therefore requires significant computer processing power in order to create the 3D images in real time.
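The triangulation described above can be sketched with the classic stereo relation depth = f·B/d, where f is the focal length, B is the baseline separation between the cameras, and d is the disparity between matched image points. The numeric values below are hypothetical and used only for illustration:

```python
def depth_from_disparity(focal_length_px, baseline_m, disparity_px):
    # Classic stereo triangulation: depth = f * B / d.
    # A larger baseline B improves depth resolution, which is why
    # small devices with closely spaced cameras struggle with this
    # passive approach.
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# Hypothetical values: 800 px focal length, 6 cm baseline, 10 px disparity
print(depth_from_disparity(800, 0.06, 10))  # 4.8 m
```

Note that the disparity d must first be found by matching corresponding points between the two images, which is the computationally expensive step alluded to in the paragraph above.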
For applications that require the acquisition of 3D images in real time, active depth imaging systems based on optical time of flight measurement are sometimes utilized. These time of flight systems typically employ a light source that directs light at an object, a sensor that detects the light reflected from the object, and a processing unit that calculates the distance to the object based on the round trip time that it takes for light to travel to and from the object. In typical time of flight sensors, photodiodes are often used because of the high transfer efficiency from the photo detection regions to the sensing nodes. Known time of flight sensors typically include two independent copies of photodiodes, reset transistors, source follower transistors, and row select transistors for each pixel in order to operate. The inclusion of all of these devices in each pixel of time of flight sensors has the consequence that the time of flight sensors have significantly larger pixel sizes as well as poor fill factors.
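The round-trip calculation performed by the processing unit reduces to d = c·t/2, where c is the speed of light and t is the measured round-trip time; the factor of two accounts for light traveling to the object and back. A minimal sketch, with a hypothetical 20 ns round trip:

```python
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def distance_from_round_trip(round_trip_seconds):
    # Light covers the sensor-to-object distance twice (out and back),
    # so the distance is half the total path length c * t.
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# Hypothetical: a 20 ns round trip corresponds to roughly 3 m.
print(distance_from_round_trip(20e-9))  # ~2.998 m
```

The small time scales involved (nanoseconds per meter of range) illustrate why such sensors need fast, high-transfer-efficiency photodetection in each pixel.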