Recently, demand for thinner wearable devices has been rising. Moreover, hand gesture recognition is now in greater demand than touch screens. Hence, manufacturers in the imaging art are all enthusiastic about lens module technologies for detecting the depth of field.
However, the conventional touch screen cannot detect changes in height along the Z axis, so only 2D manipulations are possible on a touch screen. In other words, 3D manipulations, such as rotating a 3D model, cannot be performed on the touch screen at all.
Referring now to FIG. 9, a conventional image ranging system is schematically shown. An infrared signal emitted from a light source 1 is projected onto the object 2; the light source 1 can be treated as a point light source irradiating the object 2. The illuminance at the object 2 is inversely proportional to the square of the transmission distance and proportional to the cosine of the incident angle of the light. Treating the reflecting surface of the object 2 as a Lambertian surface, the intensity of the light reflected from the object 2 is proportional to the cosine of the reflection angle of the light. The reflected light is received by an image sensing element placed near the light source. The illuminance at the image sensing element is again inversely proportional to the square of the transmission distance and proportional to the cosine of the incident angle of the light. Because the image sensing element receives reflected light from both the edge and the center of the object 2, the ratio of the edge illuminance to the center illuminance is:

cos³θ × cosθ × cos³θ = cos⁷θ,
in which θ is the angle between the line connecting the light source 1 to the edge of the object 2 and the normal of the object 2 that passes through the light source.
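The cos⁷θ falloff above can be sketched numerically. The function below is a hypothetical illustration (not part of any actual ranging product) that simply multiplies the three cosine factors named in the derivation:

```python
import math

def edge_to_center_illuminance_ratio(theta_rad: float) -> float:
    """Relative illuminance at the image sensing element for light
    reflected from the edge of the object versus from its center.

    The three factors from the derivation multiply together:
      cos^3(theta) -- point-source falloff onto the off-axis object point
      cos(theta)   -- Lambertian reflection from the object surface
      cos^3(theta) -- falloff of the reflected light onto the sensor
    """
    return math.cos(theta_rad) ** 3 * math.cos(theta_rad) * math.cos(theta_rad) ** 3
```

At θ = 30°, for example, the edge of the object appears at only cos⁷(30°) ≈ 0.365 of the center illuminance, which quantifies how severe the falloff is.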
In the conventional design, the infrared light source is given a luminous intensity profile characterized by I(θ) = I₀/cos⁴θ. Therefore, the ratio of the illuminance received by the image sensing element from the edge of the object 2 to that from the center of the object 2 becomes:

(1/cos⁴θ) × cos⁷θ = cos³θ,
in which θ is the angle between the line connecting the light source 1 to the edge of the object 2 and the normal of the object 2 that passes through the light source.
The illuminance distribution is thus still not uniform.
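The effect of the 1/cos⁴θ source profile can likewise be sketched. This is a minimal illustration under the stated assumptions, not the shaping used by any particular light source:

```python
import math

def compensated_edge_to_center_ratio(theta_rad: float) -> float:
    """Edge-to-center illuminance ratio after shaping the source
    intensity as I(theta) = I0 / cos^4(theta).

    The 1/cos^4 boost cancels four of the seven cosine factors,
    leaving cos^3(theta) -- flatter than cos^7(theta), but still
    not uniform.
    """
    source_boost = 1.0 / math.cos(theta_rad) ** 4
    return source_boost * math.cos(theta_rad) ** 7  # = cos^3(theta)
```

At θ = 30° the compensated ratio is cos³(30°) ≈ 0.650, a clear improvement over cos⁷(30°) ≈ 0.365, yet the edge still receives noticeably less light than the center.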
In addition, referring to FIG. 10, when the angle θ between the line connecting the object 2 and the image sensing lens 3 and the optical axis of the image sensing lens 3 is small, the distance z′ between the object 2 and the image sensing lens 3 calculated by the time-of-flight (TOF) technique approximates the horizontal distance z between the object 2 and the image sensing lens 3. Moreover, since the distance z is much greater than the effective focal length f of the image sensing lens 3, the object 2 is imaged approximately on the focal plane of the image sensing lens 3, and the vertical distance H of the object 2 with respect to the image sensing lens 3 can be calculated as:

H ≈ z × tanθ = z × (h/f),

in which h is the image height on the focal plane.
However, when the angle θ between the line connecting the object 2 and the image sensing lens 3 and the optical axis of the image sensing lens 3 is large, the distance z′ calculated by the time-of-flight (TOF) technique departs considerably from the horizontal distance z (the TOF technique measures the slant distance z′ = z/cosθ), and the vertical distance H computed in the same manner becomes:

H = z′ × tanθ = (z/cosθ) × (h/f)
It is therefore apparent that the conventional design, relying on the approximation of the distance z′, cannot accurately calculate the 3D depth of field of the object.
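The size of this approximation error can be sketched as follows. The numbers (a 1000 mm range and a 4 mm effective focal length) are hypothetical, chosen only to illustrate how the error grows with the off-axis angle:

```python
import math

def tof_slant_distance(z: float, theta_rad: float) -> float:
    """Slant distance z' actually measured by time of flight for an
    object at horizontal distance z and off-axis angle theta."""
    return z / math.cos(theta_rad)

def vertical_distance_estimate(z_measured: float, h: float, f: float) -> float:
    """Conventional estimate H = z_measured * (h / f), which is exact
    only when the measured distance equals the horizontal distance z."""
    return z_measured * (h / f)

# Hypothetical parameters: 1000 mm range, 4 mm effective focal length.
z, f = 1000.0, 4.0
for theta_deg in (5.0, 45.0):
    theta = math.radians(theta_deg)
    h = f * math.tan(theta)            # image height on the focal plane
    h_true = z * math.tan(theta)       # true vertical distance H
    h_est = vertical_distance_estimate(tof_slant_distance(z, theta), h, f)
    # The estimate is inflated by the factor 1/cos(theta): roughly 0.4%
    # at 5 degrees, but about 41% at 45 degrees.
```

This makes the conclusion above concrete: the small-angle approximation z′ ≈ z is harmless near the optical axis but introduces a 1/cosθ error that becomes severe at wide field angles.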