Imaging functionality is being incorporated into a wide variety of devices, including digital still image cameras, digital video cameras, cameras designed for desktop and mobile computers (often referred to as “pc cameras”), input devices (e.g., optical navigation sensors in computer mice), handheld electronic devices (e.g., mobile telephones), and other embedded environments. With the continuing trends toward smaller devices and toward combining multiple functionalities into a single device, there is a constant push to reduce the space required for implementing each device function, including imaging functionality.
Many imaging devices require components that are too large and bulky to be accommodated in compact device environments. For example, optical navigation sensors typically are designed to track features in high quality images of areas of a navigation surface that are on the order of one square millimeter and that are captured through imaging optics with a magnification in the range of 2:1 to 1:2. In a typical optical navigation sensor design, the imaging optics consist of a single plastic molded lens, and the image sensor consists of a 20×20 photocell array with a 50 micrometer (μm) pitch. An optical navigation sensor module with these imaging components that is capable of satisfying these operating specifications typically requires a spacing of more than ten millimeters (mm) between the image sensor and the navigation surface.
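The figures quoted above are self-consistent, which a brief back-of-envelope check makes clear: a 20×20 array at a 50 μm pitch spans 1 mm on a side, so at roughly 1:1 magnification it images approximately the one-square-millimeter surface area described. The following sketch (illustrative only, using the values stated in the text) computes the sensor footprint and the imaged area across the stated magnification range:

```python
# Back-of-envelope check of the typical optical navigation sensor
# specifications quoted above (values taken from the text; this is
# an illustration, not a description of any particular device).
PIXELS = 20        # 20x20 photocell array
PITCH_UM = 50.0    # 50 micrometer pixel pitch

# Sensor active-area side length in millimeters: 20 * 50 um = 1.0 mm.
sensor_side_mm = PIXELS * PITCH_UM / 1000.0

def imaged_area_mm2(magnification):
    """Navigation-surface area imaged onto the sensor, in mm^2,
    for a given magnification (image size / object size)."""
    side_mm = sensor_side_mm / magnification
    return side_mm * side_mm

print(sensor_side_mm)         # 1.0  -> sensor spans 1 mm per side
print(imaged_area_mm2(1.0))   # 1.0  -> ~1 mm^2 at 1:1, as stated
print(imaged_area_mm2(2.0))   # 0.25 -> 2:1 end of the stated range
print(imaged_area_mm2(0.5))   # 4.0  -> 1:2 end of the stated range
```

At 1:1 magnification the sensor therefore views about one square millimeter of the navigation surface, and the 2:1 to 1:2 magnification range corresponds to imaged areas between 0.25 mm² and 4 mm².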
The size constraints inherent in traditional optical navigation sensor designs are not a significant issue in application environments such as desktop computer mice. These size constraints, however, will inhibit the incorporation of optical navigation sensor technology in compact application environments, such as handheld electronic devices (e.g., mobile telephones) and other embedded environments. What are needed are imaging systems and methods that are capable of satisfying the significant size constraints of compact imaging application environments.