1. Field of the Invention
The present invention relates generally to scanning devices, and more particularly to an optical tracking sensor method such as may be used in a computer mouse.
2. Description of the Prior Art
An optical sensor that can detect relative motion and position is useful as a component of an optical computer mouse, and in other optical tracking applications. The purpose of the optical sensor is to detect relative motion between the sensor and a patterned or textured “work” surface. The optical sensor works by capturing successive images of the patterned and/or textured work surface, and then determining successive displacement vectors.
FIG. 1 shows the basic components of a current art optical mouse system. A light source and an optical waveguide illuminate a pattern in a work surface. The pattern is often microscopic and not readily visible to the naked eye. An optical lens images the work surface onto a focal-plane sensor chip. With an integrated imaging array, analog circuitry, an analog-to-digital converter (ADC), and a digital signal processor, the sensor chip converts the optical inputs into x and y displacement vector outputs. These outputs are used to determine the direction and magnitude of the movement of the mouse.
One commonly used method of calculating vector outputs from an optical sensor is “block matching”. The basic concept of the block matching technique is illustrated in FIG. 2. In block matching, images of a block of the work surface (a window of pixels) are taken by an imaging array at two different times. The images are then compared for matching. Perfectly matched blocks indicate identical locations on the work surface. Any displacement found between the two compared image blocks represents the displacement of the sensor, i.e. how much the sensor has moved, relative to the work surface.
An ideal imaging array has pixel voltage outputs that can be represented as follows:

Vpixel(i, j, x, y) = S(i + x, j + y)

where Vpixel is the voltage output of the pixel in column i and row j when the sensor is at a horizontal displacement of x and a vertical displacement of y with respect to the surface, and S is the light reflected towards the imaging sensor from the work surface under uniform illumination. The units of x and y are chosen so that one unit distance on the work surface is imaged into a distance of one pixel in the sensor. As the sensor moves, x and y change over time. In the case illustrated in FIG. 2, (x, y) changed from (0, 0) to (+1, −2), that is, (Δx, Δy) = (+1, −2). Pixel (i, j) = (4, 4) can be used as a reference, as it is approximately in the middle of the first frame image. The voltage output of this pixel would be

Vpixel(4, 4, 0, 0) = S(4, 4)

which is the same as the voltage of pixel (3, 6) in the second frame:

Vpixel(3, 6, +1, −2) = S(4, 4).
The first frame pixel (4, 4) and the second frame pixel (3, 6) are matching pixels, and their neighboring pixels form matching blocks. The offset between the matching pixels is (Δi, Δj) = (−1, +2); its negative, (−Δi, −Δj) = (+1, −2), equals the displacement of the sensor relative to the work surface.
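The ideal pixel model and the matching-pixel relationship above can be checked numerically. The following is a minimal sketch, not part of the patent: the surface function S is simulated as a hypothetical random texture, and `v_pixel` is an illustrative name for the ideal imaging model Vpixel(i, j, x, y) = S(i + x, j + y).

```python
import numpy as np

# Hypothetical textured work surface S, sampled on an integer grid.
# Any deterministic 2-D array serves for illustration.
rng = np.random.default_rng(0)
S = rng.random((16, 16))

def v_pixel(i, j, x, y):
    """Ideal pixel model: Vpixel(i, j, x, y) = S(i + x, j + y)."""
    return S[i + x, j + y]

# Sensor moves from (x, y) = (0, 0) to (+1, -2), as in FIG. 2.
# First-frame reference pixel (4, 4) reads S(4, 4) ...
assert v_pixel(4, 4, 0, 0) == S[4, 4]
# ... which matches second-frame pixel (3, 6): S(3 + 1, 6 - 2) = S(4, 4).
assert v_pixel(3, 6, +1, -2) == S[4, 4]
```

The pixel offset between the two frames is (Δi, Δj) = (−1, +2), and its negative (+1, −2) recovers the sensor displacement.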
The block matching calculation takes the following form:
min over (Δi, Δj) of  Σ (i = 0 to m−1) Σ (j = 0 to m−1) |Vpixel(i + Δi, j + Δj, x + Δx, y + Δy) − Vpixel(i, j, x, y)|^n

where m is the width and height of the blocks and n is typically 1 or 2. The first Vpixel term is the pixel voltage output in the current frame at some offset (Δi, Δj). The second Vpixel term is the pixel voltage output in the reference (a previous) frame. The absolute difference, raised to the power n, is a measure of the mismatch of the pixel outputs. The summation is taken over all the pixels in the blocks, which must remain inside the images. The offset (Δi, Δj) for which the summation is minimal corresponds to the best match. The displacement (Δx, Δy) found by block matching is then (−Δi, −Δj).
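The minimization above can be carried out as an exhaustive search over candidate offsets. The following is a minimal sketch, not the patent's implementation: the function name `block_match`, the search window `max_offset`, and the simulated frames `ref` and `cur` are all illustrative assumptions.

```python
import numpy as np

def block_match(ref, cur, i0, j0, m, max_offset, n=1):
    """Exhaustive block matching: find the offset (di, dj) minimizing
    sum over an m x m block of |cur[i+di, j+dj] - ref[i, j]|**n,
    then return the sensor displacement (dx, dy) = (-di, -dj)."""
    block = ref[i0:i0 + m, j0:j0 + m]  # reference block at (i0, j0)
    best_cost, best = None, (0, 0)
    for di in range(-max_offset, max_offset + 1):
        for dj in range(-max_offset, max_offset + 1):
            a, b = i0 + di, j0 + dj
            # The shifted block must remain inside the current image.
            if a < 0 or b < 0 or a + m > cur.shape[0] or b + m > cur.shape[1]:
                continue
            cost = np.sum(np.abs(cur[a:a + m, b:b + m] - block) ** n)
            if best_cost is None or cost < best_cost:
                best_cost, best = cost, (di, dj)
    di, dj = best
    return -di, -dj

# Simulate two frames of the ideal model Vpixel(i, j, x, y) = S(i + x, j + y)
# over a hypothetical random texture S, taken at (0, 0) and then (+1, -2).
rng = np.random.default_rng(1)
S = rng.random((32, 32))
ref = S[4:20, 4:20]   # frame at (x, y) = (0, 0), origin offset by (4, 4)
cur = S[5:21, 2:18]   # frame at (x, y) = (+1, -2): reads S(i + 4 + 1, j + 4 - 2)

dx, dy = block_match(ref, cur, i0=5, j0=5, m=6, max_offset=3)
# Recovers the displacement (dx, dy) = (1, -2).
```

In the noiseless model the matching block differs by exactly zero, so the minimum of the summation is unambiguous; a real sensor adds noise, which is one reason n = 2 (squared differences) is a common choice.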
As with most methods, the block matching technique can be implemented and improved in several ways. Accordingly, a chief object of the present invention is to optimize the method of calculating the direction and magnitude of the displacement vectors of the optical sensor, and then processing those outputs.
Another object of the present invention is to provide a real-time adaptive calibration function with the sensor.
Still another object of the present invention is to provide a system that maximizes working dynamic range while minimizing power consumption.