The major characteristic of traditional man-machine interfaces, such as keyboards, mice, joysticks, remote controls, and touch screens, is that users must use their hands and fingers to touch the mechanical structure of a device in order to input information, including text, graphics, and other operating instructions, to the machine, so as to achieve the effect of man-machine interaction.
In the present invention, the virtual input apparatus is basically defined to use the 3D movement of a hand as an input method to achieve the effect of inputting information including text, graphics, and operating instructions. In other words, the 3D movement of the hand is used as a man-machine interactive interface.
With reference to FIG. 1 for the schematic view of a virtual reality (VR) glove, the VR glove 1 is a typical device for 3D hand movement recognition. In order to detect the fine movements of the fingers of a hand, a general VR glove usually installs a strain gauge sensor or a flex sensor (not shown in the figure) at the positions of the fingers 2 to measure the physical quantity of the bent fingers. In order to achieve the effect of force feedback, the VR glove usually adopts various micro actuators (not shown in the figure). Finally, the VR glove installs a positioning device 3 for measuring the 3D coordinates and orientation of a single position on the glove. Refer to the following related patents for details.
U.S. Pat. No. 4,414,537 (Gary J. Grimes, 1983)
U.S. Pat. No. 5,047,952 (James P. Kramer, 1991)
U.S. Pat. No. 4,988,981 (Thomas G. Zimmerman, 1991)
Although the VR glove has achieved the man-machine communication effect, the structure and control of the VR glove are still too complicated and not applicable to personal computers, game consoles, PDAs, mobile phones, and home video equipment that require simple interface operations. Furthermore, the manufacturing cost is relatively high and not affordable for general users, and thus the VR glove is not popular in the consumer market. As to the technology, the positioning device used in the VR glove is typically an electromagnetic or ultrasonic device, chosen to avoid interference from the hand's movements, but such positioning devices have drawbacks such as a low response speed, which causes an obvious latency in practical operations, and a poor immunity to environmental interference. Refer to the following research report for the details.
Christine Youngblut et al., Review of Virtual Environment Interface Technology, Chapters 3 and 5, INSTITUTE FOR DEFENSE ANALYSES, 1996
For any virtual input apparatus, a positioning sensor that rapidly recognizes the movements of multiple points on a hand is the primary condition for achieving the virtual input effect. For this reason, the positioning sensor must have the following characteristics to be both practical and affordable.
1. The positioning sensor must be able to provide physical quantities (including space coordinates, displacement, velocity and acceleration) of the 3D movements of multiple points of a hand.
2. The positioning sensor must be able to detect a large spatial volume, such that users are allowed to move their hands freely in a relatively large space.
3. The positioning sensor must have a visual point tracking capability for automatically tracking the operating position of a user, thereby providing a larger operating space.
4. The positioning sensor must have a high spatial resolution. The smallest detectable displacement must be on the order of millimeters in the space where users move their hands.
5. The positioning sensor must have a quick response. The shortest response time for detecting the physical quantities of the 3D movements of the users' hands must be on the order of milliseconds.
6. The manufacturing cost of the positioning sensor must be as low as that of a regular computer peripheral.
Based on the foregoing required conditions, the degree of performance achieved by the prior arts is examined. In the past, the technologies capable of measuring the physical quantity of a single-point 3D movement included the static electric field, static magnetic field, ultrasonic wave, electromagnetic wave, and trigonometric methods, as disclosed in the following related patents:
    Static Electric Field Method: U.S. Pat. No. 6,025,726 (Neil Gershenfeld, 2000)
    Static Magnetic Field Method: U.S. Pat. No. 4,945,305 (Ernest B. Blood, 1990)
    Ultrasonic Wave Method: U.S. Pat. No. 5,214,615 (Will Bauer, 1993)
    Electromagnetic Wave Method: U.S. Pat. No. 4,613,866 (Ernest B. Blood, 1986) and U.S. Pat. No. 5,739,812 (Takayasu Mochizuki, 1998)
    Trigonometric Method (Image Processing, 2D Camera): U.S. Pat. No. 4,928,175 (Henrik Haggren, 1990) and U.S. Pat. No. 6,810,142 (Nobuo Kochi, 2004)
    Trigonometric Method (2D Optical): U.S. Pat. No. 5,319,387 (Kouhei Yoshikawa, 1994)
The aforementioned technologies cannot fully satisfy the requirements of a high spatial resolution, a high-speed response, a large detectable volume, and a low manufacturing cost, and such technologies are not the subjects for discussion in the present invention. The technology explored by the present invention is the positioning technology based on 1D optics. Unlike the aforementioned technologies, the 1D optical positioning technology can satisfy all the requirements of a high spatial resolution, a high-speed response, a large detectable volume, and a low manufacturing cost. Issued patents of the related 1D optical positioning technology are listed as follows:
    U.S. Pat. No. 3,084,261 (Donald K. Wilson, 1963)
    U.S. Pat. No. 4,092,072 (Stafford Malcolm Ellis, 1978)
    U.S. Pat. No. 4,193,689 (Jean-Claude Reymond, 1980)
    U.S. Pat. No. 4,209,254 (Jean-Claude Reymond, 1980)
    U.S. Pat. No. 4,419,012 (Michael D. Stephenson, 1983)
    U.S. Pat. No. 4,973,156 (Andrew Dainis, 1990)
    U.S. Pat. No. 5,198,877 (Waldean A. Schulz, 1993)
    U.S. Pat. No. 5,640,241 (Yasuji Ogawa, 1997)
    U.S. Pat. No. 5,642,164 (Yasuji Ogawa, 1997)
    U.S. Pat. No. 5,907,395 (Waldean A. Schulz, 1999)
    U.S. Pat. No. 5,920,395 (Waldean A. Schulz, 1999)
    U.S. Pat. No. 6,584,339 B2 (Robert L. Galloway, 2003)
    U.S. Pat. No. 6,587,809 B2 (Dennis Majoe, 2003)
    U.S. Pat. No. 6,801,637 B2 (Nestor Voronka, 2004)
    U.S. Pat. No. 7,072,707 B2 (Robert L. Galloway, 2006)
The positioning technology based on 1D optics was first disclosed in U.S. Pat. No. 3,084,261 (Donald K. Wilson, 1963). Wilson used two perpendicular 1D cylindrical lenses (or simply 1D lenses), two triangular and two square silicon photovoltaic cells to measure the azimuth and elevation of the sun and to automatically track the movement of the sun. In 1978, Ellis used a V-shaped aperture and a linear array of light-sensitive elements to achieve the same effect of angular measurement.
In 1980, Reymond first proposed the 3D coordinate positioning technology based on 1D optics, and the major features of the technology are given below:
1. Assembly of Optical System
The optical system comprises three linear positioning sensors, each composed of a 1D lens, a filter, a linear array of photosensitive elements, and a signal reading circuit for the linear array, together with a method of spatial coordinate calculation. In the spatial arrangement of the three linear positioning sensors, the long axes of the linear photosensitive arrays are disposed on a common plane; the long axes of the first and second linear arrays are parallel, while the long axis of the first (and second) linear positioning sensor is perpendicular to that of the third linear positioning sensor.
2. Theory of Computing 3D Coordinates
The theory of computing the 3D coordinates is provided under the condition of the aforementioned common plane. In this method, the position of a measured point light source, the central axis of each 1D lens, and the image position on each of the three linear sensor arrays constitute three geometric planes, and the intersection point of the three planes gives the coordinates of the point light source.
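As an illustration only (the code below is not part of Reymond's disclosure, and the specific plane values are hypothetical), the intersection of the three planes can be computed by solving a 3x3 linear system, where each plane is described by a normal vector n and a constant d with n . x = d:

```python
# Illustrative sketch only: each linear positioning sensor defines a
# plane  n . x = d  containing the point light source; the source is
# the unique intersection point of the three planes.
def det3(m):
    """Determinant of a 3x3 matrix given as a list of rows."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def intersect_planes(normals, ds):
    """Solve the 3x3 system n_i . x = d_i by Cramer's rule."""
    D = det3(normals)
    coords = []
    for k in range(3):                      # replace column k by the d vector
        mk = [row[:] for row in normals]
        for row, d in zip(mk, ds):
            row[k] = d
        coords.append(det3(mk) / D)
    return coords

# Hypothetical planes chosen to pass through a source at (1, 2, 3):
normals = [[1, 0, 0], [0, 1, 0], [1, 1, 1]]
ds = [1, 2, 6]                              # d_i = n_i . (1, 2, 3)
print(intersect_planes(normals, ds))        # -> [1.0, 2.0, 3.0]
```

A unique solution exists only when the three normals are linearly independent, which corresponds to the requirement that the three sensor planes are not degenerate.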
3. Multi-Point Positioning Effect
The lighting of the multiple point light sources is switched alternately and periodically, such that each point light source emits light at a different time, preventing the image overlap phenomenon and obtaining the correct image correspondence among the three linear positioning sensors (hereinafter, this technology is referred to as the time modulation method for simplicity), so as to achieve the positioning of three point light sources.
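The time modulation method can be sketched as follows (the scheduling function is hypothetical illustration, not code from the patent): sources are lit one at a time in a fixed periodic cycle, so every sensor frame images exactly one known source and no image overlap can occur.

```python
# Illustrative sketch of time modulation: the point light sources are
# switched on one at a time in a fixed periodic cycle, so each sensor
# frame contains the image of exactly one known source.
def lit_source(frame, num_sources):
    """Index of the single source that is on during the given frame."""
    return frame % num_sources

# Three sources over six consecutive frames:
schedule = [lit_source(f, 3) for f in range(6)]
print(schedule)  # -> [0, 1, 2, 0, 1, 2]
```

The cost of this scheme is that each source is sampled only once per cycle, so the effective sampling rate per source drops as the number of sources grows.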
4. Signal Processing of Measured Data
In the signal reading circuit of a linear sensor array, a threshold comparison circuit is installed to remove unnecessary background light.
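A software analogue of that threshold comparison (hypothetical code, not the patent's circuit) would zero out samples at or below a chosen threshold, keeping only the peaks produced by the point light sources:

```python
# Illustrative software analogue of the threshold comparison circuit:
# samples at or below the threshold are treated as background light and
# zeroed, keeping only the source image peaks in the array readout.
def threshold_filter(samples, threshold):
    return [s if s > threshold else 0 for s in samples]

readout = [2, 3, 41, 5, 2]            # one scan of the linear sensor array
print(threshold_filter(readout, 10))  # -> [0, 0, 41, 0, 0]
```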
In addition, Reymond's patent also mentioned the following possible extensions of the technology (which were neither discussed nor claimed in the patent):
5. Extension of Measuring More Points
To measure the positions of more points, the number of linear positioning sensors can be increased.
6. Extension of Spatial Arrangement
As to the arrangement of the linear sensor arrays, it is not necessary to dispose the linear sensors on a common plane.
For the aforementioned two extensions, Reymond did not teach any theoretical calculation for obtaining the space coordinates of the tested points.
As to the positioning of a 3D point, Reymond's patent fully discloses the principle, architecture, and basic technology of the 1D optical positioning system. Later patents, from the one disclosed by Stephenson in 1983 to the one disclosed by Galloway in 2006, generally continued using Reymond's principle and architecture, and their applications remained in the measurement area without special breakthroughs, as described below.
U.S. Pat. No. 4,419,012 (Michael D. Stephenson, 1983)
Basically, this patent is an improvement of a portion of Reymond's patent, and Stephenson's patent is characterized by the improvement of the synchronization method. Reymond adopted a wired method to achieve synchronization between the lighting timing of the point light sources and the scanning timing of the linear sensor array. Stephenson adopted a PIN diode to monitor the lighting timing of each point light source, so as to synchronously start the scanning timing of the linear sensor array; thus Stephenson used a wireless method to achieve the effect of synchronization.
U.S. Pat. No. 4,973,156 (Andrew Dainis, 1990)
Dainis' patent adopted almost the whole concept of Reymond's patent. Although Dainis disclosed a common plane with an angle of 120° for the spatial arrangement of three linear positioning sensors and a common plane with an angle of 45° for the spatial arrangement of four linear positioning sensors, Dainis did not give the detailed theoretical calculation for these two spatial arrangements. In addition, although a simultaneous illumination of multiple points was mentioned, its physical implementation and method were not taught. Further, as to the image overlap phenomenon (as disclosed in R.O.C. Pat. Application No. 096113579), no discussion in this regard has been found.
U.S. Pat. No. 5,198,877 (Waldean A. Schulz, 1993)
Basically, Schulz's patent is an application of Reymond's patent. It uses a hand-held 1D laser scanner which scans and projects linear laser light spots onto a surface of the tested object; two sets of linear positioning sensors are used for obtaining the relative coordinates of the laser light spots reflected by the tested object; three sets of Reymond's linear positioning sensors are then used to measure three pilot light emitters installed on the laser scanner; and finally the absolute coordinates of the laser light spots reflected by the tested object can be calculated. Regarding the lighting of the multiple point light sources, Schulz adopts Reymond's method without any innovation. As to the lighting of the three pilot light emitters, although Schulz has mentioned, but not claimed, the use of light sources with different wavelengths (or different colors) and light sources with different frequency modulations, physical implementations were not taught.
    U.S. Pat. No. 5,640,241 (Yasuji Ogawa, 1997)
    U.S. Pat. No. 5,642,164 (Yasuji Ogawa, 1997)
Basically, Ogawa's two patents, which are improvements of a portion of Reymond's patent, have the major characteristic of using a 2D photosensitive array and a combined type 1D lens, with the advantage of a simplified mechanism. However, they cannot improve the spatial resolution of any measurement (note: the resolution does not rely on the use of a 1D or 2D photosensitive array, but on the size of a single pixel on the photosensitive array, the optimization of the point light source, and the setting of other optical parameters), cannot improve the sampling rate (note: the use of a 2D photosensitive array only reduces the sampling rate), and cannot lower the manufacturing cost (note: the combined type 1D lens incurs a high manufacturing cost); moreover, they contain no description of the signal processing of the measured data or the handling of a plurality of points.
    U.S. Pat. No. 5,907,395 (Waldean A. Schulz, 1999)
    U.S. Pat. No. 5,920,395 (Waldean A. Schulz, 1999)
Basically, these two patents are applications of Reymond's patent and a supplement to a small portion of it. The supplement resides in the improvement of the point light source: a spherical or planar diffuser is used to obtain a point light source with a larger diffusion angle. As to the processing of background light, a software method is used, wherein a signal of the background light is recorded into a memory, and the background light is subtracted from the measured signal in an actual measurement to obtain the original signal. As to the method of lighting up the plurality of point light sources, Reymond's method is adopted without further innovation.
    U.S. Pat. No. 6,584,339 B2 (Robert L. Galloway, 2003)
    U.S. Pat. No. 6,801,637 B2 (Nestor Voronka, 2004)
    U.S. Pat. No. 7,072,707 B2 (Robert L. Galloway, 2006)
Basically, the aforementioned three patents are applications of Reymond's patent and offer no innovation in the positioning technology.
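The software background handling described above for Schulz's two patents (a background signal recorded into memory and subtracted from each actual measurement) can be sketched as follows; the code and sample values are hypothetical illustration only:

```python
# Illustrative sketch of software background subtraction: the background
# light is recorded once with all point sources off, stored in memory,
# and subtracted from each measured scan to recover the original signal.
def subtract_background(measured, background):
    return [max(m - b, 0) for m, b in zip(measured, background)]

background = [3, 4, 3, 5, 4]   # recorded with all point sources off
measured   = [3, 5, 60, 7, 4]  # scan during an actual measurement
print(subtract_background(measured, background))  # -> [0, 1, 57, 2, 0]
```

Clamping at zero discards sensor noise that dips below the stored background level, which is the usual choice when only source peaks matter.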
In summary of the aforementioned patents, we can draw the following conclusions:
(1) Theoretical Calculation
As to the theoretical calculation of the 3D coordinates of the point light source, no new theory has been provided other than the simple theoretical calculation in Reymond's patent. In the academic field, the following related thesis was published: Yasuo Yamashita, Three-dimensional Stereometric Measurement System Using Optical Scanners, Cylindrical Lenses, and Line Sensors, SPIE 361, August 1982.
The theory described by Yamashita is applicable only if the directions of the long axes of the linear sensor arrays are arranged on a common plane and the optical axes of the 1D lenses are arranged on a common plane. Yamashita's theory is not a general theory of 3D positioning. A general theoretical calculation developed for linear positioning sensors with arbitrarily arranged positions and orientations has been disclosed by the following patents:
    R.O.C. Pat. Application No. 096108692
    R.O.C. Pat. Application No. 096113579
    R.O.C. Pat. Application No. 096116210
(2) Technology
The prior arts disclosed in the foregoing patents cannot break through the patent claims of Reymond (1980). In particular, for the image overlap phenomenon, no improvement or innovation was made after Stephenson's patent (1983).
(3) Application
All of the foregoing patents are applied to 3D position measurement, but none of them has disclosed the application of a virtual input. R.O.C. Pat. Application No. 096116210 has disclosed the use of the 1D optical positioning technology for the application of virtual input, and that patent first disclosed a 3D mouse which uses gestures to achieve the purpose of a man-machine interface.