Touch panels and electronic whiteboards are computer input device applications in which image sensor modules are currently applied. In these applications the sensor images are captured by a computer and analyzed to determine the coordinates of the point indicated by the user (see U.S. Pat. No. 6,421,042).
In the case of a touch panel application, the sensor modules are used in conjunction with an active display such as an LCD or plasma display, or a passive display such as a projector screen. In the case of an active display, the input area of the display defines an active area that the user can interact with, and in the case of the passive display the projected image defines the input area that the user can interact with.
In the case of an electronic whiteboard application, the user writes information on the panel which in turn is transferred to the computer. The area allocated to writing defines the input area that the user can interact with.
In both the touch panel and electronic whiteboard applications described above, the image sensor modules must be located in such a way that they cover the predefined input area of the display or whiteboard. The placement of the image sensor modules in turn defines the parameters that are used by an image processing system to calculate the coordinates of an object within the input area of the panel. The coordinates are calculated by triangulation, which uses as parameters the interval between the sensor modules and the three independent rotational angles of each sensor module.
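The triangulation described above can be sketched in a simplified planar form. The example below assumes two sensor modules on a common baseline a known interval apart, each reporting only the in-plane angle to the object; the three independent rotational angles per module mentioned above account for three-dimensional mounting, which this flat sketch omits. The function name and parameters are illustrative, not taken from any cited patent.

```python
import math

def triangulate(d, alpha, beta):
    """Locate an object by triangulation from two sensors on a baseline.

    d     -- interval between sensor A at (0, 0) and sensor B at (d, 0)
    alpha -- angle (radians) from the baseline to the object, seen from A
    beta  -- angle (radians) from the baseline to the object, seen from B

    With the object at (x, y): tan(alpha) = y / x and tan(beta) = y / (d - x),
    so x = d * tan(beta) / (tan(alpha) + tan(beta)) and y = x * tan(alpha).
    """
    ta, tb = math.tan(alpha), math.tan(beta)
    x = d * tb / (ta + tb)
    y = x * ta
    return x, y
```

For instance, with the sensors 2 units apart and both modules seeing the object at 45 degrees, the object lies at (1, 1), midway along the baseline.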
In the case of a touch panel application, the image processing system extracts the following information from the images captured by the sensor modules:
- The location (coordinates) of an object on the panel;
- The location of the panel surface in the sensor image; and
- Whether an object is touching the panel surface or is close enough to the panel surface to be considered touching.
The panel surface typically appears at the edge of a fixed band of pixel lines defined by the mechanical alignment of the image sensor module with respect to the surface.
In order for the image processing system to decide when a pointing object touches or is close enough to the panel surface, a virtual layer with a predetermined thickness is defined in the vicinity of the panel surface. This layer is referred to as a detection zone, and a band of pixels is defined on the image sensor to correspond to the detection zone. When the pointing object is within this band, an algorithm determines that the object is touching the surface of the panel. The location and the mechanical alignment of the sensor module with respect to the panel surface are critical for this approach to work.
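The detection-zone test amounts to a simple pixel-band comparison. The sketch below assumes an image orientation in which the panel surface sits at a fixed row index and the pointing object approaches it from lower row indices; the function name, parameter names, and the band width of 4 pixels are illustrative assumptions, not values from the prior art.

```python
def is_touching(object_row, surface_row, zone_px=4):
    """Decide whether an object counts as touching the panel surface.

    object_row  -- lowest image row occupied by the pointing object
    surface_row -- fixed row where the panel surface appears in the image
    zone_px     -- thickness, in pixel rows, of the detection zone band

    The object is considered touching when its tip lies within the band
    of zone_px rows immediately above the surface row.
    """
    return 0 <= surface_row - object_row <= zone_px
```

Because the surface row and band are fixed at alignment time, any shift in the module's mechanical mounting invalidates this test, which is precisely the alignment sensitivity noted above.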
These devices of the prior art, however, have the following problems:
- Critical mechanical alignment of the image sensor modules is required to define the portion of the sensor image that is used by the detection algorithm of the image processing system;
- Precision housing units and special mounting brackets are required that work only with certain panel configurations; and
- The image sensor modules are installed in place as part of the device, with an interval fixed at the time of manufacturing and with an input area limited to a single area defined by the factory configuration.
Thus, it is impossible to quickly create an interactive computer input area by placing image sensor modules on a surface such as a table using the conventional art, because of the critical mechanical alignment required to allocate a fixed portion of the pixels to a detection zone for the image processing system.
Meanwhile, no input device is currently available that uses image sensor modules to define an arbitrary virtual input area in space.