A user usually interacts with entities in a virtual environment by manipulating a mouse, joystick, wheel, game pad, track ball, or other user input device that causes a virtual entity to move in a specific manner or carry out some other action or function as defined by the software program that produces the virtual environment. The effects of a user's interaction with an entity in the virtual environment are generally visible on a display. For example, a user might use a conventional input device to provide the user input for controlling a virtual entity such as a spaceship or race car that is displayed in the game or virtual environment.
Another form of user input employs displays that are responsive to the touch of a user's finger or a stylus. Touch-responsive displays can be pressure activated, responsive to electrical capacitance or to changes in magnetic field intensity, employ surface acoustic waves, or respond to other variables that indicate the location of a finger or stylus on the display. Another type of touch-sensitive display includes a plurality of optical sensors spaced apart around the periphery of the display screen so that the location of a finger or stylus touching the screen can be detected. Using one of these touch-sensitive displays, a user can more directly control a virtual entity or image that is displayed. For example, the user may touch the displayed virtual entity to select it and then drag the entity to a new position on the touch-sensitive display, or touch a control and drag the control to change some parameter.
However, most such touch-sensitive displays respond only to the touch of a finger or stylus at a single point. There is another type of interaction with a virtual environment that might provide a much richer experience for the user. While virtual environments produced, for example, by electronic game software programs often include virtual entities that are displayed on a screen, it would be desirable for the virtual environment to also respond to physical objects that are placed on the display surface. In most prior art touch-sensitive displays, the finger or stylus is treated simply as an alternative type of pointing device used to make selections or drag elements about on the screen. To be truly interactive in responding to physical objects placed on it, a display surface should be able to detect where one or more physical objects are positioned on the surface, as well as be able to distinguish the different types of physical objects placed on the surface, where each object might provide a different interactive experience for the user. However, the capacitive, electromagnetic, optical, or other types of sensors used in conventional touch-sensitive displays typically cannot simultaneously detect the location of more than one finger or object touching the display screen at a time, and thus would be unable to detect the location or the type of each of a plurality of different physical objects placed thereon. These prior art touch-sensing systems are generally incapable of detecting more than a point of contact and are unable to detect the shape of an object proximate to or touching the display surface. Even capacitive, resistive, or acoustic surface wave sensing display surfaces that can detect multiple points of contact are unable to image objects placed on a display surface to any reasonable degree of resolution (detail), and most require expensive or relatively complicated coding schemes rather than a more desirable simple bar code.
Prior art systems of these types cannot detect patterns on an object or detailed shapes that might be used to identify each object among a plurality of different objects that are placed on a display surface.
Another user interface approach that has been developed in the prior art uses cameras mounted to the side of and above a horizontal display screen to visually capture an image of a user's finger or other objects that are touching the display screen. This multiple-camera mounting configuration is clearly not a compact system that most people would want to use in a residential setting. In addition, the accuracy of this type of multi-camera system in responding to an object that is on or proximate to the display surface depends upon the capability of the software used with the system to visually recognize objects and their location in three-dimensional space. Furthermore, the view of one object by one of the cameras may be blocked by an intervening object. Also, it is difficult to determine whether a finger or object has actually touched the screen, and such a vision sensing system often requires an involved calibration. From an aesthetic viewpoint, objects usable in such a system will not be pleasing to a user, because they will need a code that is most likely visible to the user on top of the object, and thus, the manner in which the object is being detected will be clearly evident to the user.
To address many of the problems inherent in the types of touch-sensitive displays discussed above, a user interface platform was developed in the MIT Media Lab, as reported by Brygg Ullmer and Hiroshi Ishii in “The metaDESK: Models and Prototypes for Tangible User Interfaces,” Proceedings of UIST 10/1997:14-17. The metaDESK includes a near-horizontal graphical surface used to display two-dimensional geographical information. A computer vision system inside the desk unit (i.e., below the graphical surface) includes infrared (IR) lamps, an IR camera, a video camera, a video projector, and mirrors. The mirrors reflect the graphical image projected by the projector onto the underside of the graphical display surface. The IR camera can detect passive objects called “phicons” that are placed on the graphical surface. For example, in response to the IR camera detecting an IR marking applied to the bottom of a “Great Dome phicon,” a map of the MIT campus is displayed on the graphical surface, with the actual location of the Great Dome in the map positioned where the Great Dome phicon is located. Moving the Great Dome phicon over the graphical surface manipulates the displayed map by rotating or translating the map in correspondence to the movement of the phicon by a user.
A similar approach to sensing objects on a display surface is disclosed in several papers published by Jun Rekimoto of Sony Computer Science Laboratory, Inc. in collaboration with others. These papers briefly describe a “HoloWall” and a “HoloTable,” both of which use IR light to detect objects that are proximate to or in contact with a display surface on which a rear-projected image is visible. The rear-projection panel, which is vertical in the HoloWall and horizontal in the HoloTable, is semi-opaque and diffusive, so that objects become more clearly visible as they approach and then contact the panel. The objects thus detected can be a user's fingers or hands, or other objects.
Each of the prior art interactive surfaces that use IR light to detect objects in contact with or proximate to a display surface employs IR light sources that are disposed apart from the surface and which direct IR light toward the surface from the side of the display surface opposite to that on which the objects are placed or to which they are proximate. A problem with this approach is that the IR light illuminating the objects is not as uniform as would be desired. Even if an array of IR light sources is used as a source, the IR light passing through the surface and being reflected back by any object that is proximate to the other side of the surface is substantially different in intensity at different points on the surface. Accordingly, it would be desirable to develop a more uniform source of IR light to illuminate objects placed on or proximate to a display surface, to enable the objects and any desired optical properties of the objects to be detected more effectively, based upon the reflected IR light that is received by an IR responsive camera or other suitable light detector. The same approach should also be usable with other non-visible wavebands of light, such as ultraviolet (UV) light, or even with visible light.
One approach that has been used for uniformly illuminating text on a surface of a sheet of plastic employs edge lighting. For example, emergency exit signs often use a sheet of acrylic plastic that is specially fabricated to conduct light from an edge throughout the sheet using internal reflections and is designed specifically to allow light to escape through the surfaces in a uniform fashion. Any non-opaque object in front of or behind the acrylic sheet is effectively illuminated by a sheet light source. The visible light source has typically been either a fluorescent tube or surface-mounted light emitting diodes (LEDs). The light source is placed at one edge of the acrylic sheet. To provide optimum light transfer from the light source into the acrylic sheet, the light source should be in contact with the edge of the sheet. One problem with the prior art approach of using surface-mounted LEDs is that it is difficult to ensure that all of the LEDs that are surface mounted on a printed circuit board strip remain in direct contact with the edge of the acrylic sheet. At least some of the surface-mounted LEDs can easily fail to contact the edge of the sheet, resulting in non-uniform lighting of the text applied to the surface of the acrylic sheet. While contact between a fluorescent tube and the edge of the acrylic sheet is easier to maintain, fluorescent tubes emit visible light, and none are available that emit only IR light, without any visible light. Accordingly, it would be desirable to provide an edge lighting system and method that ensures each of the light sources that emit a desired waveband of light remains in contact with the edge of the acrylic sheet.
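The internal reflection on which such edge lighting relies follows directly from Snell's law: light injected at the edge remains trapped inside the sheet whenever it strikes a face at more than the critical angle measured from the surface normal. As a rough illustrative sketch (the refractive index of about 1.49 is a commonly published value for acrylic/PMMA, not a figure taken from this document):

```python
import math

def critical_angle_deg(n_core: float, n_outside: float = 1.0) -> float:
    """Critical angle, in degrees from the surface normal, beyond which
    light traveling inside a medium of refractive index n_core undergoes
    total internal reflection at an interface with a medium of index
    n_outside (air by default): theta_c = arcsin(n_outside / n_core)."""
    return math.degrees(math.asin(n_outside / n_core))

# Typical published refractive index for acrylic (PMMA); an assumed
# illustrative value, not a parameter specified in the text above.
theta_c = critical_angle_deg(1.49)
print(round(theta_c, 1))  # about 42.2 degrees
```

Rays striking the sheet's faces at more than roughly 42 degrees from the normal are conducted along the sheet, which is why a deliberately roughened or specially fabricated surface is needed to let light escape uniformly, as the exit-sign example describes.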