With recent advances in information processing and information communication technology, information devices, including personal computers and portable information terminals, have become ubiquitous in the real world, for example in offices and households. In such environments, it is expected that “ubiquitous computing”, in which devices are connected with one another and desired information can be obtained anytime and anywhere, and augmented reality (AR) systems, in which circumstances in the real world (objects in the real world, the position of a user, etc.) are actively used, will be realized.
The concept of ubiquitous computing is that, no matter where a person is, the available computer environment remains the same. In other words, because ubiquitous computing means “anytime and anywhere”, ultimate ubiquitous computing does not necessarily require information terminals such as computers, PDAs (Personal Digital Assistants), or cellular phones.
With an augmented reality system, it is possible to provide services using real-world information such as the position of the user. In this case, simply by carrying a portable terminal, the user can be assisted in every aspect of daily life: the system draws on the huge amount of information available on networks and presents information corresponding to real-world objects near the user and within the user's field of view. For example, when the user visits a record store in a shopping mall and holds up a camera-equipped portable terminal, recommended newly released records are displayed on the terminal. Furthermore, when the user looks at the signboard of a restaurant, impressions of its dishes are displayed.
When specifying a computer or a peripheral device (i.e., a target such as a user terminal) to which data is to be transferred over a network, or when obtaining information such as the position of the user or information related to a real-world object, it is necessary to know the other party's name (or the ID unique to the device, the network address, the host name, or a resource identifier such as a URL/URI) even when the other party is immediately in front of the user. That is, from the point of view of user operation, computers are coordinated only in an indirect way, and therefore the user operation somewhat lacks intuitiveness.
As techniques for transferring user identification information and for obtaining real-world circumstances, such as the position of the user, without such complicated procedures, real-world computing techniques using visual codes, such as “cybercodes”, and RF tags have been proposed. With these techniques, the user need not consciously access the network; instead, information related to an object can be obtained from the ID of the object, which is gathered automatically.
Here, a “cybercode” is a two-dimensional bar code in mosaic form, and identification information can be provided by representing each cell with a binary white or black level within a code pattern display area in which cells are arranged in an n×m (e.g., 7×7) matrix. The cybercode recognition procedure includes a step of binarizing a captured image, a step of finding a candidate for a guide bar within the binary image, a step of searching for corner cells on the basis of the position and direction of the guide bar, and a step of decoding the image bit-map pattern in response to the detection of the guide bar and the corner cells.
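The final two stages above (binarization and bit-map decoding) can be sketched as follows. This is a minimal illustrative sketch, not the actual cybercode implementation: it assumes the guide bar and corner cells have already been located, so the input is an upright grid of grayscale cell values, and the threshold and grid size are assumed values.

```python
# Toy sketch of the binarize-and-decode stages described above.
# Guide-bar and corner-cell detection are assumed already done, so the
# input is an upright cell grid of grayscale values. Names and the
# threshold are illustrative assumptions, not the real algorithm.

THRESHOLD = 128  # assumed fixed binarization threshold

def binarize(gray_cells):
    """Map each grayscale cell to 1 (black) or 0 (white)."""
    return [[1 if v < THRESHOLD else 0 for v in row] for row in gray_cells]

def decode_id(bit_cells):
    """Read the binary cells row by row into an integer identifier."""
    code = 0
    for row in bit_cells:
        for bit in row:
            code = (code << 1) | bit
    return code

# A 3x3 grid stands in for the 7x7 pattern mentioned in the text.
gray = [[ 30, 200,  30],
        [200,  30, 200],
        [ 30,  30, 200]]
print(decode_id(binarize(gray)))  # 342 == 0b101010110
```

In a real recognizer the threshold would be adaptive and the cell grid would be sampled after correcting for the code's position and orientation, which is precisely what the guide bar and corner cells provide.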
For example, application functions, the device ID, the network address, the host name, the URL, and other object-related information are registered in the cybercode in advance. Then, upon recognizing the cybercode in an image captured by a camera, the computer is able to execute a registered application (for example, “activating mail”), to search for the other party's network address on the basis of the recognized ID in order to connect automatically, or to access resources on the basis of the recognized URL.
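The registration-and-dispatch behavior just described can be modeled as a lookup table from recognized IDs to registered entries. The registry contents, IDs, and function names below are hypothetical illustrations of the pattern, not part of any actual cybercode system.

```python
# Hypothetical registry mapping a recognized code ID to the entry
# registered for it in advance, mirroring the three examples in the
# text. All IDs, addresses, and URLs here are illustrative.

REGISTRY = {
    0x01: ("application", "activate_mail"),
    0x02: ("network_address", "192.0.2.10"),   # RFC 5737 example address
    0x03: ("url", "http://example.com/info"),
}

def handle_recognized_id(code_id):
    """Look up the registered entry and describe the resulting action."""
    kind, value = REGISTRY.get(code_id, ("unknown", None))
    if kind == "application":
        return f"execute registered application: {value}"
    if kind == "network_address":
        return f"connect automatically to {value}"
    if kind == "url":
        return f"access resource at {value}"
    return "no entry registered for this ID"

print(handle_recognized_id(0x01))  # execute registered application: activate_mail
```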
An RF tag is a device containing unique identification information and a readable/writable storage area. The RF tag transmits radio waves corresponding to its identification information and stored information in response to receiving radio waves of a specific frequency, so a reading device can read the tag's identification information and the contents of its storage area. Therefore, by setting the device ID, the network address, or the host name as the identification information of the RF tag and by writing the URL and other object-related information in the storage area in advance, the system is likewise able to execute a registered application (for example, “activating mail”), to search for the other party's network address on the basis of the recognized ID in order to connect automatically, or to access resources on the basis of the recognized URL.
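The essential RF-tag behavior described above can be captured in a tiny model: a tag that answers an interrogation at its specific frequency with its identification information plus the contents of its writable storage area. The class, field names, and frequency are illustrative assumptions.

```python
# Minimal model of the RF-tag behavior described in the text: the tag
# replies to a query at a specific frequency with its identification
# information and its stored data. All names and values are illustrative.

class RFTag:
    def __init__(self, tag_id, frequency_mhz):
        self.tag_id = tag_id              # unique identification information
        self.frequency_mhz = frequency_mhz
        self.storage = {}                 # readable/writable storage area

    def write(self, key, value):
        """Write object-related information (e.g., a URL) in advance."""
        self.storage[key] = value

    def respond(self, query_frequency_mhz):
        """Reply only when interrogated at the tag's specific frequency."""
        if query_frequency_mhz != self.frequency_mhz:
            return None
        return {"id": self.tag_id, "data": dict(self.storage)}

tag = RFTag(tag_id="device-42", frequency_mhz=13.56)
tag.write("url", "http://example.com/object")
print(tag.respond(13.56))
```

Once the reader has the reply, dispatching on the ID and stored URL proceeds exactly as in the cybercode case.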
However, in a case where visible identification information, such as a visual code, is used, the apparent size of the code varies with distance. That is, since the code appears smaller as the distance to the object increases, recognizing a distant object requires a code having a large pattern. In other words, this information transmission technique lacks robustness with respect to distance. For example, in order to recognize a building which is far away, a huge code would need to be attached to the building, which is not practical.
In the case of the RF tag, it is necessary for the user to direct the RF tag to a tag reading device or to bring the RF tag into contact therewith. That is, only an object at a very short distance can be recognized, and distant objects cannot be recognized.
An infrared remote controller is an example of a simple system for transmitting data and commands. In this case, since the receiver is generally formed of a single pixel, it can determine only whether transmission data from a transmitter exists; the photoreceived signal has no spatial resolution, and the direction in which the transmitter exists cannot be detected. Furthermore, since the single pixel receives noise and data mixed together, separating the noise from the data is difficult, and a frequency filter and a wavelength filter become necessary.
Furthermore, a GPS (Global Positioning System) may be used to detect the position of an object. In this case, however, only positional information, composed of the latitude and the longitude, can be obtained, so another means must be included in order to determine the azimuth. Moreover, since radio waves must be received from satellites, GPS is difficult to use in cities and indoors. It also cannot handle a case in which the position of the user differs from the position of the object to which information is linked, as when a building is specified from a distant location.
In addition, Japanese Examined Patent Application Publication No. 2001-208511 describes a configuration in which a light-emitting section for measuring position and a light-emitting section for transmitting data are made to emit light at different wavelengths so as to perform communication. Japanese Examined Patent Application Publication No. 2001-59706 describes a configuration in which the position of a light-emitting source that emits light in synchronization with a synchronization signal is measured. However, these related-art configurations require synchronization between the transmitter and the receiver and wavelength division at the transmitter, which limits the simplification, size reduction, and power-consumption reduction of the system.