It is possible to visualize an environment by overlaying images received over a network on top of images captured by a camera, together with any additional information which may also be received from the network (such as which music is being played on the devices). This type of enhanced view is frequently referred to as "augmented reality" (AR).
An AR application needs to identify the device about which a user requests information. Sekai-camera, one example of an AR application, identifies a target device based on the location information of the mobile device which has captured the target device. The location information is computed using the GPS, motion and angle sensors in the mobile device. However, GPS does not provide sufficient accuracy, and manually registering the location of each device burdens the end user, so this solution is unlikely to be accepted. ARToolKit, another example of an AR application, identifies a target device by utilizing markers to be captured by the camera. However, the end user needs to place markers at various locations, and thus the end user is unlikely to accept this solution either.
US2010135527 proposes an AR application which identifies a device by performing image matching. According to this application, a mobile internet device compares a captured device image against a plurality of candidate device images stored in an image database to identify the captured device. This solution is superior to Sekai-camera and ARToolKit because it requires neither location information nor markers. However, when the image database stores a large number of device images, the processing time of the image matching increases and the accuracy of the image matching degrades. Therefore, it is desirable to improve the processing time and accuracy of image matching.
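To illustrate why matching cost grows with the size of the image database, consider the following minimal sketch. It is a hypothetical toy example, not the method of US2010135527: images are represented as flat grayscale pixel lists and compared by sum of squared differences, whereas a real AR system would use robust feature descriptors. The device names and pixel data are invented for illustration.

```python
def match_device(captured, database):
    """Return the name of the candidate image closest to `captured`
    by sum of squared differences (SSD). The loop visits every
    candidate, so cost is O(N * P) for N candidates of P pixels each:
    matching time grows linearly with the database size."""
    best_name, best_score = None, float("inf")
    for name, candidate in database.items():
        score = sum((a - b) ** 2 for a, b in zip(captured, candidate))
        if score < best_score:
            best_name, best_score = name, score
    return best_name

# Toy database of 2x2 "device images" (hypothetical data).
db = {
    "speaker": [10, 10, 200, 200],
    "lamp":    [250, 250, 250, 250],
    "tv":      [0, 0, 0, 0],
}

print(match_device([12, 9, 198, 205], db))  # closest to "speaker"
```

The exhaustive scan above also hints at the accuracy problem: as more candidates are added, the chance that some unrelated image scores close to the true match rises, which is why the processing time and accuracy of image matching both merit improvement.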