Searching for images similar to a given image among a plurality of images is a widely used feature in today's image retrieval systems. Most traditional and common methods of image retrieval rely on manual image annotation, in which a person manually adds metadata such as keywords to the images so that retrieval can be performed over the metadata. Manual image annotation is, however, time-consuming, laborious and expensive.
Instead, a content-based image retrieval (CBIR) system may be employed. In such a system, feature descriptors are calculated for each image in the system. One problem that often occurs in such CBIR systems is the sheer computational complexity of comparing all the feature descriptors of one image with all the feature descriptors of all the other images. One of the most common feature descriptors, SIFT (Scale-Invariant Feature Transform), can generate more than 20000 descriptors for a single image. Comparing 20000 SIFT vectors of one image with the same number of vectors in every other image in a system of, e.g., 10000 images results in a very high computational load and a slow CBIR system.
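To illustrate where the computational load comes from, the following is a minimal sketch (not the system described here) of brute-force descriptor matching between two images, assuming NumPy and SIFT-like descriptor vectors; the helper name `match_descriptors` and the use of Lowe's ratio test are illustrative assumptions:

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.75):
    """Brute-force match two descriptor sets (illustrative sketch).

    desc_a: (N, D) array, desc_b: (M, D) array. Real SIFT descriptors
    have D = 128; N and M can exceed 20000 per image, which is what
    makes exhaustive comparison so expensive.
    """
    # Pairwise squared Euclidean distances, shape (N, M).
    # This alone costs O(N * M * D) operations per image pair.
    d2 = ((desc_a[:, None, :] - desc_b[None, :, :]) ** 2).sum(axis=2)
    matches = []
    for i, row in enumerate(d2):
        order = np.argsort(row)
        best, second = order[0], order[1]
        # Lowe's ratio test: accept only clearly unambiguous matches.
        if row[best] < (ratio ** 2) * row[second]:
            matches.append((i, int(best)))
    return matches
```

With roughly 20000 descriptors per image, a single image pair already requires on the order of 20000 × 20000 distance computations; repeating this against 10000 database images multiplies that cost by 10000, which is why exhaustive matching does not scale.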
It is thus desired to reduce the computational load and increase the speed of a CBIR system.