The advent of computing technology in general, and image-capturing technology in particular, has allowed numerous images to be captured and analyzed for different applications. One exemplary application is image matching, wherein an image is analyzed to ascertain whether it corresponds to one of a plurality of previously captured images. For instance, image matching can be used in connection with facial recognition, amongst other applications.
A considerable amount of research has been undertaken in the field of image matching. Conventionally, algorithms that perform image matching are designed to be as invariant as possible. That is, if a first image is compared to a second image, wherein both images are of the same scene but under different conditions (lighting, angle, distance, etc.), the image matching algorithm is designed to determine that the two images are of the same scene. These image matching algorithms are typically designed to be invariant with respect to a variety of factors such as scale, orientation and lighting.
Many currently used image matching systems/methods are based on features that are extracted from images, wherein a feature is representative of a relatively small portion of a captured image. For instance, a feature may be an edge of a table in an image or some other suitable feature that can be extracted by conventional feature extraction algorithms. Often, a feature extraction algorithm analyzes 6×6 matrices of pixels, scanning the image with windows of that size. Some feature extraction algorithms analyze intensity gradients in the defined window of pixels and, where a strong intensity gradient is located, a feature is deemed to have been found.
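The window-based gradient test described above can be sketched as follows. This is an illustrative sketch only: the non-overlapping stride, the gradient-magnitude mean, and the threshold value are assumptions for the example, not parameters of any particular extraction algorithm.

```python
import numpy as np

def detect_features(image, window=6, threshold=10.0):
    """Scan `image` with non-overlapping `window` x `window` patches and
    report the top-left corners of patches whose mean intensity-gradient
    magnitude exceeds `threshold` (illustrative values, not prescribed
    by any specific algorithm)."""
    # Finite-difference gradients along rows (axis 0) and columns (axis 1).
    gy, gx = np.gradient(image.astype(float))
    magnitude = np.hypot(gx, gy)

    corners = []
    h, w = image.shape
    for y in range(0, h - window + 1, window):
        for x in range(0, w - window + 1, window):
            patch = magnitude[y:y + window, x:x + window]
            # A strong gradient response within the window is deemed a feature.
            if patch.mean() > threshold:
                corners.append((y, x))
    return corners
```

For example, a 12×12 image containing a sharp vertical edge yields a strong gradient response in every window that straddles or abuts the edge, while a uniform image yields no features at all.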
Once a plurality of features has been extracted from an image (e.g., on the order of 200 features, each often described using 256 bytes or more), the extracted features are compared with features extracted from other images to ascertain whether there is a correlation between the image and another image. Again, the comparison algorithms are configured to be invariant with respect to various parameters of the image, such that a first image can be found to correlate with a second image without the features extracted from the first and second images being identical matches. While several advances have been made in the field of image matching using feature comparison techniques, it is to be understood that comparing features can be computationally expensive, particularly if features extracted from an image are compared to features extracted from images in a relatively large image set.
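A minimal sketch of such a feature comparison is shown below, assuming descriptors are fixed-length float vectors and using brute-force nearest-neighbor search under Euclidean distance; the function name and distance threshold are illustrative assumptions. The cost of O(len(query) × len(candidates)) distance computations per image pair illustrates why matching against a large image set is expensive.

```python
import numpy as np

def match_descriptors(query, candidates, max_dist=0.1):
    """Brute-force comparison of one image's feature descriptors against
    another's. Each row of `query` and `candidates` is one descriptor
    vector. A query feature is deemed matched when its nearest candidate
    lies within `max_dist` (an illustrative threshold); this tolerance,
    rather than exact equality, gives a measure of invariance."""
    matches = []
    for qi, q in enumerate(query):
        # Euclidean distance from this query descriptor to every candidate.
        dists = np.linalg.norm(candidates - q, axis=1)
        ci = int(np.argmin(dists))
        if dists[ci] <= max_dist:
            matches.append((qi, ci))
    return matches
```

For instance, descriptors that are small perturbations of stored ones still match their originals, while unrelated descriptors fall outside the distance threshold and are rejected.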