Many image processing techniques rely on the detection of features in an image as a preliminary operation. For example, it may be desired to recognize an object in an image, and the recognition is typically based on the detected features. As another example, it may be desired to stitch together two images taken by different cameras offering different perspectives of a scene. In this case, features common to both images may be identified and matched to each other to enable the images to be correctly registered relative to one another based on the common feature points. This typically involves some form of scaling, rotation, and alignment of one or both images based on the matched features prior to forming the stitched composite image.
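As an illustrative sketch only (not taken from this description), the registration step above can be shown concretely: given feature points already matched between two images, a least-squares similarity transform (scale, rotation, translation) can be estimated, for example with the well-known Umeyama method. The function name and setup here are assumptions for illustration.

```python
import numpy as np

def estimate_similarity(src, dst):
    """Estimate scale s, rotation R, and translation t mapping the
    matched feature points src onto dst in the least-squares sense
    (Umeyama-style closed form). src, dst: (N, 2) arrays of matched
    points. Illustrative sketch; names are hypothetical."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    n = len(src)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    # Cross-covariance of the centered point sets.
    cov = dst_c.T @ src_c / n
    U, S, Vt = np.linalg.svd(cov)
    # Guard against a reflection in the recovered rotation.
    d = np.sign(np.linalg.det(U @ Vt))
    D = np.diag([1.0, d])
    R = U @ D @ Vt
    var_s = (src_c ** 2).sum() / n
    s = np.trace(np.diag(S) @ D) / var_s
    t = mu_d - s * (R @ mu_s)
    return s, R, t

# Usage: recover a known transform from noiseless matched points.
rng = np.random.default_rng(0)
pts = rng.random((20, 2)) * 10.0
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
dst = 2.0 * pts @ R_true.T + np.array([5.0, -3.0])
s, R, t = estimate_similarity(pts, dst)
```

In practice the matched pairs come from a feature detector and matcher, and a robust estimator (e.g., RANSAC) is typically wrapped around this solve to reject mismatches before the images are composited.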
Existing techniques for detection (and matching) of image features typically produce acceptable results for conventional undistorted images. In some applications, however, it is useful to work with images provided by omnidirectional cameras or cameras with ultra-wide (e.g., fisheye) lenses. These images are generally subject to relatively high levels of visual distortion in order to capture the wide viewing angles required. Unfortunately, existing feature detection techniques do not work well with such distorted images.
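The distortion described above can be quantified with a simple comparison, offered here as a hedged illustration rather than as part of this description: a rectilinear (pinhole) projection places a ray at field angle theta at image-plane radius r = f*tan(theta), whereas a common equidistant fisheye model places it at r = f*theta. The equidistant model is an assumed example; fisheye lenses vary.

```python
import numpy as np

# Hedged illustration (model choice is an assumption): compare the
# image-plane radius of a pinhole projection, r = f * tan(theta),
# with an equidistant fisheye projection, r = f * theta. The growing
# gap at wide field angles is the radial distortion that degrades
# standard feature detection.
f = 1.0
rows = []
for deg in (10, 30, 60, 80):
    theta = np.deg2rad(deg)
    r_pinhole = f * np.tan(theta)   # rectilinear radius
    r_fisheye = f * theta           # equidistant fisheye radius
    rows.append((deg, r_pinhole, r_fisheye))
    print(f"{deg:>3} deg: pinhole r = {r_pinhole:.3f}, "
          f"fisheye r = {r_fisheye:.3f}")
```

At 10 degrees the two radii nearly agree, so a local image patch looks similar under either projection; near 80 degrees the pinhole radius diverges while the fisheye radius stays bounded, which is why feature descriptors computed on fixed rectangular patches lose accuracy toward the periphery of wide-angle images.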
Although the following Detailed Description will proceed with reference being made to illustrative embodiments, many alternatives, modifications, and variations thereof will be apparent to those skilled in the art.