Prior art approaches to egomotion estimation usually involve detection of persistent features across images, such as the scale-invariant feature transform (SIFT) described by D. Lowe in “Distinctive Image Features from Scale-Invariant Keypoints,” IJCV, Vol. 60, No. 2, pp. 91-110, 2004, which is incorporated herein by reference, and the speeded-up robust features (SURF) described by H. Bay, A. Ess, T. Tuytelaars, and L. Van Gool in “Speeded-Up Robust Features,” Proc. of European Conference on Computer Vision, 2006, which is incorporated herein by reference. These prior art approaches are computationally intensive at the detection stage, and the detected features may not persist across images due to variations in illumination, changing weather conditions, or occlusions. Detected features are typically matched with corresponding features in the next image frame, and a motion model is fit to the matches using random sampling techniques such as RANSAC, described by M. Fischler and R. Bolles in “Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography,” Communications of the ACM, Vol. 24, No. 6, pp. 381-395, 1981, which is incorporated herein by reference. Although such random sampling approaches are theoretically guaranteed to robustly estimate a motion model given enough random samples, they often require far too many samples to be computationally tractable in real time on low-power platforms, such as unmanned autonomous vehicles.
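The RANSAC fitting step described above can be illustrated with a minimal sketch. The function name `ransac_translation` and all parameter values here are hypothetical; the sketch assumes the simplest possible motion model (a pure 2-D translation, for which a single sampled match defines a candidate model) and uses only numpy, rather than any particular feature detector:

```python
import numpy as np

def ransac_translation(src, dst, iters=200, tol=2.0, rng=None):
    """Estimate a 2-D translation between matched points with RANSAC.

    src, dst: (N, 2) arrays of corresponding point coordinates.
    Returns the translation refit on the largest consensus (inlier) set.
    """
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        i = rng.integers(len(src))          # one match suffices for a translation model
        t = dst[i] - src[i]                 # candidate motion model from the sample
        err = np.linalg.norm(dst - (src + t), axis=1)
        inliers = err < tol                 # matches consistent with the candidate
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # refit the model on the final consensus set
    return (dst[best_inliers] - src[best_inliers]).mean(axis=0)

# Demo: 80 matches under a (5, -3) shift plus noise, and 20 gross outliers.
rng = np.random.default_rng(0)
src = rng.uniform(0, 100, (100, 2))
dst = src + np.array([5.0, -3.0]) + rng.normal(0, 0.3, (100, 2))
dst[80:] = rng.uniform(0, 100, (20, 2))     # corrupted (mismatched) correspondences
t = ransac_translation(src, dst, rng=1)
print(np.round(t, 1))
```

The sketch also makes the cost argument concrete: richer motion models (affine, homography) need larger minimal samples, and the number of iterations required for a high inlier-hit probability grows rapidly with sample size and outlier fraction, which is why real-time operation on low-power platforms becomes intractable.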
Another “holistic” egomotion method, based on the Fourier-Mellin (F-M) transformation, uses properties of geometric transformations in the spectral domain to estimate a motion model for the scene. However, this approach requires computation of a two-dimensional (2-D) Fourier transform of the whole image, which is a computationally expensive operation, and the generic method is not robust to outliers. While the F-M based approach can be made more robust using techniques such as random sampling or a trimmed-mean approach, these techniques increase the already high computational cost and make real-time operation on a low-power platform infeasible.
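The spectral-domain property underlying the F-M approach can be sketched for the translation case: by the Fourier shift theorem, a spatial shift becomes a linear phase term, so the normalized cross-power spectrum of two frames inverts to a delta peak at the shift offset (phase correlation). The function name `phase_correlation` is hypothetical, and a full F-M pipeline would additionally resample the magnitude spectrum to log-polar coordinates to recover rotation and scale; this sketch covers only the translation component:

```python
import numpy as np

def phase_correlation(a, b):
    """Recover the cyclic (dy, dx) shift between two equally sized images."""
    A = np.fft.fft2(a)
    B = np.fft.fft2(b)
    R = A * np.conj(B)
    R /= np.abs(R) + 1e-12                  # keep only phase; discard magnitude
    corr = np.fft.ifft2(R).real             # delta peak at the shift offset
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = a.shape
    if dy > h // 2:                         # map peak indices to signed shifts
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)

rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, shift=(7, -5), axis=(0, 1))
print(phase_correlation(shifted, img))      # peak at the applied shift (7, -5)
```

Note that the two 2-D FFTs dominate the cost and grow with image size regardless of how little of the scene actually moved, which is the computational burden the passage above refers to; the peak location is also a single global estimate, so moving outlier objects in the scene can corrupt it.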
What is needed is a computationally efficient apparatus and method for egomotion estimation. The embodiments of the present disclosure answer these and other needs.