Technical Field
Implementations and embodiments of the disclosure relate to determining the movement of an image sensor or apparatus (e.g., a video camera) between successive video images (frames) captured by that apparatus, for example an apparatus incorporated in a platform such as a digital tablet or a mobile cellular telephone. In particular, they relate to estimating the ego-motion of the apparatus (i.e., the 3D motion of the apparatus in an environment, and accordingly the ego-motion of the platform incorporating it) in a SLAM-type algorithm.
Description of the Related Art
Simultaneous Localization And Mapping (SLAM) comprises simultaneously estimating the ego-motion of the apparatus (and accordingly of the platform) and mapping its surrounding scene.
The ego-motion of an apparatus between two captured images typically comprises a 3D rotation of the apparatus and a variation of its position in 3D space.
Conventional SLAM-type algorithms attempt to solve for the orientation (3D rotation) and the position simultaneously.
However, such processing is complex because of the lack of texture and the presence of outliers in the images. These difficulties may be mitigated by using inertial measurements, which are, however, available for orientation but not for spatial position.
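The benefit of an inertially supplied orientation can be illustrated with a minimal sketch, not taken from the disclosure: once the 3D rotation between two frames is assumed known (e.g., integrated from a gyroscope), the remaining position variation reduces to a simple least-squares estimate over matched 3D points. The function name and the rigid-motion model q = R·p + t below are illustrative assumptions.

```python
import numpy as np

def estimate_translation(points_prev, points_curr, R_imu):
    """Given a rotation R_imu (e.g., from inertial measurements) and
    matched 3D points observed in two successive frames, recover the
    translation t of the rigid-motion model  q = R_imu @ p + t
    as the mean residual, which is the least-squares solution."""
    residuals = points_curr - points_prev @ R_imu.T  # q - R p for each match
    return residuals.mean(axis=0)                    # least-squares estimate of t

# Synthetic check: rotation of 10 degrees about z, known translation.
theta = np.deg2rad(10.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t_true = np.array([0.1, -0.2, 0.3])
p = np.random.default_rng(0).normal(size=(50, 3))  # points in frame 1
q = p @ R.T + t_true                               # same points in frame 2

t_est = estimate_translation(p, q, R)
print(np.allclose(t_est, t_true))  # True for noise-free matches
```

With noisy matches the mean residual remains the least-squares estimate of t, which is why fixing the rotation from inertial data turns a difficult joint problem into a well-conditioned linear one.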