1. Field
The present disclosure relates to a mobile robot and a method for controlling the same, which analyze images captured by a camera to track the position of the mobile robot.
2. Description of the Related Art
In general, mobile robots perform tasks while autonomously moving across a desired area without user intervention. With the development of sensors and controllers, mobile robots have recently been utilized in various fields; examples include cleaning robots, telepresence robots, and security robots.
For autonomous movement, location awareness is essential: the mobile robot must be able to determine its own position. Visual odometry is one way for the mobile robot to recognize its position.
Visual odometry is the process of determining the position and orientation of the mobile robot by analyzing the associated camera images, and it has been used in a wide variety of robotic applications.
Visual odometry uses consecutive camera images to estimate the distance traveled by the mobile robot. In other words, camera-based visual odometry may recognize key points in the images captured by a camera, and trace the position of the camera or of objects through the relations between the key points across a sequence of captured images.
Herein, the key points of the images are called features, and tracing their positions is called tracking. The process by which the mobile robot recognizes its position using visual odometry is called Visual Simultaneous Localization and Mapping (Visual SLAM).
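The tracking step described above can be illustrated with a minimal toy sketch: features detected in one frame are matched to features in the next frame, and the average displacement of the matched features approximates the motion of the camera in the image plane. This is an illustrative assumption-laden simplification, not the disclosed method; real systems use feature detectors (e.g., FAST or ORB) and robust motion estimators.

```python
# Toy sketch of feature tracking for visual odometry (illustrative only).
# All point coordinates and the matching threshold below are hypothetical.

def match_features(prev_feats, curr_feats, max_dist=5.0):
    """Greedily match each previous feature to its nearest current feature."""
    matches = []
    used = set()
    for i, (px, py) in enumerate(prev_feats):
        best_j, best_d = None, max_dist
        for j, (cx, cy) in enumerate(curr_feats):
            if j in used:
                continue
            d = ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5
            if d < best_d:
                best_j, best_d = j, d
        if best_j is not None:
            matches.append((i, best_j))
            used.add(best_j)
    return matches

def estimate_translation(prev_feats, curr_feats, matches):
    """Average displacement of matched features approximates image motion."""
    dx = sum(curr_feats[j][0] - prev_feats[i][0] for i, j in matches) / len(matches)
    dy = sum(curr_feats[j][1] - prev_feats[i][1] for i, j in matches) / len(matches)
    return dx, dy

prev_feats = [(10.0, 20.0), (40.0, 55.0), (80.0, 15.0)]
# The same scene points one frame later, shifted by (2, -1) pixels.
curr_feats = [(12.0, 19.0), (42.0, 54.0), (82.0, 14.0)]
matches = match_features(prev_feats, curr_feats)
print(estimate_translation(prev_feats, curr_feats, matches))  # → (2.0, -1.0)
```

Accumulating such frame-to-frame estimates over time yields the trajectory of the camera, which is the essence of the odometry computation.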
Meanwhile, in the process of acquiring images, the wheeled mobile robot suffers from precision problems because its wheels tend to slip and slide on the floor, or wobble while traveling on uneven floors. In addition, the cameras also wobble because the wheeled mobile robot travels with non-standard locomotion.
In this case, the wobble of the cameras of the mobile robot may introduce motion blur into the images taken by the cameras.
When the images are degraded by motion blur, the mobile robot has difficulty locating the features, which degrades the precision of tracking and of visual SLAM.
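One common heuristic for detecting such degraded frames is the variance of the image Laplacian: sharp images have strong local intensity changes and thus a high Laplacian variance, while blurred images do not. The sketch below is a hedged, pure-Python illustration of this heuristic (not part of the disclosure); the test pattern and the box-blur filter are assumptions chosen for demonstration.

```python
# Minimal sketch of blur detection via the variance of the Laplacian.
# Images are represented as 2-D lists of grayscale values (illustrative).

def laplacian_variance(img):
    """Higher variance of the 4-neighbour Laplacian indicates a sharper image."""
    h, w = len(img), len(img[0])
    vals = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y - 1][x] + img[y + 1][x]
                   + img[y][x - 1] + img[y][x + 1]
                   - 4 * img[y][x])
            vals.append(lap)
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def box_blur(img):
    """Simple 3x3 mean filter, standing in for motion blur."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(img[y + dy][x + dx]
                            for dy in (-1, 0, 1)
                            for dx in (-1, 0, 1)) / 9.0
    return out

sharp = [[255 if (x + y) % 2 else 0 for x in range(8)] for y in range(8)]
blurred = box_blur(sharp)
print(laplacian_variance(sharp) > laplacian_variance(blurred))  # → True
```

A system could, for instance, discard or down-weight frames whose Laplacian variance falls below a threshold before running feature tracking, mitigating the loss of precision described above.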