1. Field of the Invention
The present invention relates to a position estimation apparatus for estimating the position of a mobile object having a sensor such as a camera mounted thereon, as well as relates to a position estimation method to be adopted in the position estimation apparatus and a program recording medium used for recording a position estimation program implementing the position estimation method.
2. Description of the Related Art
In a mobile object having a group of sensors of two types mounted thereon, i.e., odometry sensors such as an acceleration sensor and a velocity sensor, and observation sensors such as a camera and a distance sensor, the position and posture of the mobile object itself are estimated by making use of information provided by the group of sensors. Examples of the mobile object are a self-advancing or autonomous robot and a car. A mobile object needs to explore movement routes while avoiding obstacles in its environment. In this case, the mobile object measures three-dimensional information of the surrounding environment, for example by making use of stereo vision. Then, the mobile object obtains environment information from the three-dimensional information. Finally, the mobile object estimates its own posture and position in the environment by making use of the group of sensors.
For example, Japanese Patent Laid-open No. 2006-11880 (hereinafter referred to as Patent Document 1) discloses a mobile robot capable of creating a three-dimensional map representing occupancy states of three-dimensional grids on the basis of external states detected by an external-state detection means, changing information extracted from a map of surface heights relative to a reference on the basis of information provided by the three-dimensional map, and controlling movement operations by taking the changed map information as an environment map and by autonomously determining a movement route. In the case of a car, the position of the car is estimated by the car itself when the GPS or the like is not available, for example when the car is running through a tunnel.
FIG. 8 is a diagram showing a robot 40 having an odometry sensor 41 and a vision sensor 42 mounted thereon in a fixed (unchanging) environment. The environment is fixed and the positions of feature points 43 to 47 are known. A personal computer 48 adopts an algorithm for estimating the position of the robot 40 on the basis of the positions of the feature points 43 to 47 by making use of a prediction filter such as the Kalman filter or the like. To put it in detail, sensor values generated by the odometry sensor 41 and the vision sensor 42 are typically transmitted to the personal computer 48, for example by making use of radio signals. The personal computer 48 receives the sensor values and estimates the position of the robot 40 on the basis of the known positions of the feature points 43 to 47 by making use of a prediction filter such as the Kalman filter or the like.
For example, “An Introduction to the Kalman Filter,” Greg Welch, Gary Bishop, Technical Report 95-041, Department of Computer Science, University of North Carolina (1995) (hereinafter referred to as Non-Patent Document 1) discloses the Kalman filter. The Kalman filter is a filter used for computing a state, which cannot be observed directly, from indirect information in order to give an optimum estimated value. An example of the computed state is the present state of a car, and the present state of the car is represented by the position, velocity and acceleration of the car. The state is expressed in terms of quantities based on a probability distribution model. As the probability distribution model, for example, a normal distribution can be adopted.
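Purely as an illustration of the principle described in Non-Patent Document 1 (and not as part of the disclosed apparatus), a minimal one-dimensional Kalman filter that estimates a scalar position from noisy indirect measurements may be sketched as follows; the function name, the constant-position model, and the noise variances are assumptions made here for illustration.

```python
# Minimal 1-D Kalman filter sketch: estimate a scalar position from
# noisy measurements. The model and noise values are illustrative only.

def kalman_step(x_est, p_est, z, q=0.01, r=0.5):
    """One predict/update cycle for an assumed constant-position model.
    x_est, p_est: prior state estimate and its variance
    z: new (indirect, noisy) measurement
    q, r: process and measurement noise variances (assumed values)
    """
    # Predict: with a constant-position model the state carries over,
    # while the process noise q inflates the uncertainty
    x_pred = x_est
    p_pred = p_est + q
    # Update: blend prediction and measurement by the Kalman gain
    k = p_pred / (p_pred + r)           # Kalman gain in [0, 1]
    x_new = x_pred + k * (z - x_pred)   # corrected estimate
    p_new = (1.0 - k) * p_pred          # reduced uncertainty
    return x_new, p_new

x, p = 0.0, 1.0                         # initial guess and its variance
for z in [1.2, 0.9, 1.1, 1.0]:          # noisy observations near 1.0
    x, p = kalman_step(x, p, z)
```

After the four updates, the estimate moves toward the true value near 1.0 and its variance shrinks below the initial uncertainty, which is the "optimum estimated value" behavior described above.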
FIG. 9 shows a flowchart representing the algorithm of the Kalman filter. As shown in the figure, the flowchart begins with an initialization step S0 followed by a state prediction step S1. The prediction step S1 is followed by an environment observation step S2, which is succeeded by a position/posture updating step S3. The state prediction step S1 is the step of predicting the state of a robot for the present frame from values generated by the odometry sensor 41 including an acceleration sensor and a velocity sensor as well as a result of estimating the state (or the position) of a system (that is, the robot) for the preceding frame. Since the values output by the odometry sensor 41 and the robot position derived from an image taken by a camera for the preceding frame are both past information, however, the robot position (or the robot state) computed from them is no more than a predicted value. The environment observation step S2 is the step of measuring the environment of the mobile object by making use of the vision sensor 42, which is other than the odometry sensor 41 used at the state prediction step S1. An example of the vision sensor 42 is the camera cited above. To put it concretely, the image of the environment is taken by making use of the vision sensor 42 mounted on the robot 40 and, from the taken image, the feature points 43 to 47 are observed. The feature points 43 to 47 are marks that can be recognized in image processing. Examples of the feature points 43 to 47 are eyemarks and landmarks. The position/posture updating step S3 is the step of updating and correcting the posture and/or position of the robot 40 on the basis of observation results obtained at the step S2 as results of observing the feature points 43 to 47.
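Purely as an illustrative sketch (not the disclosed implementation), the loop of steps S0 to S3 may be expressed as follows; the sensor interfaces, the noise values, and the simplification that each observation directly measures position (rather than being a feature-point observation processed into a position) are assumptions made here for brevity.

```python
# Hypothetical sketch of the loop of FIG. 9: initialization (S0),
# state prediction from odometry (S1), environment observation (S2),
# and position/posture updating from the observations (S3).
# All interfaces and constants are illustrative stand-ins.

def run_filter(odometry_frames, camera_frames):
    state = {"position": 0.0, "variance": 1.0}    # S0: initialization
    for odo, image in zip(odometry_frames, camera_frames):
        # S1: predict the present state from the odometry value and the
        # preceding frame's estimate (past information only)
        state["position"] += odo                  # integrated displacement stand-in
        state["variance"] += 0.05                 # process noise (assumed)
        # S2: observe feature points in the present camera frame;
        # here each observation is simplified to a direct position reading
        observed = [z for z in image if z is not None]
        # S3: correct the predicted position toward each observation
        for z in observed:
            k = state["variance"] / (state["variance"] + 0.2)  # gain; 0.2 is assumed noise
            state["position"] += k * (z - state["position"])
            state["variance"] *= (1.0 - k)
    return state

result = run_filter([1.0, 1.0], [[1.05], [2.1]])  # two frames of odometry and observations
```

Note that the prediction of step S1 alone would drift with odometry error; it is the correction of step S3, driven by the observed feature points, that keeps the estimate anchored to the environment.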
A method of updating the state of the robot 40 is the same as a general extended Kalman filtering method described in Non-Patent Document 1 as a non-linear technique. In this case, the state of the robot 40 is the position and/or posture of the robot 40.
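Because the observation of a feature point (for example, a measured distance from the robot to the point) is a non-linear function of the robot position, the extended Kalman filter linearizes the observation model around the current estimate by means of a Jacobian. As an illustration only (not the disclosed method), an update of a two-dimensional position from one range measurement to a known feature point may be sketched as follows; the function name, the 2x2 covariance representation, and the noise value are assumptions.

```python
import math

# Illustrative extended-Kalman-filter update: correct a 2-D position
# estimate (x, y) using a range measurement z to a known feature point.
# Names and noise values are assumptions for illustration.

def ekf_range_update(x, y, P, landmark, z, r=0.1):
    """One EKF correction step. P: 2x2 covariance as nested lists;
    landmark: known (lx, ly); z: measured range; r: measurement noise."""
    lx, ly = landmark
    dx, dy = x - lx, y - ly
    pred = math.hypot(dx, dy)            # h(x): predicted range (non-linear)
    # Jacobian H = dh/d(x, y): the linearization used by the EKF
    H = [dx / pred, dy / pred]
    # P H^T (2x1) and innovation covariance s = H P H^T + r (scalar)
    PHt = [P[0][0] * H[0] + P[0][1] * H[1],
           P[1][0] * H[0] + P[1][1] * H[1]]
    s = H[0] * PHt[0] + H[1] * PHt[1] + r
    K = [PHt[0] / s, PHt[1] / s]         # Kalman gain (2x1)
    innov = z - pred                     # measurement residual
    x_new, y_new = x + K[0] * innov, y + K[1] * innov
    # Covariance update P' = (I - K H) P, written out element-wise
    P_new = [[P[i][j] - K[i] * (H[0] * P[0][j] + H[1] * P[1][j])
              for j in range(2)] for i in range(2)]
    return x_new, y_new, P_new
```

In the setting of FIG. 8, one such correction would be applied per observed feature point 43 to 47, each pulling the predicted position of the robot 40 toward consistency with the known landmark positions.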