1. Field of the Invention
The present invention relates to an autonomous vehicle and to an object recognizing method in the autonomous vehicle.
2. Description of the Related Art
An autonomous vehicle that autonomously moves in an environment needs to recognize the position of an object, the direction in which the object is located, and the like, in order to interact with objects existing in the surrounding environment. For example, an unmanned forklift that automatically transports a target object, such as a pallet, includes a navigator, a camera, and an object recognizing device. The navigator performs travel control along a path. The camera captures images of the periphery of the vehicle. The object recognizing device recognizes the position of the pallet from image data captured by the camera. The unmanned forklift travels to a target position under the travel control of the navigator, recognizes the position and posture of the pallet based on the image data captured by the camera, and executes processes such as transportation of the pallet.
An object recognizing device adapted to recognize an object from image data captured by a camera is disclosed, for example, in Japanese Patent Application Laid-Open No. H09-218014. The object recognizing device extracts feature points from the image data and compares degrees of similarity between the feature points and model data of the object to recognize the position of the object. If all the feature points in the image data are compared with the model data, high-speed processing cannot be carried out. If the number of feature points to be compared is reduced, on the other hand, the recognition accuracy deteriorates.
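The trade-off described above can be illustrated with a minimal sketch, assuming (hypothetically) that feature points and model data are represented as 2-D coordinates and that similarity is measured by Euclidean distance; the actual disclosed device may use different features and similarity measures.

```python
import math

def match_features(image_features, model_features, threshold=1.0):
    """For each model feature, find the nearest image feature.

    Comparing every model feature against every image feature is
    O(N*M): accurate, but slow when the image contains many
    feature points. Returns (model_index, image_index) pairs whose
    distance falls below the threshold.
    """
    matches = []
    for mi, mf in enumerate(model_features):
        best_ii, best_d = None, float("inf")
        for ii, imf in enumerate(image_features):
            d = math.dist(mf, imf)
            if d < best_d:
                best_ii, best_d = ii, d
        if best_d <= threshold:
            matches.append((mi, best_ii))
    return matches

# Hypothetical data: two model points of a pallet edge, and three
# feature points detected in the image (one is background clutter).
model = [(0.0, 0.0), (1.0, 0.0)]
image = [(5.0, 5.0), (0.1, 0.0), (1.1, 0.1)]

full = match_features(image, model)        # both model points matched
# Reducing the candidate set (here: only the first image feature)
# speeds up the search but misses both matches -- the accuracy
# deterioration noted in the text.
reduced = match_features(image[:1], model)
```

Reducing the candidate set shrinks the inner loop, which is why fewer compared feature points yield faster but less accurate recognition.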
In particular, an autonomous vehicle such as an unmanned forklift receives data indicating a pallet position and a travel path from the current position of the vehicle to the pallet position, and its travel is controlled based on the received data. The pallet position is, for example, a section of a warehouse where the pallet is to be placed, and does not include information on the posture in which the pallet is placed. Furthermore, there may be a deviation between the travel control of the navigator and the actual travel. Thus, the range for extracting feature points needs to be widened to correctly recognize the position of the object in the image data, and as a result, high-speed processing becomes difficult.
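The effect of travel-control deviation on the extraction range can be sketched as follows; the function name, pixel values, and the assumption that the window grows by a fixed uncertainty margin are illustrative, not taken from the disclosure.

```python
def extraction_window(expected_center, half_size, position_uncertainty):
    """Widen the feature-extraction window by the travel-control
    uncertainty so that the pallet still falls inside the searched
    region despite deviation from the planned path.

    All values are in pixels. The returned tuple is
    (left, top, right, bottom); a larger uncertainty means more
    feature points to extract and compare, hence slower recognition.
    """
    cx, cy = expected_center
    hw = half_size + position_uncertainty
    return (cx - hw, cy - hw, cx + hw, cy + hw)

# Hypothetical numbers: pallet expected at image center (320, 240),
# nominal half-window of 50 px, plus 30 px margin for drift between
# the navigator's travel control and the actual travel.
window = extraction_window((320, 240), 50, 30)
```

The window area grows quadratically with the added margin, which is why widening the extraction range conflicts with high-speed processing.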