Recently, in the field of autonomous driving, three types of sensors are generally used for detecting objects surrounding an autonomous vehicle: a Lidar, a radar, and a camera. Each of these sensors may have its own shortcomings. For example, a shortcoming of the Lidar may be that it is too expensive to be used widely, that of the radar may be its low performance when used alone, and that of the camera may be that it can be unreliable because it is strongly affected by surrounding circumstances such as weather.
Since using each of the sensors separately has the shortcomings described above, a method for sensor fusion is necessary.
However, only superficial information-integration strategies have been studied so far, and methods for substantial sensor fusion have not been studied much.
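As a point of contrast with the substantial fusion this document calls for, a simple information-integration strategy can be sketched as inverse-variance weighting, where independent estimates from each sensor are averaged so that more reliable sensors count for more. This is a minimal illustrative sketch, not the method of this document; the sensor readings and variance values below are hypothetical.

```python
def fuse_estimates(estimates):
    """Fuse (value, variance) pairs by inverse-variance weighting.

    Each pair is an independent estimate of the same quantity; the
    fused estimate weights each sensor by the inverse of its variance,
    so the fused variance is smaller than any single sensor's.
    """
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    fused_value = sum(w * val for w, (val, _) in zip(weights, estimates)) / total
    fused_variance = 1.0 / total
    return fused_value, fused_variance


# Hypothetical range readings (meters) to the same object:
readings = [
    (20.0, 0.01),  # Lidar: accurate, so low variance
    (20.5, 0.25),  # radar: noisier when used alone
    (21.0, 1.00),  # camera: least reliable, e.g. in bad weather
]
value, variance = fuse_estimates(readings)
```

Here the fused value stays close to the low-variance Lidar reading, and the fused variance is below that of any single sensor. Such per-estimate weighting treats each sensor's output as a finished detection, which is exactly the kind of superficial integration the passage contrasts with deeper sensor fusion.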