Conventionally, to realize a self-driving system or a driving-assisting system of a vehicle, information on the external world around an own vehicle is recognized, and drive control according to the peripheral situation of the own vehicle is performed. The recognized information includes not only solid objects such as obstacles and moving objects around the own vehicle, but also road surface markings (road paint) such as lane markers and stop lines on a road, and solid objects such as traffic lights and signs existing around the road. To recognize the peripheral situation with in-vehicle sensors and perform control, it is necessary to classify the plurality of obstacles and moving objects around the own vehicle into types such as vehicles, bicycles, and pedestrians, and to detect information such as their positions and speeds. In addition, while the own vehicle drives, it is necessary to determine the meanings of road paint such as the lane markers and the stop lines, and the meanings of the signs on the road. As in-vehicle sensors for detecting information on the external world around the own vehicle, image recognition units using cameras are effective. However, when the external environment is recognized using such image recognition units, their detection performance changes with the environment around the own vehicle, such as the weather (rain or fog) and the time of day (night, twilight, or backlight). Specifically, erroneous detection or non-detection by the image recognition units increases for the moving objects, the obstacles, and the road paint around the own vehicle.
When the performance of the image recognition units is degraded, the units cannot reliably recognize whether the moving objects, the obstacles, and the road paint are present around the own vehicle. For this reason, it may be difficult to correctly determine a status of the erroneous detection or the non-detection from the image recognition units alone.
For the above problem, PTL 1 describes an example of a road information recognition device that finally determines the shape of the road ahead of the own vehicle from a curve shape acquired from a navigation map and a curve shape detected by the image recognition units.
The device of PTL 1 includes a first road information determination unit that obtains first road information on the basis of map information held by a navigation device, a second road information detection unit that detects the road situation during driving and obtains second road information on the basis of that situation, and a road information determination unit that obtains final road information on the basis of the first road information and the second road information. Specifically, a curve ahead is detected from the navigation map as the first road information, the same curve is detected by in-vehicle cameras as the second road information, and the curve ahead is finally determined from both detection results.
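The two-source determination described above can be sketched in code. Note that this is a hypothetical illustration only: the `RoadInfo` structure, the confidence values, and the confidence-weighted averaging rule are all assumptions for this sketch, not the actual determination criteria used in PTL 1.

```python
from dataclasses import dataclass

@dataclass
class RoadInfo:
    curvature: float   # road curvature ahead, in 1/m; 0.0 means a straight road
    confidence: float  # 0.0 (unreliable) .. 1.0 (fully trusted)

def first_road_info_from_map(map_curvature: float) -> RoadInfo:
    """Map-derived curve shape: geometry is available regardless of weather,
    but the map may be outdated, so a fixed moderate confidence is assumed."""
    return RoadInfo(curvature=map_curvature, confidence=0.7)

def second_road_info_from_camera(cam_curvature: float, visibility: float) -> RoadInfo:
    """Camera-derived curve shape: confidence is assumed to fall with
    visibility (rain, fog, night, backlight)."""
    return RoadInfo(curvature=cam_curvature,
                    confidence=min(1.0, max(0.0, visibility)))

def determine_final_road_info(first: RoadInfo, second: RoadInfo) -> RoadInfo:
    """Fuse the two estimates by a confidence-weighted average
    (one possible rule for the final road information determination)."""
    total = first.confidence + second.confidence
    if total == 0.0:
        # Neither source is usable; report a zero-confidence result.
        return RoadInfo(curvature=0.0, confidence=0.0)
    curvature = (first.curvature * first.confidence
                 + second.curvature * second.confidence) / total
    return RoadInfo(curvature=curvature,
                    confidence=max(first.confidence, second.confidence))

# Example: in clear weather the camera estimate carries most of the weight.
final = determine_final_road_info(
    first_road_info_from_map(0.010),
    second_road_info_from_camera(0.012, visibility=0.9),
)
```

In poor visibility the camera confidence drops, so the fused curvature falls back toward the map-derived value, which mirrors the motivation of combining the two sources when camera performance is degraded.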