Such methods are used in driver assistance systems, for example. Driver assistance systems are directed to assisting the driver of a vehicle during vehicle motion.
Such assistance may be accomplished in the following ways:
- displaying the surroundings in the close range of the vehicle to the driver in order to avoid collisions with obstacles that are not located in the driver's field of view;
- taking over some of the driver's activities in order to enhance riding comfort during vehicle motion;
- monitoring the driver's activities and intervening in the case of a dangerous situation; and/or
- automated driving without requiring a driver on board the vehicle.
The present invention is especially directed to imaging a composite view in a surround view system.
In systems having a plurality of cameras, assembling the individual camera images proves to be a difficult task. Since the images are recorded by sensors at different positions, the differences in perspective result in images that differ from one another. The more distant two sensors are from one another, the greater the shift in perspective, and the more dissimilar the two images are.
Different procedures for assembling a plurality of camera images have been formulated to produce a common view from a plurality of images. These images are typically captured either by a plurality of simultaneously active sensors or by an individual sensor that is active at different positions at successive points in time.
The current state of the art for assembling images from different views is based on a distance estimation that makes it possible to correct the effects of substantial differences among the individual images. The distances may be estimated directly from the image data, or may result from a fusion with further sensor data (for example, from an ultrasound, LiDAR or radar system).
A method for displaying the area surrounding a motor vehicle in a specific view is discussed in German Patent Application DE 102011082881 A1. It provides for using at least one camera of a vehicle camera system to record first image information of the area surrounding a motor vehicle. In this method, spatial information, such as depth information or spatial coordinates, is determined relative to the recorded first image information. Using the spatial information, the first image information is transformed into secondary image information as a function of the desired view. Finally, on the basis of the secondary image information, the area surrounding the motor vehicle is displayed in the desired view.
In surround view systems, each composite view is produced by image-based rendering (IBR), real images being used to provide an assumed projection surface with textures. This projection surface is typically described using distance information provided by the various sensors.
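The texturing step described above can be sketched as follows. The bowl-shaped surface, the simple pinhole camera model, and all numeric values (camera height, focal length, image size) are illustrative assumptions for this sketch, not parameters of any particular system discussed here.

```python
import numpy as np

def bowl_surface(n=32, radius=5.0):
    """Sample points of an assumed bowl-shaped projection surface:
    a flat floor around the vehicle that curves upward at the rim."""
    ang = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    r = np.linspace(0.5, radius, n)
    rr, aa = np.meshgrid(r, ang)
    x = rr * np.cos(aa)
    y = rr * np.sin(aa)
    z = np.maximum(0.0, rr - 0.6 * radius) ** 2  # rim rises with distance
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

def project_to_camera(points, cam_height=10.0, focal=300.0, size=(480, 640)):
    """Pinhole projection of world points into pixel coordinates for a
    hypothetical camera looking straight down from above the vehicle."""
    depth = cam_height - points[:, 2]
    u = focal * points[:, 0] / depth + size[1] / 2.0
    v = focal * points[:, 1] / depth + size[0] / 2.0
    visible = (depth > 0) & (u >= 0) & (u < size[1]) & (v >= 0) & (v < size[0])
    return u, v, visible

def texture_surface(image, points):
    """Assign each surface point the color of the pixel it projects to;
    points outside the camera image stay untextured (zero)."""
    u, v, visible = project_to_camera(points)
    colors = np.zeros((len(points), image.shape[2]))
    colors[visible] = image[v[visible].astype(int), u[visible].astype(int)]
    return colors, visible

# Hypothetical camera image: a horizontal gradient in the red channel.
image = np.zeros((480, 640, 3))
image[:, :, 0] = np.linspace(0.0, 1.0, 640)

points = bowl_surface()
colors, visible = texture_surface(image, points)
```

In a real surround view system this sampling is repeated for every camera and the per-camera textures are blended on the shared surface; the sketch shows only the single-camera projection step.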
It turns out that when the distance information is used to deform the projection surface, the dissimilarity between two images is reduced, since similar points are projected close to one another.
However, dynamically changing the projection surface in this manner often produces sharp edges, which result in an unfavorable visual effect.
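The deformation, and the discontinuity it can introduce, can be sketched on a simplified one-dimensional surface. Here the surface is reduced to one radius per viewing direction; the default radius, the distance values, and the `deform_surface` helper are all hypothetical illustrations, not part of any system described above.

```python
import numpy as np

def deform_surface(default_radius, distances):
    """Pull the projection-surface radius in each viewing direction toward
    the measured obstacle distance, where a measurement is available
    (NaN means no measurement; the default radius is kept there)."""
    radii = np.full(len(distances), default_radius, dtype=float)
    measured = ~np.isnan(distances)
    radii[measured] = distances[measured]
    return radii

# 8 viewing directions; a hypothetical obstacle is detected at 2 m
# in two neighboring directions.
distances = np.array([np.nan, np.nan, 2.0, 2.0,
                      np.nan, np.nan, np.nan, np.nan])
radii = deform_surface(5.0, distances)

# Jump between neighboring directions: the abrupt 5 m -> 2 m step is the
# kind of discontinuity that appears as a sharp edge in the rendered view.
max_jump = np.max(np.abs(np.diff(radii)))
```

The abrupt step between adjacent radii is precisely the discontinuity that shows up as a sharp, visually unfavorable edge when the textured surface is rendered.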