The invention relates to a vehicle having a surroundings monitoring device which monitors the surroundings of the vehicle and contains an image capture device with at least two cameras that capture images of the surroundings of the vehicle. In a first camera arrangement of the image capture device, a first camera is arranged in the region of a first edge of the vehicle at which a first vehicle side surface and a vehicle front surface or a vehicle rear surface converge, and a second camera is arranged in the region of a second edge of the vehicle, which differs from the first edge, at which a second vehicle side surface, which differs from the first vehicle side surface, and the vehicle front surface or the vehicle rear surface converge. The invention also relates to a method for operating a surroundings monitoring device of a vehicle.
A vehicle having a surroundings monitoring device of this type is known for example from DE 10 2012 014 448 A1. In DE 10 2012 014 448 A1, cameras are arranged in each case in the vicinity of a side door of a driver's cab of a utility vehicle and, as viewed in a longitudinal direction of the vehicle, approximately at the position of the side mirrors. The cameras, supplementing the side mirrors, capture images of objects which are situated to the rear in relation to the side mirrors and which are situated in the surroundings of the two side surfaces of the utility vehicle. The captured images are displayed by an image display device in the driver's cab.
The invention addresses the need to further develop an above-described vehicle having a surroundings monitoring device such that an improved monitoring result can be achieved with the least possible outlay. At the same time, there is a need to provide a method for operating a surroundings monitoring device which satisfies these requirements.
The invention is based on the concept whereby the first camera arrangement is furthermore arranged such that the image capture area of the first camera encompasses at least a part of the surroundings of the first vehicle side surface and at least a part of the surroundings of the vehicle front surface or at least a part of the surroundings of the vehicle rear surface, and the image capture area of the second camera encompasses at least a part of the surroundings of the second vehicle side surface and at least a part of the surroundings of the vehicle front surface or at least a part of the surroundings of the vehicle rear surface.
Here, the vehicle front surface is to be understood to mean the foremost surface of the vehicle in the direction of travel. In the case of a passenger motor vehicle with a “front nose”, this is typically the front panel with front grille, and in the case of a heavy commercial vehicle with a driver's cab “without a nose”, this is typically the front surface, which includes the windshield, of the driver's cab. Analogously, the vehicle rear surface is to be understood to mean the rearmost surface of the vehicle in the direction of travel. In the case of a passenger motor vehicle of typical “three-box design”, this is the rear panel in the region of the luggage compartment, and in the case of a heavy commercial vehicle, this is for example a rear paneling of the body. In the case of tractor-trailer combinations which together likewise form a vehicle, the vehicle rear surface is then formed by a rear paneling of the trailer or semitrailer body.
Edges of the vehicle are to be understood to mean substantially vertical, edge-like lines of convergence, including rounded lines of convergence, at which said vehicle surfaces converge with one another. The edges accordingly form lines or linear structures on the outer skin or the bodyshell of the vehicle at which the vehicle front surface and the vehicle rear surface transition into the two vehicle side surfaces with a change in direction.
By way of such an arrangement of merely two cameras, it is possible to monitor both the surroundings of the two side surfaces of the vehicle and the surroundings of the vehicle front surface or the surroundings of the vehicle rear surface. This results in a relatively large area of surroundings monitoring of the vehicle using only two cameras.
Furthermore, it is then the case that the first camera and the second camera, which are then arranged horizontally spaced apart from one another, interact as a stereo camera with regard to the surroundings of the vehicle front surface or of the vehicle rear surface, because then, the image capture area of the first camera and the image capture area of the second camera at least partially overlap in the surroundings of the vehicle front surface or in the surroundings of the vehicle rear surface. It is thus possible, by way of image data fusion, to generate a three-dimensional image of an object situated in the monitored surroundings, and/or for the distance of the object from the vehicle to be determined. Such a three-dimensional image is then preferably generated in an image evaluation device and displayed by way of the image display device.
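The distance determination from two horizontally spaced cameras acting as a stereo pair can be illustrated by the classic pinhole stereo relation; the following Python sketch and all of its numeric values (focal length, baseline, disparity) are purely illustrative assumptions, not part of the claimed device.

```python
# Hypothetical sketch: recovering an object's distance from two
# horizontally spaced cameras acting as a stereo pair.

def stereo_distance(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Classic pinhole stereo relation: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("object must appear shifted between the two images")
    return focal_px * baseline_m / disparity_px

# Example: cameras 2.5 m apart at the two front edges of the vehicle,
# assumed focal length 800 px, object shifted 40 px between the images.
distance = stereo_distance(focal_px=800.0, baseline_m=2.5, disparity_px=40.0)
# distance == 50.0 (meters)
```

The larger the horizontal spacing (baseline) between the two edge-mounted cameras, the larger the disparity for a given object distance, which is why the arrangement at opposite vehicle edges is favorable for distance determination.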
Object identification by way of stereo image capture using at least two cameras generally requires less computational outlay, in the two areas of the vehicle front surface and of the vehicle rear surface which are particularly critical with regard to collisions, than object identification using only one camera. The stereo-camera-based object identification is preferably performed in combination with object identification algorithms, wherein each individual camera is assigned an object identification algorithm of said type in the image evaluation device. In this way, the robustness of the surroundings monitoring device is increased, because, in the event of a failure of one camera or of the object identification algorithm thereof, redundancy is provided in the form of the other camera or of the object identification algorithm thereof.
Furthermore, such an arrangement is advantageous under difficult light conditions in which, for example, the image provided by one camera is of poor quality, which can be compensated for by the possibly better quality of the image provided by the other camera.
Furthermore, it is then possible for blending masks, such as are used for the amalgamation (“stitching”) of individual captured images, to be dynamically varied on the basis of the determined distance or the determined location of the object, in order to obtain an optimum view of an object in each case.
Advantageous refinements and improvements of the invention are described and claimed herein.
According to an embodiment of the invention, a first camera arrangement having a first camera and a second camera is provided, which first camera and second camera monitor the surroundings of the two side surfaces and, depending on positioning at the front or rear vehicle edges, additionally monitor the surroundings of the vehicle front surface or, alternatively, the surroundings of the vehicle rear surface.
To realize overall monitoring of the surroundings of the vehicle which is advantageous from numerous aspects, one refinement proposes a second camera arrangement of the image capture device, in the case of which a third camera is arranged at a third edge, which differs from the first and second edges, of the vehicle at which the first vehicle side surface and the vehicle front surface or a vehicle rear surface converge, and in the case of which a fourth camera is arranged at a fourth edge, which differs from the first, second and third edges, of the vehicle at which the second vehicle side surface and the vehicle front surface or the vehicle rear surface converge. The image capture area of the third camera encompasses at least a part of the surroundings of the first vehicle side surface and at least a part of the surroundings of the vehicle front surface, if the at least one part of the surroundings of the vehicle front surface is not encompassed by the image capture area of the first camera, or encompasses at least a part of the surroundings of the vehicle rear surface, if the at least one part of the surroundings of the vehicle rear surface is not encompassed by the image capture area of the first camera. And, the image capture area of the fourth camera encompasses at least a part of the surroundings of the second vehicle side surface and at least a part of the surroundings of the vehicle front surface, if the at least one part of the surroundings of the vehicle front surface is not encompassed by the image capture area of the second camera, or encompasses at least a part of the surroundings of the vehicle rear surface, if the at least one part of the surroundings of the vehicle rear surface is not encompassed by the image capture area of the second camera.
In other words, it is then the case that in each case one camera is provided at all four vehicle edges, the image capture areas of which cameras encompass in each case at least a part of the surroundings of a vehicle side surface and at least a part of the surroundings of the vehicle front surface or of the vehicle rear surface. Thus, all-round monitoring of the vehicle surroundings is possible with only four cameras.
It is particularly preferable for the first camera and the second camera and/or the third camera and the fourth camera to be arranged in each case in the region of a highest point on the respectively associated edge. In other words, said cameras are then arranged at the “upper corners” of the vehicle as viewed in a vertical direction.
It is then possible in particular to capture aerial-view images, that is to say images with a view from above in the vertical direction. Alternatively, panorama perspectives are however also possible.
This is realized for example in that the first image capture area and the second image capture area and/or the third image capture area and the fourth image capture area have in each case a central axis which has a vertical component. Since the image capture areas of cameras normally widen in a funnel shape or cone shape proceeding from the lens, such a central axis of an image capture area is to be understood to mean the central axis of the corresponding funnel or cone. In other words, the central axes of the image capture areas then point downward.
The images of downwardly directed cameras require less transformation outlay in order to generate an aerial perspective, because they are already directed downward, and therefore less perspective adaptation is necessary.
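The projection of a camera pixel into the ground plane by a homographic transformation can be sketched as follows; the 3x3 matrix `H` below is an assumed example, since in practice such a homography would be obtained from camera calibration.

```python
# Illustrative sketch of a homographic ground-plane projection.
# The matrix H is an assumed example; a real H comes from calibration.

def project_to_ground(H, u, v):
    """Map pixel (u, v) through H in homogeneous coordinates, then dehomogenize."""
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return x / w, y / w

H = [[0.01, 0.0, -1.0],
     [0.0, 0.02, -2.0],
     [0.0, 0.0, 1.0]]

gx, gy = project_to_ground(H, 320.0, 240.0)
# pixel (320, 240) maps to the ground point (2.2, 2.8) under this assumed H
```

For a downwardly directed camera the last row of `H` is close to `[0, 0, 1]`, so `w` stays near 1 and the projection is nearly affine, which is the reduced transformation outlay described above.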
In one refinement, an image evaluation device of the surroundings monitoring device, into which image evaluation device the images captured by the cameras are input, is designed such that
a) the images captured by the first camera arrangement and/or by the second camera arrangement and input into the image evaluation device are projected into the ground plane by way of a homographic transformation,
b) based on the images projected into the ground plane, at least one object possibly situated in the surroundings of the vehicle is identified by way of integrated object identification algorithms, and the position of said object relative to the vehicle is determined,
c) the images projected into the ground plane are amalgamated in a single representation, and said representation is generated as an aerial perspective,
d) the aerial perspective is input into the image display device in order to be displayed there.
Said measures, in combination with the arrangement according to the invention of the cameras, make it possible, in particular during the "image stitching", that is to say during the amalgamation of several individual images to form one representation, for the position of the stitching axes to be dynamically varied both in rotation and in translation in order to ensure a better representation of the identified object. More details in this regard will emerge from the following description of an exemplary embodiment.
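The dynamic translation of a stitching axis can be sketched in one dimension as follows; the function name, the seam coordinate, and the margin value are illustrative assumptions, not the claimed implementation.

```python
# Hypothetical sketch: translating a stitching seam out of an identified
# object's footprint so that the object is shown entirely by one camera
# instead of being split across the blend boundary.

def choose_seam_x(default_seam_x: float, obj_left: float, obj_right: float,
                  margin: float = 0.2) -> float:
    """Return a seam position that does not cut through the object."""
    if obj_left - margin <= default_seam_x <= obj_right + margin:
        # Move the seam just past the nearer edge of the object.
        if default_seam_x - obj_left < obj_right - default_seam_x:
            return obj_left - margin
        return obj_right + margin
    return default_seam_x

# An object spanning x = 1.8 .. 2.5 m would straddle a default seam at
# x = 2.0 m, so the seam is translated to x = 1.6 m, left of the object.
seam = choose_seam_x(2.0, 1.8, 2.5)
```

The same idea extends to rotating the seam axis about the vehicle corner, so that the blending mask follows the determined location of the object.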
It is also particularly preferable for a warning device to be provided which interacts with the image evaluation device such that a warning signal is generated if at least one identified object undershoots a predefined minimum distance to the respective vehicle surface or to the vehicle.
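The interaction between the image evaluation device and the warning device can be sketched as follows; the 1.5 m threshold and all names are illustrative assumptions.

```python
# Hypothetical sketch of the warning logic: a warning signal is
# generated if at least one identified object undershoots a predefined
# minimum distance (the 1.5 m threshold is an assumed example value).

MIN_DISTANCE_M = 1.5

def warning_signal(object_distances_m) -> bool:
    """True if any identified object is closer than the minimum distance."""
    return any(d < MIN_DISTANCE_M for d in object_distances_m)

warning_signal([4.2, 1.2, 3.0])   # True: the 1.2 m object is too close
warning_signal([4.2, 3.0])        # False: all objects keep their distance
```

In practice the threshold could differ per vehicle surface, since the critical distances at the front, rear, and sides of the vehicle need not be the same.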
If monitoring gaps arise in the above-described surroundings monitoring with at least two cameras or with four cameras, in particular in the case of long vehicles, then at least one further camera may additionally be arranged on each of the first vehicle side surface and the second vehicle side surface, which further camera captures a surroundings area of the vehicle not captured by the image capture areas of the first camera and of the second camera and/or of the third camera and of the fourth camera.
The invention also relates to a method for operating a surroundings monitoring device of a vehicle, which surroundings monitoring device comprises at least one camera device, one image evaluation device and one image display device, comprising at least the following steps:
a) the camera device, which comprises at least two cameras which are arranged at vehicle edges of the vehicle and whose image capture areas encompass at least a part of the surroundings of a vehicle front surface or of a vehicle rear surface and at least a part of the surroundings of the two vehicle side surfaces, captures images of the surroundings of the vehicle and inputs signals representing said images into the image evaluation device,
b) the images captured by the camera device and input into the image evaluation device are projected into the ground plane by way of a homographic transformation,
c) based on the images projected into the ground plane, at least one object possibly situated in the surroundings of the vehicle is identified by way of integrated object identification algorithms, and the position of said object relative to the vehicle is determined,
d) the images projected into the ground plane are amalgamated in a single representation, and said representation is generated as an aerial perspective,
e) the aerial perspective is input into the image display device in order to be displayed there.
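The method steps a) to e) can be composed as in the following sketch; every helper function here is a trivial stand-in (an assumption) rather than the claimed implementation, serving only to show the order of the steps.

```python
# Minimal sketch of method steps a) to e) with stand-in helpers.

def homographic_projection(image):           # step b): project into ground plane
    return {"plane": "ground", "source": image}

def identify_objects(ground_image):          # step c): object identification
    return ground_image["source"].get("objects", [])

def amalgamate(ground_images):               # step d): single aerial representation
    return {"perspective": "aerial", "parts": ground_images}

def run_monitoring(camera_images, display):  # steps a) and e) frame the pipeline
    ground = [homographic_projection(img) for img in camera_images]
    objects = [obj for g in ground for obj in identify_objects(g)]
    display(amalgamate(ground))              # step e): input into display device
    return objects

# Example: two cameras, one of which sees a pedestrian.
frames = [{"camera": 1, "objects": ["pedestrian"]},
          {"camera": 2, "objects": []}]
shown = []
found = run_monitoring(frames, shown.append)
# found == ["pedestrian"]; shown holds one aerial-perspective representation
```

Note that object identification (step c) operates on the already projected images, so the determined object positions are directly available in ground-plane coordinates relative to the vehicle.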
Other objects, advantages and novel features of the present invention will become apparent from the following detailed description of one or more preferred embodiments when considered in conjunction with the accompanying drawings.