The invention relates to a method and a device for visualizing the surroundings of a vehicle. The device fuses a visual image, which contains the digital data of the surroundings and shows the visually perceptible objects, and an infrared image, which contains digital data of the surroundings and which shows the infrared radiation, emitted by the visually perceptible and/or other objects, into a target image. The target image can be represented in an image display unit, in order to simplify the allocation of the infrared radiation-emitting objects in the recorded environment.
Devices for image fusion superpose the images of at least two cameras, which record the actual surroundings of a vehicle in a number of different spectral ranges. The spectral ranges may include, for example, visually perceptible light and infrared radiation. A target image, derived from image fusion, makes it possible for the driver of a vehicle to interpret the information about the environment of the vehicle more easily and more reliably, wherein the information is made available in an image display unit.
Such a camera system exhibits at least two cameras with largely parallel optical axes, which are offset spatially in relation to each other. Due to the offset mounting of the cameras (i.e., the offset optical axes), the images supplied by the cameras cannot be aligned in relation to each other over a wide range of distances in a manner that is totally faithful to the object. Object fidelity describes the degree to which the radiation, which is reflected and/or emitted by one and the same object in the environment of a moving vehicle, can be clearly allocated in the target image to precisely this object by the driver.
Orientation errors, and thus a degraded object fidelity, occur as a function of the distance between the cameras and as a function of the distance between the cameras and the recorded object. By calibrating the camera system, it is possible to image quite well objects in the close range (this corresponds to a driving situation that is typical for a vehicle traveling in the city and exhibits a distance ranging from approximately 15 to 75 m). However, in the far range, the consequence is then poor object fidelity. The same applies if the camera system is optimized for objects in the far range (this corresponds, for example, to a cross country or freeway trip exhibiting a distance ranging from 30 to 150 m, or 50 to 250 m, respectively). In that case, the result is an orientation error in the close range.
Devices and methods for fusing images are known, for example, from German patent documents DE 102 27 171 A1 and DE 103 04 703 A1 (having U.S. counterpart U.S. Pat. No. 7,199,366 B2, the specification of which is expressly incorporated by reference herein) of the present assignee. The devices described therein include a camera system that has a visual camera, which provides a visual image, containing the digital data of the environment, and an infrared camera, which provides an infrared image, containing the digital data of the environment. The visual image shows the visually perceptible objects. The infrared image shows the infrared radiation, emitted by the visually perceptible and/or other objects. An image processing unit fuses the visual image and the infrared image. The fused target image is displayed in an image display unit. The image fusing process includes a complete or partial superpositioning (for example, pixel-by-pixel or pixel region-by-pixel region) of the visual image and the infrared image. According to this method, in particular simultaneous and locally identical pairs of images are superposed. Therefore, brightness values and/or color values of the pixels or pixel regions can be superposed and/or averaged. For further optimization, the superpositioning of the images can be carried out by using weighting factors. Optionally, the consideration of the brightness and/or the visual conditions of the vehicle is described. Furthermore, the different weighting of the pixels or pixel regions is proposed. The drawback with these prior art methods is the large amount of computing that is involved in displaying target images, which are faithful to the object, over a wide distance range.
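The pixel-by-pixel superposition with weighting factors described above can be illustrated by the following minimal sketch. It assumes two co-registered, simultaneously captured 8-bit grayscale frames of identical shape; the function name and the single weighting factor `alpha` are illustrative assumptions, not taken from the cited patent documents.

```python
import numpy as np

def fuse_images(visual: np.ndarray, infrared: np.ndarray,
                alpha: float = 0.5) -> np.ndarray:
    """Pixel-by-pixel weighted superposition of a visual and an
    infrared image (illustrative sketch only).

    `alpha` weights the visual image, (1 - alpha) the infrared image.
    """
    if visual.shape != infrared.shape:
        raise ValueError("images must be co-registered and of identical shape")
    # Weighted average of the brightness values of locally identical pixels.
    fused = (alpha * visual.astype(np.float64)
             + (1.0 - alpha) * infrared.astype(np.float64))
    return fused.clip(0, 255).astype(np.uint8)
```

In practice the weighting factors may vary per pixel or per pixel region, as proposed in the cited documents; a single global factor is used here only for brevity.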
Therefore, there is needed an improved method and an improved device for visualizing the surroundings of a vehicle, which makes possible a reliable interpretation of the contents of a target image generated from fused images.
The method according to the present invention for visualizing the surroundings of a vehicle, especially in the dark, provides a visual image, containing the digital data of the surroundings. This visual image shows the visually perceptible objects. Furthermore, the method provides an infrared image, which contains the digital data of the surroundings and which shows the infrared radiation, emitted by the visually perceptible and/or other objects. The visual image and the infrared image are fused to form a target image, which can be represented in an image display unit, in order to simplify the allocation of the infrared radiation-emitting objects in the recorded surroundings. The fusion of the visual image and/or the infrared image is interrupted as a function of one environment parameter in order to represent only one of the two images, or none of the images, as the target image.
In order to avoid an orientation error when representing a target image, the invention proposes that the image fusion is interrupted as a function of one environment parameter. Even though the interruption of the image fusion in the event of an environment parameter removes information from the target image that is to be represented, it is easier for the user of a vehicle to interpret the target image represented in the image display unit. Therefore, the user is diverted less from the events taking place on the road in the vicinity of the vehicle.
According to one embodiment, when the fusing of the visual image and/or the infrared image is interrupted, all digital data of the surroundings of the respective image are blanked out in order to avoid blurring and/or double images. Expressed differently, this means that either the visual image or the infrared image, or even both images, are totally, and not just partially, blanked out. The latter means that no target image at all is represented in the image display unit.
The environment parameter is determined, according to another embodiment of the method according to the invention, from at least one driving dynamic variable, which is sensed by use of a sensor, and/or at least one other parameter.
In one embodiment, the speed of the vehicle, in particular the undershooting or overshooting of a predefined speed, is processed as the driving dynamic variable. In another embodiment, the distance between the vehicle and an object, recorded in the viewing angle of the camera, in particular the undershooting or overshooting of a predefined distance, is processed as the driving dynamic variable. Preferably, both parameters are considered in one combination.
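The combined consideration of speed and distance can be sketched as follows. The function name and the threshold values are hypothetical placeholders chosen for illustration; the sketch assumes a camera system calibrated for the far range, so that fusion is interrupted when both a predefined speed and a predefined distance to a recorded leading object are undershot.

```python
def fusion_enabled(speed_kmh: float, object_distance_m: float,
                   min_speed_kmh: float = 60.0,
                   min_distance_m: float = 75.0) -> bool:
    """Decide whether image fusion remains active (illustrative sketch).

    Assumes far-range calibration: undershooting both the speed and
    the distance threshold indicates a close-range situation in which
    offset optical axes would cause orientation errors, so fusion is
    interrupted. Threshold values are hypothetical.
    """
    close_range_situation = (speed_kmh < min_speed_kmh
                             and object_distance_m < min_distance_m)
    return not close_range_situation
```

Either parameter could also be evaluated alone; the conjunction shown here reflects the preferred combination of both parameters mentioned above.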
Another embodiment provides that the current position of the vehicle is processed as the driving dynamic variable. In yet another embodiment, the topology and/or the current weather conditions are processed as the additional parameters. For example, when driving through a tunnel, the image fusion can be deactivated as a function of the topology and/or the momentary position of the vehicle, which can be derived, for example, from the GPS data that are made available to a navigation system. In such a driving situation, an infrared camera is hardly in a position to present in a meaningful way the information that is relevant to the user of the vehicle. Therefore, a superpositioning of the infrared image with the visual image would not result in the information, supplied to the user of the vehicle, being enhanced, so that it is advantageous, for example, to block off the infrared image from the target image. The same applies also in the event of poor weather conditions, such as rain, during which the infrared camera cannot provide an adequately good resolution of the environment.
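The topology- and weather-dependent decision described above can be reduced to a simple predicate. The inputs (tunnel detection from GPS/navigation data, a rain indicator) and the function name are illustrative assumptions only.

```python
def infrared_contributes(in_tunnel: bool, raining: bool) -> bool:
    """Illustrative sketch: block the infrared image from the target
    image when the topology (e.g., a tunnel, derived from GPS data of
    a navigation system) or the weather (e.g., rain) prevents the
    infrared camera from resolving the environment adequately."""
    return not (in_tunnel or raining)
```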
Another embodiment provides that a parameter that can be chosen by the user of the vehicle is processed, as another parameter, by the image processing unit. In many situations it may be desirable for the user of the vehicle to change the representation of a target image, generated from an image fusion, and to represent selectively just the infrared image or the visual image. Therefore, the other parameter would correspond to a deactivation and/or activation of the image fusion that the user of the vehicle has actively initiated.
Furthermore, another embodiment can provide that the entropy values of the visual image and of the infrared image are determined; the entropy values are compared with a predefined entropy value; and, based on the results of the comparison, it is decided whether the visual image, the infrared image, or both will be blanked out. The entropy of an image of the environment contains information about the significance of one of the images. If, for example, an image is in saturation (i.e., the image is overexposed), no information is delivered that the user of the vehicle can evaluate in any meaningful way. Upon detecting such a situation, the image that exhibits, for example, too low a contrast can be blanked out so that the situation can be interpreted more easily.
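The entropy comparison described above can be sketched with the Shannon entropy of an 8-bit histogram. A saturated or low-contrast image concentrates its histogram in few bins and therefore yields a low entropy. The function names and the predefined entropy value are illustrative assumptions.

```python
import numpy as np

def image_entropy(img: np.ndarray) -> float:
    """Shannon entropy (in bits) of an 8-bit grayscale image's histogram."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins; 0 * log(0) is taken as 0
    return float(-(p * np.log2(p)).sum())

def images_to_keep(visual: np.ndarray, infrared: np.ndarray,
                   min_entropy: float = 1.0) -> tuple:
    """Illustrative sketch: return (keep_visual, keep_infrared).
    An image whose entropy undershoots the predefined value (e.g.,
    because it is saturated) is blanked out of the target image."""
    return (image_entropy(visual) >= min_entropy,
            image_entropy(infrared) >= min_entropy)
```

A completely saturated frame has entropy 0 and would thus be blanked out, while a textured frame comfortably exceeds the threshold.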
The decision whether the image fusion of the visual image and/or the infrared image shall take place or be suppressed can be made as a function of the occurrence of one or more arbitrary aforementioned environment parameters. The decision to stop the image fusion can be made as a function, in particular, of the simultaneous occurrence of several parameters. It is also contemplated to interrupt the image fusion in the event of environment parameters that occur chronologically in succession.
In order to prevent the target image, represented in the image display unit, from alternating at short intervals between a fused and a non-fused representation (a feature that could perhaps confuse the user of the vehicle), another advantageous embodiment provides that the image fusion ceases in consideration of a time hysteresis.
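The time hysteresis mentioned above can be sketched as a small state holder in which a requested change between fused and non-fused representation only takes effect after it has persisted for a hold time. The class name and the hold time are illustrative assumptions; timestamps are passed in explicitly to keep the sketch deterministic.

```python
class FusionHysteresis:
    """Illustrative sketch: suppress rapid alternation between a fused
    and a non-fused target image by applying a time hysteresis."""

    def __init__(self, hold_time_s: float = 2.0, fused: bool = True):
        self.fused = fused          # currently displayed state
        self.hold_time_s = hold_time_s
        self._pending = None        # requested state awaiting the hold time
        self._pending_since = 0.0

    def update(self, requested_fused: bool, now_s: float) -> bool:
        """Feed the currently requested state; return the displayed state."""
        if requested_fused == self.fused:
            self._pending = None            # request matches display: reset
        elif self._pending != requested_fused:
            self._pending = requested_fused # new request: start the timer
            self._pending_since = now_s
        elif now_s - self._pending_since >= self.hold_time_s:
            self.fused = requested_fused    # request persisted: switch
            self._pending = None
        return self.fused
```

With a hold time of 2 s, a request to stop fusion at t = 0 s does not change the display at t = 1 s, but does at t = 2.5 s.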
The device according to the invention has the same advantages as described above in conjunction with the method according to the invention.
A device according to the invention for visualizing the surroundings of a vehicle, in particular in the dark, includes a camera system, which contains a visual camera, which provides a visual image, containing digital data of the surroundings, and an infrared camera, which provides an infrared image, containing the digital data of the surroundings. The visual image shows the visually perceptible objects; and the infrared image shows the infrared radiation, emitted by the visually perceptible and/or other objects. Furthermore, an image processing unit for processing the visual image and the infrared image is provided. The image processing unit is designed and/or equipped to fuse the visual image and the infrared image. An image display unit serves to display the image of the environment that is generated by the image processing unit. According to the invention, the image processing unit is designed to stop the image fusion as a function of an environment parameter.
One particular embodiment of the invention provides at least one sensor, coupled to the image processing unit, for determining a driving dynamic variable.
According to another embodiment, the image processing unit can be fed another parameter, which is determined either by use of a sensor or is supplied by an external means. The additional parameter could be transmitted to the image processing unit using, for example, mobile radio technology or via GPS.
Expediently, the image processing unit is designed to determine the environment parameter from the driving dynamic variable and/or an additional parameter.
In another embodiment, the camera system is calibrated with respect to a fixed distance range. The fixed distance range can be either the close range or the far range. Close range is defined here as a situation that corresponds to urban driving, where distances ranging from 15 to 75 m are significant. Far range is defined in this application as a driving situation that is typical for a vehicle traveling on rural roads, in particular in a distance range from approximately 30 to 150 m, or for a driving situation that is typical for a vehicle traveling on a freeway and that covers, in particular, a distance range from approximately 50 to 250 m. In principle, the camera system can be calibrated with respect to any distance. If the camera system is calibrated for the far range, the results are orientation errors in the close range owing to the offset optical axes of the cameras, a state that may irritate the driver. A significant parameter that has an impact on the image fusion is the distance between the vehicle and a leading object, which is located in the close range of the vehicle, and/or the undershooting of a fixed speed. The camera system may be calibrated in an analogous manner for the close range, so that imaging errors in the far range of the vehicle result in an analogous manner. Overshooting a fixed distance and/or a fixed speed could then result in an interruption in the image fusion.
Other objects, advantages and novel features of the present invention will become apparent from the following detailed description of the invention when considered in conjunction with the accompanying drawings.