The invention relates to a method and a device for representing objects of varying visibility surrounding a vehicle on the display of a display device.
From the state of the art, night vision systems are known in which a picture of the surroundings of the vehicle is taken by a remote infrared camera and displayed on a display in the vehicle interior. As a result, obstacles or objects become visible which, in darkness, cannot be seen by the human eye. Object recognition systems are also known which carry out an automatic object recognition based on surroundings detected in the infrared spectrum. Thus, objects can also be automatically recognized which radiate or reflect little or no light in the visible spectrum but have a temperature that differs from that of their surroundings. It is also known that the automatic recognition of an object of a certain class in the surroundings of the vehicle can be output as a driver warning, for example, as a pictogram, on a display in the interior of the vehicle. When, by use of a night vision system, for example, a pedestrian is recognized, a symbolic warning pictogram corresponding to a pedestrian will appear on the display. The obstacles recognized by the night vision system may also be highlighted directly in the picture of the night vision system on the display, for example, by way of a border or coloration of the obstacle.
German Patent document DE 10 2005 020 772 A1 describes a display system for representing the surroundings of a motor vehicle, the vehicle surroundings being detected by a remote infrared camera. In this case, detected objects that are classified as dangerous to the vehicle are represented in a highlighted manner in a head-up display of the motor vehicle.
In European Patent document EP 1 647 807 A1, a driver assistance system is described in which, for driving assistance, a virtual vehicle driving ahead is shown to the driver on a display, such as a head-up display, and, as a function of the detected driving situation, the driver is given driving instructions on the display.
German Patent document DE 10 2005 062 151 A1 describes a method and a device for assisting a vehicle driver when passing through narrow sections of the driving route. By use of at least one camera, image data are acquired of the traffic surroundings of the vehicle and, based on the dimensions of the vehicle, a representation of the future positions of the vehicle is generated, on which the image data of the camera are superimposed. The driver can recognize from the representation whether his vehicle can pass through a narrowing of the driving route situated in front of his vehicle.
From the state of the art, various systems for the recognition of road marks by use of devices installed in a vehicle are also known.
Furthermore, object recognition systems for recognizing obstacles on the road are known from the state of the art. In particular, a fusion of a 3D TOF sensor (TOF=Time of Flight) and a monocular camera is known from the "MIDIAS" public research project. By means of the TOF sensor, objects are measured based on the transit times of the light that is emitted, reflected by the objects and subsequently received. The distances from the objects are thereby determined for different solid angles, and a 3D map of the scene is created therefrom. In the case of the above-mentioned sensor fusion, the sensor data are acquired by the TOF sensor and by the monocular camera over the same solid angles. The advantages of both sensor types can thus be combined. In particular, the high angular resolution of a monocular camera, which by itself supplies only minimal information concerning object distance, is combined with the depth information determined by the 3D sensor for the same objects, the 3D sensor having a comparatively low angular resolution of its own.
FIG. 1 is a schematic view of the construction of a camera system based on sensor fusion. An object O to be recognized, which, for example, is represented as a person, is detected here by a monocular camera 101 as well as by a corresponding TOF sensor 102. In this case, the TOF sensor 102 contains corresponding means 103 for emitting infrared radiation as well as means for receiving the infrared radiation reflected at corresponding objects. The data of the camera and the sensor will then be merged, i.e., “fused”, and, for example, a map of the surroundings or a list of the recognized objects will be determined on this basis.
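The fusion principle described above can be sketched in simplified form: each high-resolution camera pixel is annotated with the depth of the coarse TOF cell covering the same viewing direction. The function name, grid layout and nearest-cell upsampling are illustrative assumptions, not details taken from the MIDIAS project.

```python
def fuse_depth_with_image(depth_map, image):
    """Combine a coarse TOF depth map with a fine monocular image taken
    over the same solid angles.

    depth_map: 2D list of distances in metres (low resolution).
    image:     2D list of pixel intensities (high resolution).
    Returns a grid at image resolution of (intensity, depth) pairs,
    using nearest-cell upsampling of the depth map.
    """
    dh, dw = len(depth_map), len(depth_map[0])
    ih, iw = len(image), len(image[0])
    fused = []
    for r in range(ih):
        row = []
        for c in range(iw):
            dr = r * dh // ih  # coarse depth cell covering this viewing direction
            dc = c * dw // iw
            row.append((image[r][c], depth_map[dr][dc]))
        fused.append(row)
    return fused
```

A map of the surroundings or a list of recognized objects could then be derived from such a fused grid, combining the camera's angular resolution with the TOF sensor's depth information.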
Known object recognition systems often generate a large number of warnings and other information to be observed by the user. Camera images that show large parts of the surroundings may inundate the driver with information. In this case, it is difficult for an occupant of the vehicle to process the sometimes very large quantity of information and particularly to relate the instructions and warnings to the actual or real surroundings. Furthermore, known systems often represent a very large number of recognized objects, whereby the perception of the occupant, and particularly of the driver, of the vehicle is overloaded.
When detecting the surroundings by using a night vision system, for example, one based on a thermal camera in the wavelength range of 8-10 μm, the surroundings, including any obstacles, for example, pedestrians and animals, are represented on a display in the vehicle interior. In particular, warm objects (pedestrians, animals, certain parts of other vehicles) become very clearly visible. It is nevertheless usually very difficult for the driver to correlate the objects represented on the display with the real surroundings seen through the windshield. In particular, it is very difficult to recognize the spatial relationship between objects visible to the driver and objects shown on the display which were recognized by the night vision system but cannot be seen by the driver's visual system. In addition, the image detected by a night vision system covers only a certain aperture angle, which does not always correspond to the variable human field of perception. It is therefore difficult for the driver to maintain an overview as to which area of the road is currently visible on the display in the vehicle interior, and thereby to correlate the recognized obstacles to the real surroundings. FIG. 2 illustrates the problems just discussed. This figure represents a top view as well as a lateral view of a vehicle 1, the aperture angle of a remote infrared camera of a night vision system in the forward region of the vehicle 1 being represented by a corresponding hatched region B. It is shown that the aperture angle differs clearly from the driver's field of vision. However, the installation of a remote infrared camera inside the vehicle occupant compartment is not possible because the windshield of the vehicle attenuates the thermal radiation.
As explained above, a representation of the entire surroundings of the vehicle as a video image or as a large number of symbols, for example, on the head-up display, also requires high technical expenditures and is disadvantageous for human perception. Thus a conceivable representation of all objects recognizable by several types of sensors by use of one or more display devices would result in a representation that overloads human perception and is difficult to interpret.
It is therefore an object of the invention to create a representation of the surroundings of a vehicle for a user from the perspective of an occupant of the vehicle on a display device by which the user can better detect the objects in the surroundings of the vehicle.
This and other objects are achieved by a method and device for representing for a user objects of varying visibility surrounding a vehicle from the perspective of an occupant, particularly the driver of the vehicle, on the display of a display device. The surroundings of the vehicle are at least partially automatically recognized by one or more object recognition devices. For objects recognized by the object recognition device or devices, it is determined, based on one or more criteria, whether the respective object is a first object that was classified as visible to the occupant, or a second object that was classified as invisible to the occupant. For a number of recognized objects including at least one first object and at least one second object, the respective positions of the objects are determined for the display such that the geometrical relationships between the number of objects correspond essentially to the real geometrical relationships from the perspective of the occupant of the vehicle. The number of objects are represented in the determined positions on the display.
The method according to the invention is used for representing objects of varying visibility surrounding a vehicle for a user from the perspective of an occupant, particularly the driver of the vehicle, on the display of a display device, wherein the surroundings of the vehicle are at least partially automatically recognized by use of one or more object recognition devices.
The term “vehicle” can apply particularly to a motor vehicle, an aircraft, a watercraft or an amphibious vehicle. In addition, a vehicle in the sense of the invention may also be an autonomous utility or roving vehicle and/or a mobile robot. In particular, the vehicle can carry out at least partially automatic maneuvers, for example, park-in maneuvers.
In a preferred variant, the user of the vehicle corresponds to the occupant of the vehicle itself. In this variant, the term "user" used in the following can be equated with the occupant of the vehicle, the occupant being especially the driver of the vehicle. However, if required, there is also the possibility that the user of the vehicle is a person who controls the vehicle from the outside, for example, in a wireless manner, and/or receives information from the vehicle in a wireless manner. In this sense, the user of the vehicle may, for example, be a second driver of the vehicle who monitors the implementation of a maneuver. In this case, if required, there may be no occupant at all in the vehicle. Correspondingly, the perspective of the occupant will then be the perspective of a virtual occupant in a position in the vehicle interior. If the user is not identical to the occupant of the vehicle, the user, as required, can select the position of the occupant from a number of positions. The assumed position of the virtual occupant may be a position that is selected or defined by the user and which is within the geometrical boundaries of the vehicle, particularly within its interior.
According to the invention, the method determines for objects which were recognized by the object recognition device or devices, based on one or more criteria, whether the respective object is a first object that was classified to be visible to the occupant, or a second object that was classified to be invisible to the occupant. When the user is not the occupant, the visibility classification takes place based on a virtual occupant assumed to be in the corresponding occupant position. For a number of recognized objects comprising at least one first object and at least one second object, the respective positions of the objects are determined for the display, wherein, for the respective positions, the geometrical relationships between the number of objects correspond essentially to the real geometrical relationships from the perspective of the occupant of the vehicle. In particular, the relative geometrical relationships between at least a total of three represented objects correspond essentially to the real geometrical relationships between the corresponding objects in the surroundings of the vehicle from the occupant's perspective.
In this case, for example, the distance relationships between the positions of the objects on the representation can correspond to the distance relationships between the corresponding objects on the road, as the occupant would see these from his perspective, especially his eye position. The number of objects will then be represented on the display in the determined positions.
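The determination of display positions that preserve the occupant's geometrical relationships can be sketched, for example, as a simple pinhole projection from the occupant's eye position. The function name, coordinate convention and projection model are illustrative assumptions; the invention does not prescribe a particular transformation.

```python
def to_display_positions(objects_vehicle_frame, eye_pos, focal=1.0):
    """Project object positions (vehicle frame, metres) onto a 2D display
    plane as seen from the occupant's eye position, so that the angular
    relationships between the objects are preserved.

    Coordinate convention (assumed): x forward, y left, z up.
    """
    display_positions = []
    for (x, y, z) in objects_vehicle_frame:
        # Shift the object into the occupant's eye coordinate system.
        dx = x - eye_pos[0]
        dy = y - eye_pos[1]
        dz = z - eye_pos[2]
        if dx <= 0:
            continue  # object behind the occupant: not displayable
        # Pinhole projection: display offsets proportional to the
        # tangents of the horizontal and vertical viewing angles.
        u = focal * dy / dx
        v = focal * dz / dx
        display_positions.append((u, v))
    return display_positions
```

Two objects at equal distance and symmetric lateral offsets thus receive symmetric display positions, matching the distance relationships the occupant would see from the eye position.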
An object recognition device in the sense of the invention may especially be a device that is known per se and permits at least the recognition of characteristics, particularly of edges and/or contours, of an object and/or their mutual arrangement. A classification of the objects, for example, into a class of objects, is not absolutely necessary.
According to the invention, by the representation of at least one first visible object and one second invisible object, the user of the vehicle learns the relative position of these objects in correspondence with the real geometrical circumstances for the occupant's perspective, so that the user of the vehicle obtains a clear idea of the actual position of the invisible objects relative to the visible objects.
The invention therefore provides a determination of such a respective representation position of the displayed first and second objects respectively, which essentially corresponds to the relationship of these positions from the position of an occupant, particularly from his viewing angle. Thus, objects which were detected from a different perspective, for example, from an installation position of the object recognition device in the vehicle, are represented on the display of the display device in the geometrical relationships that are correct for the occupant's position.
In a preferred embodiment, the positions of the first and second objects respectively are first determined relative to the vehicle, and are subsequently transformed to the representation on the display of the display device by a geometrical transformation. In this case, geometrical relationships are especially the proportions of distances between the displayed first and second objects respectively and/or the angular relationships between these objects. The detection and recognition of corresponding first and second objects respectively takes place, for example, by use of different object recognition devices provided at the vehicle. These object recognition devices may, for example, be implemented as a near-infrared or a remote infrared camera, for example, a thermal imaging camera, or as devices that are based on a distance measurement to the different parts of the surroundings. Likewise, the object recognition devices may also be cameras for the visible spectral components of light.
According to the invention, the term "object" also applies to a part of the surroundings, for example, a part of the road having a road marking or road boundary, or a part of a building. Invisible objects are especially those objects which, for example, because of their low reflectance in the spectrum visible to humans, because of the light and/or vision conditions for the occupants of the vehicle or, for example, because of the coloration of the object, are not visible or most probably will not be visible. In particular, invisible objects are those objects which, despite a viewing direction that would allow their perception per se, will most probably be overlooked.
In a particularly preferred embodiment of the invention, the classification of objects as first visible objects and second invisible objects takes place on the basis of a visibility proportion. In this case, based on the criterion or criteria, an object is classified as a first object if its visibility proportion for the occupant of the vehicle exceeds a first threshold. An object is classified as a second object if its visibility proportion for the occupant of the vehicle falls below a second threshold. The second threshold corresponds to the first threshold or is lower than the first threshold. As a result of a correspondingly spaced selection of the first and second thresholds, in particular, the advantage is achieved that the generated display still offers good orientation to the occupant but is not cluttered with unnecessarily displayed objects or parts of the surroundings.
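The two-threshold classification described above can be sketched as follows. The function name and the threshold values are purely illustrative; the text only requires that the second threshold be equal to or lower than the first.

```python
def classify_object(visibility_proportion, first_threshold=0.7, second_threshold=0.4):
    """Classify an object by its visibility proportion for the occupant.

    Above the first threshold: first object (classified visible).
    Below the second threshold: second object (classified invisible).
    The gap between the two (spaced) thresholds leaves borderline
    objects unclassified, so the display is not cluttered.
    """
    if visibility_proportion > first_threshold:
        return "first"   # classified as visible to the occupant
    if visibility_proportion < second_threshold:
        return "second"  # classified as invisible to the occupant
    return None          # borderline: not selected for display
```

With spaced thresholds, an object whose visibility proportion lies between them is neither highlighted as invisible nor used as a visible reference, which implements the anti-clutter advantage noted above.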
The visibility proportion of an object, in particular, can be determined as a function of the spectral characteristics of the light emitted by it. Thus, for example, objects having an unambiguously recognizable radiation in the remote infrared spectrum may not have sufficient radiation in the visible spectrum and vice-versa.
The classification of recognized objects into visible and invisible objects may especially take place based on the determination of a visibility proportion, as disclosed in BMW's earlier German patent application 10 2008 051 593.0, the entire content of which is hereby incorporated by reference. The principles of the method described in this earlier patent application may also be used in the invention described here for identifying a visible and invisible object respectively. However, in contrast to the earlier patent application, in the invention described here, the visibility of the recognized objects is determined from the position of the occupant, particularly of the driver of the vehicle. In contrast, in the earlier patent application, the visibility of the vehicle itself is determined from different solid angles or observation positions.
As mentioned above, for the differentiation between "visible" and "invisible", a suitable threshold is particularly defined for the determined visibility proportion, in which case, when the threshold is exceeded, the object is classified as visible to the person, and when the visibility proportion falls below the threshold, the object is classified as invisible to the person. The visibility proportion can be expressed, for example, in the form of the probability with which the recognized object is seen or overlooked, particularly by a statistical human or animal visual system. Analogous to the above-mentioned earlier patent application, for determining the visibility proportion, a first light distribution comprising a brightness distribution and/or spectral distribution of luminous surfaces of the respective recognized objects and, if applicable, of the surroundings of the detected objects, can be determined. This brightness distribution and/or spectral distribution is determined especially by use of corresponding sensing devices in the vehicle, in which case these sensing devices may also be part of the object recognition device or devices used according to the invention. For defining a suitable visibility proportion for the position of the occupant or driver of the vehicle, the first light distribution is preferably transformed into the perspective of the position of the occupant or driver of the vehicle, whereby a second light distribution is obtained. Based on this second light distribution, a visibility proportion can then be defined for the respective object.
The term "luminous surface" may be applied to a luminescent surface as well as to a reflective, refractive or fluorescent surface. A luminous surface may be a surface that emits light itself, as well as a surface that reflects or refracts light, such as the windshield of a vehicle. A luminous surface may also be a vehicle light. The classification into luminous surfaces may, for example, be made according to geometrical aspects. Thus, for example, objects which are situated close together, or their parts, may form one luminous surface, while an object arranged farther away, or a part thereof, may be classified as another luminous surface. In this case, it is particularly advantageous to consider a source quantity of the first light distribution with similar characteristics a luminous surface and/or to process it as such. The luminous surfaces may also be very small, virtually infinitesimal surfaces. The luminous surfaces may have arbitrary shapes; in particular, they may also be curved.
By way of the first light distribution, a spatial distribution of the luminous surfaces is therefore described. The light distribution may be determined and/or processed as an angle, spatial angle or angular relationship between the luminous surfaces.
The above-mentioned visibility proportion represents especially the perceptibility of the object, its outer boundaries, and/or the spatial alignment of the object for the person's visual system. Furthermore, the visibility proportion can also take into account the distinguishability of the respective object from different or additional objects. In particular, an object is assumed to be visible when a sufficiently large fraction of its parts, which may also be objects, especially of edges and/or structures, are visible, and/or when parts of the object are visible at its spatial boundaries. An object will be assumed to be invisible in its entirety, particularly when these criteria have not been met.
The determined visibility proportion is a function of variables, especially of current contrast relationships as well as of the relative movement of the respective object with respect to customary light sources or other objects. Thus, for example, the probability that an object is seen by the occupant generally increases when—viewed from the occupant's position—the object is moving relative to the background.
In a preferred variant of the method for determining the visibility, the determined first light distribution is present in such a form that it contains an angular dependence of the parameters of the light distribution. This can be described, for example, in the form of a function of the dependence of the individual parameters of the light distribution on the direction and, if applicable, also on the distance, which function is described by supporting points. The first light distribution is advantageously present in a vector-based format, a vector in this format indicating the direction in which the corresponding luminous surface is situated and, as an attribute, containing the pertaining light parameters and/or the corresponding radiation characteristics of the respective luminous surface.
The transformation of the first light distribution carried out for determining the visibility proportion is preferably carried out based on a transformation of coordinates, which transforms the position and/or the ray angles of the luminous surfaces into the perspective of the occupant of the vehicle. This transformation of the first light distribution can be carried out, for example, based on a simulation of the light propagation in the system from the respective recognized object and at least parts of the surroundings of the object. In a computer-assisted manner, a model of the luminous surfaces with their radiation characteristics is spatially established from the determined first light distribution, and this model can then, again in a computer-assisted manner, be transformed by use of known transformations into the perspective of the occupant of the vehicle.
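The coordinate transformation of the first light distribution into the occupant's perspective can be sketched, for a pure translation between the sensing position and the eye position, as follows. The vector-based tuple layout and the omission of rotation handling are illustrative simplifications of the transformation described above.

```python
import math

def transform_to_occupant(luminous_surfaces, sensor_pos, eye_pos):
    """Re-express luminous surfaces, measured in the sensor's frame, from
    the occupant's eye position (second light distribution).

    Each surface is a tuple (unit_direction_xyz, distance_m, luminance);
    this layout and the pure-translation model are assumptions for
    illustration, not prescribed by the text.
    """
    offset = tuple(s - e for s, e in zip(sensor_pos, eye_pos))
    transformed = []
    for direction, dist, luminance in luminous_surfaces:
        # Reconstruct the surface point, then view it from the eye position.
        point = tuple(d * dist + o for d, o in zip(direction, offset))
        new_dist = math.sqrt(sum(c * c for c in point))
        new_dir = tuple(c / new_dist for c in point)
        transformed.append((new_dir, new_dist, luminance))
    return transformed
```

The light parameters (here just a scalar luminance) are carried along as attributes of each direction vector, in line with the vector-based format described above.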
In a preferred embodiment of the method for determining the visibility proportion, one or more contrast relationships between luminous surfaces, particularly between the respective recognized object and the surroundings of the recognized object, are determined within the second light distribution. The visibility proportion is a function of the contrast relationship or relationships. In this case, the contrast relationship is a measure of the difference between the brightness and/or spectral distributions of different luminous surfaces and/or within a luminous surface. The difference can, for example, also be expressed in the form of a gradient.
As required, the contrast relationship may also be determined by way of a function which describes the respective spatial courses of the brightness or spectral distribution of the individual luminous surfaces, depending on the solid angle from the position of the occupant of the vehicle or the distance from that position. The contrast relationship may therefore also represent the difference between the mean brightnesses or mean spectral distributions of two luminous surfaces, the mean being taken over the extent of the respective luminous surface. In particular, the contrast relationship determined is that between the respective recognized object and the surroundings of the object. The contrast relationship can, in particular, be determined between the luminous surfaces of the respective object, which characterize the geometrical borders or the dimensions of the object, and the luminous surfaces of the surroundings, particularly the luminous surfaces not hidden by the respective object. The determination of the contrast relationship between the luminous surfaces which, with respect to the position of the occupant of the vehicle, form the geometrical borders of the respective object, and the luminous surfaces from the surroundings which are visible in an essentially adjacent solid angle with respect to the geometrical borders of the recognized object, is particularly advantageous. The visibility proportion can, in addition, also be determined from local contrast relationships of partial areas within the second light distribution.
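One possible sketch of a contrast relationship between an object's border surfaces and the adjacent, non-hidden surroundings is a Weber-like luminance ratio. The text above leaves the exact measure open, so this formula is an illustrative choice, not the prescribed one.

```python
def contrast_relationship(object_luminance, surround_luminance):
    """Weber-like contrast between the mean luminance of the luminous
    surfaces at an object's geometrical border and the mean luminance of
    the adjacent surroundings (both averaged over the surface extent).

    Returns a non-negative ratio; higher values suggest the object's
    borders are easier for the visual system to distinguish.
    """
    if surround_luminance == 0:
        return float("inf")  # object against a completely dark background
    return abs(object_luminance - surround_luminance) / surround_luminance
```

A visibility proportion could then be formed as a function of one or several such local contrast values, for example, their maximum over the object's border.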
According to the invention, the differentiation between the first visible and second invisible objects preferably takes place by use of the object recognition device or devices which, in the method according to the invention, are used for the detection or recognition of objects in the surroundings of the vehicle. The invention can therefore be implemented in a particularly cost-effective manner by using sensors or computing resources that are present anyhow. For example, the automatic differentiation between the first and second objects and/or the selection of the first and second objects to be represented together with further methods, for example, for the recognition of objects, can be implemented in the computing unit of an object recognition system, particularly by the characteristics of objects already present in this system and determined within the scope of the object recognition method.
In a preferred embodiment of the method according to the invention, the differentiation between the first visible and the second invisible objects takes place by use of at least two types of sensors and/or object recognition devices. For example, the surroundings of the vehicle can be detected by use of a near- or remote-infrared image sensor for the detection of objects which may not be visible to the occupant, for example, at night. The objects which may be visible to the occupant can be detected by use of an image sensor in the visible spectrum or a corresponding camera. For objects which are detected by such an image sensor, a check of the visibility proportion can take place according to the above-mentioned principles. By an automatic comparison of objects that were detected by way of at least two types of sensors, it can be determined which objects situated in the surroundings of the vehicle are visible or invisible to the driver under the given light and vision conditions. Advantageously, laser scanners, laser distance sensors or TOF sensors can be used as one sensor type. In this case, it is particularly advantageous that these sensors do not depend on the radiation or reflection of the surrounding light. Two types of sensors may, for example, also be used together in a sensor housing or on a semiconductor crystal, for example, as pixels with a different spectral sensitivity, for example, for the visible spectrum and the remote infrared spectrum.
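The automatic comparison of detections from two sensor types can be sketched as follows: objects found by the infrared sensor but not confirmed by the visible-spectrum camera are treated as second (invisible) objects. The planar positions, Euclidean matching and radius are illustrative assumptions.

```python
import math

def classify_by_sensor_comparison(infrared_detections, visible_detections,
                                  match_radius=1.0):
    """Compare detections from two sensor types to classify visibility.

    infrared_detections / visible_detections: lists of (x, y) positions
    in the road plane (an assumed representation). An infrared detection
    matched by a visible-spectrum detection within match_radius metres is
    classified as a first (visible) object, otherwise as a second
    (invisible) object.
    """
    first, second = [], []
    for obj in infrared_detections:
        confirmed = any(math.dist(obj, v) <= match_radius
                        for v in visible_detections)
        (first if confirmed else second).append(obj)
    return first, second
```

A pedestrian clearly radiating in the thermal image but absent from the visible-spectrum image would thus land in the second list and be brought to the user's attention.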
In a further embodiment of the method according to the invention, in which the user corresponds to the occupant, the viewing direction of the occupant of the vehicle is detected, in which case, for the definition of the criterion or criteria, those objects are classified as second objects which, on the basis of the detected viewing direction, particularly the history of the viewing direction, will most probably be overlooked by the occupant. The occupant's viewing direction can be detected by arbitrary eye tracking methods which are known from the state of the art and which track the course of the person's eye movement. Conclusions can be drawn therefrom concerning the objects which are more or less easily visible to the driver.
In addition to the analysis of the basic viewing direction, the invention also takes into account so-called saccadic eye movements, which are well known to a person skilled in the art. These usually very small and quick eye movements, which take place within one viewing direction, play a role in the actual visibility or invisibility of objects to the person. Taking into account the viewing direction and the saccadic eye movements also includes taking into account the history of these parameters. It can thereby be determined, for example, that an object is a second object if the occupant has looked at this object immediately after observing another section in space with a clearly different illumination and/or distance, and the time required for an adaptation or accommodation of the human visual system has not yet elapsed.
In a further development of the method according to the invention, the position of the display of the display device, which can be perceived by the user of the vehicle, is situated outside the vehicle interior. In this case, the representation can be designed, for example, by means of a head-up display or a further development of the latter. As an alternative, the display can take place, for example, by means of a projection, particularly a holographic projection, inside or outside the vehicle interior, or by means of a 3D display.
In a further variant of the method according to the invention, the first and second objects represented on the display can be displayed at least partly as symbols. In particular, the at least partly symbolic representation can be implemented such that different parts of the image detected by a camera device are subjected to different image processing steps. Thus, particularly those parts of the image that contain visible and/or invisible objects or their parts, for example, edges or marginal areas, can be subjected to different image processing operators. A symbolic representation can also be generated by a faded-in graphic or by a graphic faded over the image and/or by a corresponding symbol.
In an additional further development of the method according to the invention, road boundaries and/or road markings recognized by at least one object recognition device, up to a predefined distance from the vehicle, are classified as first objects. Advantageously, the road boundaries and/or road markings at a defined distance from the vehicle, depending on the prevailing conditions (day, night, twilight, rain, fog), can be assumed to be visible and can be displayed in the display according to the invention. In this manner, a classification of the road boundaries and road markings respectively as visible or invisible objects is carried out particularly easily.
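This distance-based classification of road markings can be sketched as follows. The condition-dependent range values and the data layout are assumed for illustration only; suitable distances would be determined for the particular vehicle and sensing system.

```python
# Assumed, purely illustrative visibility ranges in metres per prevailing condition.
VISIBILITY_RANGE_M = {"day": 120.0, "twilight": 60.0, "night": 35.0,
                      "rain": 45.0, "fog": 20.0}

def classify_road_markings(markings, condition):
    """Classify recognized road markings/boundaries by distance.

    markings: list of (marking_id, distance_m) pairs. Markings up to the
    condition-dependent distance are classified as first (visible)
    objects, markings beyond it as second (invisible) objects.
    """
    limit = VISIBILITY_RANGE_M[condition]
    first = [m for m in markings if m[1] <= limit]
    second = [m for m in markings if m[1] > limit]
    return first, second
```

The nearby visible markings then serve as orientation anchors in the display, while distant ones need not be rendered.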
In a further embodiment of the method according to the invention, the classification of the objects recognized as first or second objects by the object recognition device or devices takes place as a function of light and/or visibility relationships detected by at least one detection device and/or their history, particularly while taking into account psycho-visual properties of the human vision system. When detecting the light and/or visibility relationships, in particular psycho-visual contrast relationships are taken into account. Psycho-visual contrast relationships and their influencing variables per se are known to a person skilled in the art. A psycho-visual contrast relationship is based on the known specific properties of the person's vision system, that is, on the ability of this system to perceive differences between differently shining parts in space, coloration courses, intensity courses, textures as well as edges of an object, among other things, by means of the spatial and time-related gradients of different light parameters. The psycho-visual contrast relationships preferably additionally include the adaptation properties of a person's vision system.
According to the invention, the determination of the visibility proportion and/or of variables for the determination of the visibility of objects can take place by way of a simulation of the vehicle occupant's visual system. In this case, the light conditions and/or the history of the light conditions can be taken into account to which an occupant of the vehicle is exposed and/or was exposed in the recent past. In a preferred variant, the psycho-visual contrast relationship is determined based on a simulation of the visual system of, for example, a statistical (average) driver of the vehicle. In this case, a number of influencing variables or events that act upon the driver can be taken into account. These include, for example, glare caused by another traffic participant, the influence of the interior light at night, the reflective effect of the windshield in front of the driver's eyes, and the like.
From the total number of first objects, a subset of first objects is advantageously selected which, or whose positions, are displayed on the display device. Preferably, a subset of first objects is selected that provides the user with information concerning the position of the at least one second object. In an embodiment of the method according to the invention, the selection of the first objects to be represented depends on whether at least one second object in the same area of the surroundings was selected to be displayed. In particular, only those first, visible objects are represented on the display which surround the displayed second, invisible objects from at least two or three spatial sides. The first objects thereby form a frame that surrounds the second objects shown in the representation. The user of the vehicle thereby obtains additional assistance for perceiving the position of the second, invisible objects, specifically without an unnecessary constant representation of many objects that are visible to the driver anyhow. For example, the appearance of an invisible object in the surroundings of the vehicle and in the representation according to the invention has the result that several objects which are visible per se and had previously not been displayed are now represented together with the invisible object.
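One possible sketch of this framing selection, under simplifying assumptions (objects reduced to 2-D positions in vehicle coordinates, a hypothetical `radius` standing in for "the same area of the surroundings", and a crude assignment of each visible object to one spatial side):

```python
def select_framing_objects(first_objects, second_objects, radius=15.0):
    """For each displayed second (invisible) object, select those first
    (visible) objects in its neighborhood that surround it from at least
    two spatial sides, so that the selected objects form a frame around
    it in the representation. Positions are (x, y) tuples; x is lateral,
    y is the distance ahead of the vehicle (assumed convention)."""
    selected = set()
    for sx, sy in second_objects:
        # Group nearby first objects by the side on which they lie.
        sides = {"left": [], "right": [], "front": [], "rear": []}
        for i, (fx, fy) in enumerate(first_objects):
            if max(abs(fx - sx), abs(fy - sy)) > radius:
                continue  # not in the same area of the surroundings
            if abs(fx - sx) >= abs(fy - sy):
                side = "left" if fx < sx else "right"
            else:
                side = "front" if fy > sy else "rear"
            sides[side].append(i)
        occupied = [ids for ids in sides.values() if ids]
        if len(occupied) >= 2:  # a frame needs at least two sides
            for ids in occupied:
                selected.update(ids)
    return [first_objects[i] for i in sorted(selected)]
```

Visible objects that do not contribute to framing any displayed invisible object are omitted, matching the idea of avoiding a constant representation of objects the driver can see anyhow.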
In another embodiment of the invention, parts that are visible per se of an object represented in a determined position on the display are not represented, and/or are suppressed in the representation on the display, when the visibility proportion of the object falls below the second threshold. In this manner, at least those parts of the corresponding second objects that are visible per se to the vehicle occupant are not displayed.
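The suppression described above amounts to a simple threshold test; in this sketch the threshold value and the per-part `visible_per_se` flag are assumptions, since the invention leaves both open:

```python
SECOND_THRESHOLD = 0.3  # assumed value; the invention does not fix it

def parts_to_display(parts, visibility_proportion):
    """Given an object's parts, each tagged as visible per se or not,
    suppress the visible-per-se parts once the object's overall
    visibility proportion falls below the second threshold, so that
    only the parts invisible to the occupant are rendered."""
    if visibility_proportion < SECOND_THRESHOLD:
        return [p for p in parts if not p["visible_per_se"]]
    return parts
```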
In a further embodiment of the method according to the invention, a virtual road level comprising the road boundaries and/or road markings recognized by at least one object recognition device is generated on the display. A representation is thereby generated by which the driver can very rapidly detect the position of recognized objects with respect to the road, because the objects are represented in the correct positions relative to the virtual road level. The representation of the virtual road level, which comprises the recognized road markings and/or road boundaries, furthermore provides the driver with a very good orientation aid concerning the displayed recognized objects. In particular, the position of an object relative to the road markings or road boundaries can be represented on a relatively small area of the display of the display device, such as a head-up display, in which case the driver can nevertheless precisely estimate at which point of the road, for example, an obstacle is situated. A costly expansion of the head-up display to large parts of the windshield or the use of a so-called contact-analog head-up display is therefore not necessary. In particular, the driver very easily recognizes in which lane, as well as in which position within the lane, a corresponding visible or invisible object is located.
In a further embodiment of the method according to the invention, a selection of the first and second objects displayed on the display takes place as a function of the odometric data of the vehicle and/or of an automatically analyzed traffic situation. For example, the length of a visible road section represented on the display, or the width of the road section, such as the number of represented lanes, may depend on the odometric data of the vehicle and/or on the automatically recognized traffic situation.
Depending on the automatically analyzed traffic situation (for example, parking and maneuvering, a speed-restricted zone, expressway driving), a respectively relevant area is represented on the display. The area to be represented can especially depend on the estimated position or range of the vehicle in the next few seconds. This reflects the consideration that certain areas of the road, for example those directly adjacent to the vehicle, are not relevant in many traffic situations, whereas areas situated farther away are more relevant. At maneuvering speed, the road section of a few meters directly adjoining the vehicle will be relevant, whereas, at expressway speed, the representation starts only after 20 meters and includes the next 30 meters. The road section represented on the display may also be selected as a function of the data of a navigation system, for example, as a function of the further course of the road and/or the set destination.
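The speed-dependent choice of the represented road section can be sketched as follows; the two regimes use the figures from the text, while the speed thresholds and the linear blend in between are assumptions of this sketch:

```python
def relevant_road_section(speed_kmh):
    """Return (start, end) in meters ahead of the vehicle for the road
    section to represent: at maneuvering speed, a few meters directly
    adjoining the vehicle; at expressway speed, starting only after
    20 m and covering the next 30 m (i.e., 20 m to 50 m)."""
    if speed_kmh <= 15.0:          # assumed maneuvering regime
        return (0.0, 5.0)
    if speed_kmh >= 100.0:         # assumed expressway regime
        return (20.0, 50.0)
    # Assumed linear interpolation between the two regimes.
    t = (speed_kmh - 15.0) / 85.0
    return (20.0 * t, 5.0 + 45.0 * t)
```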
In a further embodiment of the method according to the invention, the respective positions of the first and second objects for the display are determined by way of depth information concerning the real positions of the first and second objects relative to the vehicle. The depth information is determined by use of at least one object recognition device. In this manner, a simple trigonometric relationship is obtained which can be evaluated in a resource-saving manner. In this case, the distance to the corresponding object can be determined by methods known per se. The implementation of the method by means of the above-mentioned MIDIAS camera system is particularly advantageous, wherein the depth information of the TOF sensor for the concerned solid angle is used for determining the object distance.
In a further embodiment of the invention, for generating a depth effect, the display has several display levels with different positions as perceived by the user, particularly the driver, of the vehicle with respect to a predetermined viewing direction. Preferably, parts of the representation, particularly different objects, can be represented on different display levels, particularly on display levels that are perceived as being situated at different distances from the user and/or as being sloped. A completely 3D-capable display device is therefore not necessary in order to represent to the user, intuitively and clearly, for example, different slopes of a virtual road level with respect to the horizon, or to show that one represented object is farther away from the vehicle than another. The invention can therefore be implemented, for example, by way of a head-up display or another display device which permits, for example, only 3 to 12 different depth levels.
In a further variant of the method according to the invention, a selection of the displayed first and/or second objects takes place based on a criterion according to which objects that have an increased risk of colliding with the vehicle are preferred as objects to be displayed. Thus, the objects to be represented on the display, particularly the invisible objects, can be selected as a function of their traffic relevance, particularly of their collision probability with the vehicle. In this case, the collision probability can be determined by methods known per se for the determination of possible collisions.
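A sketch of this preference criterion, assuming each recognized object already carries a collision probability computed by some method known per se (trajectory prediction, for instance); the field name and the display limit are hypothetical:

```python
def select_by_collision_risk(objects, max_displayed=4):
    """Prefer the objects with the highest collision probability as
    the objects to be displayed. Each object is a dict carrying a
    precomputed 'collision_probability'; how that probability is
    obtained lies outside this sketch."""
    ranked = sorted(objects,
                    key=lambda o: o["collision_probability"],
                    reverse=True)
    return ranked[:max_displayed]
```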
In a further development of the method according to the invention, the first and second objects represented on the display are connected by way of additional graphic elements. These artificially generated graphic elements particularly serve to represent the spatial relationship between visible and invisible objects. The graphic elements may particularly be designed as auxiliary lines, raster graphics, dots, arrows, marked surfaces or distance indications.
Advantageously, in addition to the representation of an object, particularly of a second object, designed, for example, as a symbol or as a highlighted object, an enlarged representation of this object can be shown, for example, in another area of the display device or on a different display device.
In this manner, a relatively large view of the object as well as the position of the object can be represented simultaneously on a display surface that is relatively small as a whole, or within a small viewing angle. In a preferred further development of the method, the enlarged object view can be represented in a different perspective than the virtual road level shown in an embodiment of the invention, for example, in the untransformed perspective and/or with a changed appearance, for example, with highlighted edges and contours.
Advantageously, the method can be further developed such that additional graphic elements are displayed which indicate the relationship of the enlarged representation of the object to the position of the object on the display. The user is thereby informed of the association of the enlarged representation with the object representation or the representation of the position of the object. The graphic elements can connect the representation of the object, designed, for example, as a symbolic representation, with the enlarged representation, or indicate the relationship between the two representations by the direction of a line essentially oriented from one representation toward the other.
Advantageously, the part of the surroundings of the vehicle represented by the display device is a function of the presence of at least one second object, particularly of at least one traffic-relevant second object. Thus, the greater part of the available surface of the display can be utilized for representing a part of the surroundings in which a traffic-relevant object is recognized that is invisible or not directly visible to the occupant. This can contribute to a better concentration of the user on the second object and its position as well as to an improved utilization of the display surface.
In an embodiment of the invention, the display device is part of an external device outside the vehicle, for example, of a mobile telephone, in which case information is transmitted from the vehicle to the external device, particularly in a wireless manner. In this case, the user can preferably select, by way of the external device, from among at least two positions, the position of an occupant from whose perspective the surroundings of the vehicle are to be represented.
In addition to the above-described method, the invention also relates to a display device for representing objects of varying visibility surrounding the vehicle to a user from the perspective of an occupant, particularly of the driver, of the vehicle on the display of a display device, in which case the surroundings of the vehicle are at least partially recognized automatically by means of one or more object recognition devices. The display device includes a means by which, for objects that were recognized by the object recognition device or devices, it can be determined based on one or more criteria whether the respective object is a first object, which is classified as visible to the occupant, or a second object, which is classified as invisible to the occupant. The display device also includes a means by which, for a number of recognized objects comprising at least one first object and at least one second object, the respective positions of the objects can be determined for the display, in the case of which the geometrical relationships between the number of objects correspond essentially to the real geometrical relationships from the perspective of the occupant of the vehicle. The display device is further developed such that the number of objects are displayed in the determined positions on the display. By way of this display device, preferably each variant of the above-described method according to the invention can be implemented. The display device can be installed or integrated in one or more control devices or units of the vehicle.
In addition to the above-described display device, the invention also relates to a vehicle comprising such a display device.
Other objects, advantages and novel features of the present invention will become apparent from the following detailed description of one or more preferred embodiments when considered in conjunction with the accompanying drawings.