Field of the Invention
The invention relates to a device and a method for detecting objects in the surroundings of a vehicle.
Description of the Background Art
For modern support functions and systems in motor vehicles, such as automatic parking systems for example, knowledge of the vehicle's environment is required. Automatic parking systems are capable of assisting a driver in parking the vehicle or of carrying out the entire parking process without driver intervention. In order to execute such a fully automatic process, the vehicle must acquire measurement results concerning objects in the environment by means of contactless sensors and thus be able to reliably identify which regions are occupied by objects and are not available for a parking process, and which regions or areas are clear of objects and can be used for carrying out the parking process. It is customary to mark regions or points in the vehicle's environment where the presence of an object is detected using the measurement results as occupied in a map of the environment. The occupancy information can be specified, for example, in the form of a probability that the region of the environment corresponding to the map region is occupied by an object.
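Such an occupancy map can be sketched in a few lines of code. The following is only an illustrative sketch, not the method of the invention: the class name, grid layout, and the log-odds update rule are assumptions chosen because log-odds make repeated probabilistic updates simple additions.

```python
import numpy as np

class OccupancyGrid:
    """Illustrative occupancy map: each cell holds the probability
    that the corresponding region of the environment is occupied."""

    def __init__(self, width, height, p_prior=0.5):
        # Store occupancy as log-odds so fusing measurements is additive.
        self.logodds = np.full((height, width), self._to_logodds(p_prior))

    @staticmethod
    def _to_logodds(p):
        return np.log(p / (1.0 - p))

    def update(self, row, col, p_meas):
        # Fuse a measurement giving the probability the cell is occupied.
        self.logodds[row, col] += self._to_logodds(p_meas)

    def probability(self, row, col):
        # Convert log-odds back to a probability.
        return 1.0 / (1.0 + np.exp(-self.logodds[row, col]))

grid = OccupancyGrid(10, 10)
grid.update(3, 4, 0.9)  # an echo indicated an object in this cell
grid.update(3, 4, 0.9)  # a second consistent measurement
print(grid.probability(3, 4))  # rises well above the 0.5 prior
```

Two consistent detections push the cell's occupancy probability from the 0.5 prior toward certainty, while untouched cells keep the prior.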
Contactless measurement methods frequently operate using a pulse-echo measurement method, in which either a sound pulse or a pulse of electromagnetic radiation is transmitted and is then reflected at the surface of the objects present in the environment. These reflected echo pulses are sensed in a time-resolved manner by the measurement sensors. If electromagnetic radiation is used, depending on the frequency range one speaks either of a radar measurement (radar: radio detection and ranging) or a lidar measurement (lidar: light detection and ranging). With radar and lidar measurements, it is possible to transmit the electromagnetic radiation in a tightly delimited angular region. Using the transit time, which is to say the time between the transmission of the pulse and the reception of the echo pulse, the distance to the object that has reflected the transmitted pulse can be inferred when the propagation velocity of the transmitted pulse or of the echo pulse in the environment is known. In the case of radar and lidar measurements, the direction of emission is generally changed for successive measurements, and thus the environment is progressively scanned using the echo pulses emitted in different solid angle regions and received from these solid angle regions.
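The transit-time relation can be stated compactly: for a sensor that both transmits and receives, the pulse covers the distance twice, so the one-way distance is half the round-trip path. A minimal sketch (the function name and the room-temperature sound speed are assumptions for illustration):

```python
def echo_distance(transit_time_s, propagation_speed_m_s):
    """One-way distance to the reflecting object for a sensor that
    both transmits and receives: the pulse travels out and back, so
    the distance is half the round-trip path length."""
    return 0.5 * propagation_speed_m_s * transit_time_s

# Propagation speeds: light (radar/lidar) and, approximately, sound
# in air at room temperature (ultrasound).
C_LIGHT = 299_792_458.0
C_SOUND = 343.0

print(echo_distance(1e-7, C_LIGHT))  # a 100 ns radar echo: roughly 15 m
print(echo_distance(0.01, C_SOUND))  # a 10 ms ultrasonic echo: under 2 m
```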
When using sound sensors, which emit sound signals in the ultrasonic frequency range, sound is generally radiated in a relatively large angle region and is also received by the sensors from a relatively large angle region. Consequently, based on a measurement using the time period between the transmission of the pulse and the arrival of the first echo pulse, it is only possible to establish that an object is present on a section of an ellipse, where the ellipse is characterized in that the transmitter and the receiver are located at its foci. The size of the ellipse is determined by the transit time of the echo pulse. If a sensor that serves as both transmitter and receiver is used, the elliptic curve becomes a circular arc. In order to ascertain the position of the object more precisely, the evaluation of multiple measurements for different detection geometries is necessary. A detection geometry, also referred to as a measurement geometry or transmitter/receiver geometry, is defined by the location of the transmitter and the location of the receiver and, if applicable, an emission geometry or receiving geometry insofar as these are alterable, relative to a coordinate system that is coupled to the environment in a fixed manner. Different measurement geometries are thus obtained, for example, in that the transmit signal from one sensor is measured with receiving sensors located at various locations, or in that the transmitting sensor and/or the receiving sensor is positioned at a different place with respect to the stationary coordinate system.
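The geometry of this locus follows directly from the measurement: the total path transmitter → object → receiver equals the propagation speed times the transit time, and that constant path-length sum is precisely the defining property of an ellipse with the transmitter and receiver at its foci. A small sketch under those assumptions (helper name and sensor positions are illustrative, not from the text):

```python
import math

def ellipse_axes(transmitter, receiver, transit_time_s, speed_m_s):
    """Semi-axes of the ellipse on which the reflecting object must lie:
    the path transmitter -> object -> receiver has the constant length
    speed * transit_time, and the two sensors sit at the foci."""
    total_path = speed_m_s * transit_time_s
    focal_dist = math.dist(transmitter, receiver)  # distance between foci
    a = total_path / 2.0                           # semi-major axis
    c = focal_dist / 2.0                           # linear eccentricity
    b = math.sqrt(a * a - c * c)                   # semi-minor axis
    return a, b

# Bistatic case: transmitter and receiver 0.4 m apart on the bumper.
a, b = ellipse_axes((0.0, 0.0), (0.4, 0.0), 0.01, 343.0)

# Monostatic case: one sensor serves as transmitter and receiver, the
# foci coincide, and the ellipse degenerates to a circle (a == b).
ra, rb = ellipse_axes((0.0, 0.0), (0.0, 0.0), 0.01, 343.0)
print(a, b, ra, rb)
```

When the two foci coincide, the two semi-axes come out equal, reproducing the circular-arc case described above.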
The superposition of multiple different measurement results in the image space, which is to say the superposition, for the purpose of exact position determination, of the elliptical or circular-arc-shaped curves on at least one point of which a localized object must lie according to the relevant measurements, is called lateration. The superposition of three measurement results is necessary in order to uniquely localize a point emitter, which is to say an object that is not spatially extended. Consequently, position determination is also called trilateration. Since objects in real environments of vehicles are generally extended objects, correct sensing of the environment is resource-intensive and difficult, and could be significantly improved if it were possible to determine, using a small number of measurement results, what geometry an object in the environment has or to which class of objects, defined by geometric shape, it should be assigned.
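For the monostatic case, trilateration reduces to intersecting three circles. A standard textbook approach (not the invention's method; the function name and sensor layout are illustrative) subtracts the circle equations pairwise, which cancels the quadratic terms and leaves two linear equations in the unknown position:

```python
import math

def trilaterate(p1, r1, p2, r2, p3, r3):
    """2D trilateration: intersect three circles given by sensor
    positions p1..p3 and measured ranges r1..r3. Subtracting the
    circle equations pairwise yields two linear equations in (x, y)."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Coefficients of the linear system A @ [x, y] = b
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21  # nonzero if sensors are not collinear
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    return x, y

# Three sensor positions and the ranges each would measure to a
# point-like object at (1.0, 2.0):
obj = (1.0, 2.0)
sensors = [(0.0, 0.0), (2.0, 0.0), (0.0, 3.0)]
ranges = [math.dist(s, obj) for s in sensors]
print(trilaterate(sensors[0], ranges[0], sensors[1], ranges[1],
                  sensors[2], ranges[2]))
```

With exact ranges the recovered position matches the true one; with real, noisy measurements and extended objects the circles no longer meet in a single point, which is exactly the difficulty the passage above describes.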
A method for classifying objects is known from EP 1 557 694 A1. The method described there classifies objects that represent items in the detection area of a sensor for electromagnetic radiation, in particular a laser scanner, on the basis of at least one distance image of the detection area. Each distance image point is obtained by transmitting a pulse of electromagnetic radiation and detecting a pulse returned as an echo pulse from a point or area on an object, together with at least one echo pulse characteristic dependent on the energy of the echo pulse, and each point is assigned at least one value for a parameter of this echo pulse characteristic. Distance image points of the distance image are associated with at least one object, and the object is assigned an object class as a function of at least one of the parameter values for the echo pulse characteristic. In this way, pedestrians, passenger cars, trucks, and the like, for example, are to be distinguished from one another.
A method for classifying objects as obstacles and non-obstacles for vehicles is specified by WO 2010/121582 A1, which corresponds to U.S. Pat. No. 8,731,816. The vehicle includes an environment sensor that senses stationary and moving objects in a scene in front of the vehicle and, if applicable, tracks a path of movement of the objects. The method provides one or more observers, wherein an observer classifies an object according to predefined features, and in the case of multiple observers contributes to an overall classification result. An observer senses the path of movement of vehicles in an environment of at least one stationary object and classifies the stationary object as a function thereof. In this way, for example, vehicles on the roadway are to be distinguished from sign gantries that can be driven beneath and the like.
Known from US 2010/0097200 A1 are a method and a device for identifying and classifying objects. Here, electromagnetic radiation is transmitted by a sensor and the components reflected at an object are received by the sensor. The received signals are analyzed by comparison with stored characteristic values, and classification of the objects is performed on the basis of the analysis. To this end, an analyzer is provided with a memory in which are stored characteristic patterns that are compared with the received signals in order to classify the reflecting objects on this basis. In this way, for example, passenger cars, trucks, traffic signs, or discarded cans are to be classified with regard to type.
Known from DE 10 2008 041 679 A1 are a method and a device for memory-based environment recognition. What is described is environment recognition for a moving system using at least one sensor attached to the moving system: at least one object or feature in the environment of the system is recognized at a first point in time by an imaging method, and data of this at least one object or feature are stored in a memory. After at least one possible sighting of this at least one object or feature at an at least second point in time, a classification of this object or feature takes place with the aid of a comparison with the data stored in the memory. In this way, objects in environments that are traveled frequently, for example, are to be classified.
A method for object classification using a 3D model database is known from DE 103 35 601 A1. What is described is a method for computer-aided classification of three-dimensional objects into one or more object classes, in which an object is sensed by a measurement device. To make the method more efficient and reliable, it relies on the measured data being 3D data: a sensed point cloud of 3D measurement data is compared with stored 3D model data (the archetype of the relevant object class), the model is matched to the measured point cloud by variation of its 3D location in space, and classification into the best-fitting class then takes place. In this way, for example, pedestrians, bicyclists, passenger cars, and trucks are to be classified with regard to their obstacle or object type.