It is known to equip vehicles with radar systems and/or camera systems in order to characterize the environment surrounding a vehicle. Such systems are able to detect objects in the vicinity of the vehicle, in particular in a forward-looking direction. Processing of image data and/or radar reflection data thus allows objects in the vicinity of the vehicle to be detected and characterized. Such object identification or detection can be used to detect static objects such as guardrails, walls, trees, boundaries, posts or stationary vehicles, for example, or moving objects such as other vehicles or pedestrians. This data can be processed and used to provide e.g. a boundary (line) or contour line(s) beyond which vehicle movement is to be prohibited in order to prevent collision.
In such Advanced Driver Assistance Systems (ADAS), both cameras mounted on the vehicle and/or antenna arrays may be used to detect, identify and characterize such objects. Typical vehicles with ADAS are equipped with an antenna unit/array and a receiver unit adapted to detect radar reflections (returns) reflected from objects. These radar reflections are also referred to as detections. In this way, the surrounding environment may be characterized and objects detected. Alternatively or additionally, cameras may be used to capture images of the environment, and stationary or moving objects may be identified from these using known image-processing techniques. It is often necessary to also distinguish between different types of objects (such as other vehicles, pedestrians or other objects), and also whether these are moving or stationary.
The data which is processed to determine objects in the vehicle vicinity may be derived from both radar and camera data. Such processing is often referred to as multi-sensor fusion of the data provided by the camera(s) and radar(s).
Objects may be classified into classes or groups such as for example vehicles, cyclists and pedestrians. Such objects are regarded as single entities referred to as single objects (SOs).
A problem is sometimes the sheer number of objects that need to be processed. This introduces a bottleneck: a large number of such objects must be processed in a usually limited time, in hardware with limited computational resources. A further problem is that only a limited amount of data can be sent over data buses between two or more computational units in a single system.
The problem becomes especially apparent in the urban environment, in which the number of SOs (pedestrians and other vehicles) is usually much larger than, for example, on highways. The number of such objects is also usually higher than can be handled by current ADAS systems, due to hardware limitations. Furthermore, changes in the parameters of such objects (such as speed and direction) are much more unpredictable and dynamic in the urban environment. For example, pedestrians can suddenly change their motion, i.e. their direction and speed. From a set of SOs, algorithms must quickly select the most important objects, usually based on safety-related criteria, and focus on them in detail. This task is usually not trivial, taking into account, for example, delays introduced by the filters that are used to enhance the parameters of the objects. One common problem is a possible loss of confidence due to the overlapping or merging of two or more SOs, which is common in the urban environment in which objects are located close to each other. Additionally, each SO needs to be characterized by a set of parameters (including positions and velocities) that have to be kept in memory.
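The selection step described above can be sketched, purely for illustration, as ranking SOs by a simple safety-related criterion such as longitudinal time-to-collision and keeping only as many objects as the hardware budget allows. The class name, coordinate convention and criterion below are assumptions for the sketch, not part of any particular ADAS implementation:

```python
from dataclasses import dataclass

@dataclass
class SingleObject:
    # Position (m) and velocity (m/s) relative to the host vehicle;
    # x is the longitudinal distance ahead, negative vx means closing in.
    x: float
    y: float
    vx: float
    vy: float

def time_to_collision(obj: SingleObject) -> float:
    """Rough longitudinal time-to-collision; infinite for receding objects."""
    if obj.vx >= 0.0:  # object is not closing in on the host vehicle
        return float("inf")
    return obj.x / -obj.vx

def select_most_important(objects: list, budget: int) -> list:
    """Keep only the `budget` objects with the smallest time-to-collision."""
    return sorted(objects, key=time_to_collision)[:budget]
```

For example, an object 20 m ahead closing at 10 m/s (TTC = 2 s) would be retained in preference to one 50 m ahead closing at the same rate (TTC = 5 s); receding objects rank last. Real systems would combine several such criteria and account for filter delays, but the principle of pruning the SO set to fit computational and bus-bandwidth limits is the same.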