The disclosure relates to a method for the target recognition of target objects, in particular for the target acquisition of operating elements in a vehicle.
A plurality of systems and methods are known from the prior art which make possible a contactless interaction of users with virtual elements via a graphical user interface, also designated in short as GUI. Such systems are used in technical areas in which contamination by direct contact with operating elements is undesired or in which operating elements lie out of reach of a user.
Thus, a control system for a vehicle is known from US 2014/0292665 A1 in which the control of operating elements is based on camera-based tracking of the viewing direction, also designated as “gaze tracking” or “eye tracking”, and on camera-based or sensor-based recognition of gestures of the user. In order to control the operating elements, the tracking of the viewing direction and the recognition of gestures can be combined with one another, in which case a processor unit or software recognizes an operating element visually fixated by the user. An activation of the operating element, or a release of further actions/functions of the operating element, does not take place until a gesture recognition module recognizes a correspondingly attributable hand movement or finger movement of the user. The target acquisition of operating elements is therefore based substantially on tracking the view of the user, without a geometric relationship to positional data of a performed gesture being required.
An improved target accuracy in the recognition of virtual elements may be achieved with the technical teaching disclosed in WO 2015/001547 A1. It suggests a method for the target recognition of virtual elements in which a spatial relationship between the viewing direction and the pointing direction of an indicating gesture of the user is taken into consideration. The pointing direction of a finger of the user is determined with a 3-D image recording device and compared with the viewing direction of the user. If the two directional vectors, viewing direction and pointing direction, are aligned within a tolerance range, an interaction with a virtual element, or the contactless movement of a cursor on a display device, can be realized by the user. However, determining the three-dimensional relationship between the viewing-direction vector and the directional vector of the indicating gesture requires relatively high computing power.
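Purely by way of illustration, and not as part of the cited document, the alignment check described above can be sketched as follows. The tolerance value and all vector values are arbitrary assumptions; the actual method of WO 2015/001547 A1 may differ.

```python
import math

def angle_between(u, v):
    # Angle in degrees between two 3-D direction vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    # Clamp to avoid domain errors from floating-point rounding.
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

def directions_aligned(gaze_dir, pointing_dir, tolerance_deg=10.0):
    # True if viewing direction and pointing direction agree
    # within the (assumed) angular tolerance range.
    return angle_between(gaze_dir, pointing_dir) <= tolerance_deg

# Small deviation between gaze and pointing direction: alignment accepted.
print(directions_aligned((0.0, 0.0, 1.0), (0.05, 0.0, 1.0)))  # True
# Perpendicular directions: alignment rejected.
print(directions_aligned((0.0, 0.0, 1.0), (1.0, 0.0, 0.0)))  # False
```

The comparison of the full three-dimensional angle between both vectors is what the document identifies as computationally demanding relative to purely gaze-based approaches.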
Another technical teaching that takes into account the combination of the viewing direction and the pointing direction, as well as their spatial relationship, in the target recognition of virtual elements is apparent from US 2014/184494 A1. It also suggests a solution for determining the intersections of the directional vectors of the viewing direction and of the pointing direction with a representation plane for virtual elements.
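The intersection of a directional vector with a representation plane, as referred to above, can be sketched in general terms as a standard ray-plane intersection. This is an illustrative geometric sketch only; all names and values are assumptions and not taken from US 2014/184494 A1.

```python
def ray_plane_intersection(origin, direction, plane_point, plane_normal):
    # Intersect a ray (e.g. a viewing or pointing direction emanating
    # from the user) with the representation plane of the virtual elements.
    denom = sum(d * n for d, n in zip(direction, plane_normal))
    if abs(denom) < 1e-9:
        return None  # ray runs parallel to the plane
    t = sum((p - o) * n
            for p, o, n in zip(plane_point, origin, plane_normal)) / denom
    if t < 0:
        return None  # plane lies behind the ray origin
    return tuple(o + t * d for o, d in zip(origin, direction))

# A ray from the origin along the z-axis meets a display plane at z = 2.
print(ray_plane_intersection((0.0, 0.0, 0.0), (0.0, 0.0, 1.0),
                             (0.0, 0.0, 2.0), (0.0, 0.0, 1.0)))  # (0.0, 0.0, 2.0)
```

Comparing the two intersection points obtained for the viewing direction and the pointing direction on the representation plane is one conceivable way to relate both directions to a displayed virtual element.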
The systems known from the prior art have the disadvantage that a successful target recognition of virtual elements must be preceded by a complex calibration in which several individual objects must be targeted by the user.