This section is intended to introduce the reader to various aspects of the art, which may be related to various aspects of the present disclosure that are described and/or claimed below. This discussion is believed to be helpful in providing the reader with background information to facilitate a better understanding of the various aspects of the present disclosure. Accordingly, it should be understood that these statements are to be read in this light, and not as admissions of prior art.
Nowadays, it is possible to control devices through a control user interface displayed by a wearable computer with a head-mounted display. For example, such a possibility is depicted in the document US20130069985, especially in FIGS. 9 and 10, which explain how to open or close a garage door by interacting with a virtual control interface according to a detected movement (see §0048 and §104 of the document US20130069985). The detected input can be a head motion, a voice command or any gesture that corresponds to a control instruction associated with the control interface.
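The association between a detected input and a control instruction described above can be sketched as a simple lookup, purely as an illustration (the device name, input labels and instructions below are hypothetical, not taken from the cited document):

```python
# Illustrative sketch: associating detected inputs (head motion, voice
# command, gesture) with control instructions for a target device.
# All names below are hypothetical assumptions for illustration only.

CONTROL_MAP = {
    ("garage_door", "head_nod"):   "open",
    ("garage_door", "say_close"):  "close",
}

def control_instruction(device: str, detected_input: str):
    """Translate a detected input into a control instruction, or None."""
    return CONTROL_MAP.get((device, detected_input))

control_instruction("garage_door", "head_nod")   # -> "open"
```

An unrecognized input simply maps to no instruction, so the target device is left unchanged.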
Such a validation process is just one way among others to interact with and control a target device. Indeed, other techniques are disclosed in some documents of the state of the art. More precisely, the technique of the document US20030020707 consists in superimposing a view of a virtual object (displayed through a head-mounted see-through display, also named an HMD) associated with an action (for example a “read” or “stop” command) on a real object (the target device), in order to generate a control instruction for the real object (or target device).
The technique of the document GB2465280 is based on the detection of a user's finger in order to identify an object of interest to be controlled. Such a technique is similar to the one disclosed in the document US20130069985.
The document US20120092300 also discloses a head-mounted see-through display device that can display a virtual keyboard comprising virtual keys. Here again, when a camera comprised within the head-mounted see-through display device detects that a user's finger is positioned on a virtual key, it activates the action associated with that virtual key, and the target device performs that action.
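The virtual-key detection described above can be sketched as a fingertip-over-rectangle test, purely as an illustrative assumption (the cited document provides no code, and the class and function names below are hypothetical):

```python
# Illustrative sketch: deciding which virtual key, if any, a tracked
# fingertip overlaps in display coordinates. Names are hypothetical.

from dataclasses import dataclass

@dataclass
class VirtualKey:
    label: str     # action associated with the key
    x: float       # top-left corner in display coordinates
    y: float
    w: float       # width and height of the key's rectangle
    h: float

    def contains(self, fx: float, fy: float) -> bool:
        # True when the fingertip position falls inside the key's rectangle
        return (self.x <= fx <= self.x + self.w
                and self.y <= fy <= self.y + self.h)

def key_under_finger(keys, fx, fy):
    """Return the label of the virtual key under the fingertip, or None."""
    for key in keys:
        if key.contains(fx, fy):
            return key.label
    return None

keyboard = [VirtualKey("play", 0, 0, 50, 30),
            VirtualKey("stop", 60, 0, 50, 30)]
key_under_finger(keyboard, 70, 10)   # -> "stop"
```

In a real system, the fingertip coordinates would come from the camera's hand-tracking pipeline rather than being supplied directly.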
However, all these techniques have a common drawback. Indeed, in case of a sudden movement (let's say a bee flies in front of you, and you want to make it leave with a sudden movement of your hand), you could validate a control operation associated with such a movement on a displayed interface without intending to do so. The present disclosure aims to solve this issue.
In order to solve this issue, one skilled in the art, starting from the teachings of the document US20130069985, considered as the closest prior art, and trying to prevent the occurrence of an undesired validation, would have used a double-validation technique (e.g. a kind of double-click) consisting in performing the same movement (or gesture), or the same voice command, at least twice.
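The double-validation idea above can be sketched as a filter that confirms a command only when the same gesture is detected twice within a short time window, analogous to a double-click. This is a minimal sketch under assumed design choices (the class name, the one-second window and the string gesture labels are hypothetical):

```python
# Illustrative sketch: a "double validation" filter that confirms a command
# only when the same gesture recurs within a time window. Hypothetical design.

class DoubleValidator:
    def __init__(self, window_s: float = 1.0):
        self.window_s = window_s      # max delay between the two gestures
        self.last_gesture = None
        self.last_time = None

    def on_gesture(self, gesture: str, t: float):
        """Return the gesture if confirmed by a second occurrence, else None."""
        if (self.last_gesture == gesture and self.last_time is not None
                and t - self.last_time <= self.window_s):
            self.last_gesture = None  # reset after a confirmed command
            self.last_time = None
            return gesture
        self.last_gesture = gesture   # first (or stale) occurrence: remember it
        self.last_time = t
        return None

v = DoubleValidator(window_s=1.0)
v.on_gesture("swipe", 0.0)   # first occurrence: not confirmed -> None
v.on_gesture("swipe", 0.5)   # repeated within the window: confirmed -> "swipe"
```

A single accidental movement, such as the sudden hand movement mentioned above, would therefore be discarded rather than validated as a control operation.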