The present invention relates to the field of gestural interactions. Gestural interactions are based on interpreting the gestures performed by a user on a touch-sensitive surface or in a graphical interface. The surface or the graphical interface can be that of a mobile tablet, a PDA (Personal Digital Assistant), a mobile terminal, a micro-computer, etc.
On a surface, a gesture can be performed with a finger or a stylus. In a graphical interface, a gesture can be performed by way of a pointing peripheral such as a mouse or a touchpad.
A gesture corresponds to a continuous physical action. A gesture begins, for example, when the user touches a surface with a finger, continues when the user slides his finger over the surface and finishes when the user lifts the finger. For a pointing peripheral such as a mouse, the gesture begins when the user presses a button on the mouse and finishes when this button is released.
A gesture is defined by a trajectory parametrized over time and in Cartesian space (abscissa, ordinate) or possibly in a larger space including other parameters such as for example the pressure or the angle of the gesture.
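As an illustration, such a time-parametrized trajectory might be represented as follows. This is a minimal Python sketch; the class and field names are assumptions chosen for illustration, not part of the invention:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class GesturePoint:
    """One sample of a gesture trajectory, parametrized over time."""
    t: float                          # timestamp in seconds
    x: float                          # abscissa
    y: float                          # ordinate
    pressure: Optional[float] = None  # optional extra parameter
    angle: Optional[float] = None     # optional extra parameter

@dataclass
class Gesture:
    """A continuous trajectory from press/touch-down to release."""
    points: List[GesturePoint] = field(default_factory=list)

    def add_sample(self, point: GesturePoint) -> None:
        """Append one sample while the gesture is in progress."""
        self.points.append(point)

    def duration(self) -> float:
        """Elapsed time between the first and last samples."""
        if len(self.points) < 2:
            return 0.0
        return self.points[-1].t - self.points[0].t
```

A system interpreting the gesture synchronously would act on each sample as it arrives, whereas an asynchronous interpretation would examine the complete list of points only once the gesture has ended.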
Two categories of gestures are distinguished:
- asynchronous gestures,
- synchronous gestures.
In the case of asynchronous gestures, the trajectory of the gesture is interpreted globally, as a graphical shape, and determines a command to be executed. This command usually corresponds to a keyboard shortcut, that is to say to a combination of keys which, once pressed, triggers a function. The command can also correspond to the execution of a program.
For example, in certain applications using the Windows (trademark) operating system, it suffices to draw an upward stroke to copy, a downward stroke to paste, a U from left to right to undo an action, a U from right to left to redo an action, a W on the desktop to run Word (trademark), an N to create a new document, an S to save, etc.
The graphical shapes thus recognized and associated with commands belong to a finite set. These shapes are generally complex and often inspired by handwritten characters.
The execution of the command associated with the shape of the gesture is performed only at the end of the gesture, once the shape has been recognized, hence the concept of asynchronous gesture.
A synchronous gesture corresponds to a direct action on objects situated on a surface or in a graphical interface. Action is understood to mean, for example, a selection, a displacement such as a “drag and drop”, a modification of appearance such as a zoom, or indeed the drawing of an object. By way of examples, the term object denotes an icon, a menu, a file, a Web page, a multimedia content, etc. This type of gesture generally involves a pointing peripheral.
In the case of the displacement of an object such as a “drag and drop”, the action begins with the selecting of the object by pressing a button on the pointing peripheral. Then, keeping the button of the pointing peripheral pressed, the action continues with the displacing of the object on the surface or in the graphical interface. The action terminates when the button of the pointing peripheral is released.
Usually, only the instantaneous position of the pointing peripheral is interpreted, in particular the start and the end of the gesture. The global shape of the trajectory is not recognized.
The action on the object is immediately visible, hence the concept of synchronous gesture.
It is rare to have applications which implement gestural interactions based both on asynchronous gestures and on synchronous gestures.
Nevertheless, when such applications exist, the user must, for one and the same surface or graphical interface, explicitly mark each change of gesture category, which complicates his task. Usually, this requires holding a button down on the surface or on the pointing peripheral.
In patent application US2006055685, the authors attempt to solve this problem by proposing a particular set of asynchronous gestures, that is to say gestures corresponding to commands to be executed. These gestures are designated by the expression “flick gestures” in the document.
From among the asynchronous gestures, this solution retains only gestures which are simple and easy to recognize in a graphical interface driven by a pointing peripheral. Specifically, instead of having a complex specific shape associated with a command, each gesture of the set of “flick gestures” is characterized by a rectilinear shape and a high execution speed. Thus, the recognition of such a gesture, and its distinction from a synchronous gesture, are based mainly on a speed criterion. Additionally, it is no longer the shape of the gesture which determines the associated command but its direction, the trajectory being rectilinear. There are as many commands as there are directions defined by the system.
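The speed and direction criteria described above might be sketched as follows. This is an illustrative Python sketch under assumed thresholds and an assumed direction-to-command mapping (loosely inspired by the shortcuts cited earlier); none of these values are taken from patent application US2006055685:

```python
import math

# Hypothetical thresholds for illustration; real values depend on the device.
FLICK_MIN_SPEED = 500.0      # pixels per second
FLICK_MAX_DEVIATION = 0.15   # tolerated excess of path length over chord length

# Hypothetical mapping from the four cardinal directions to commands.
DIRECTION_COMMANDS = {"up": "copy", "down": "paste", "left": "undo", "right": "redo"}

def classify_flick(points):
    """Return the command for a flick gesture, or None for a synchronous gesture.

    `points` is a list of (t, x, y) samples from press to release.
    """
    if len(points) < 2:
        return None
    (t0, x0, y0), (t1, x1, y1) = points[0], points[-1]
    duration = t1 - t0
    if duration <= 0:
        return None
    # Speed criterion: straight-line (chord) distance over duration.
    dx, dy = x1 - x0, y1 - y0
    chord = math.hypot(dx, dy)
    if chord / duration < FLICK_MIN_SPEED:
        return None  # too slow: treat as a synchronous gesture
    # Straightness criterion: path length must stay close to the chord length.
    path = sum(math.hypot(b[1] - a[1], b[2] - a[2])
               for a, b in zip(points, points[1:]))
    if path > chord * (1.0 + FLICK_MAX_DEVIATION):
        return None  # too curved to count as a rectilinear flick
    # Direction criterion: the dominant axis determines the command.
    if abs(dx) > abs(dy):
        return DIRECTION_COMMANDS["right" if dx > 0 else "left"]
    return DIRECTION_COMMANDS["up" if dy < 0 else "down"]  # screen y grows downward
```

Note that the command is determined solely by the end points and the straightness of the path: the global shape of the trajectory plays no role, which is precisely the limitation discussed next.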
This solution exhibits the drawback of limiting the number of possible asynchronous gestures and therefore the number of associated commands.
The use of the direction criterion to determine a command is rather impractical for a user on the move, because the orientation of a terminal held in the hand is imprecise.
The use of the direction criterion demands greater visual attention on the part of the user while a criterion based on shape is less constraining.
Consequently, despite the solution proposed in patent application US2006055685, the requirement still exists to be able to combine synchronous gestures and asynchronous gestures in one and the same application, without constraint for the user and while preserving the richness of representation of the asynchronous gestures.