Many services are offered today on devices such as smartphones, smart watches and smart glasses, using inertial sensors (accelerometer, gyroscope, magnetometer, etc.) capable of measuring various types of movement. The inertial data from these sensors are used, for example, to determine whether a user has performed a gesture with the device. Such a gesture can be performed in 2D, for example by drawing or writing on a screen of the device, or in 3D, by means of a gesture made in the air by the user carrying the device.
In general, these gestures are recognized by comparing the characteristics from physical measurements collected by the inertial sensors with those from predefined models.
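This comparison with predefined models can be sketched, purely for illustration, as nearest-neighbor matching between a feature vector extracted from the inertial measurements and stored gesture templates. The feature choices, model values and names below are assumptions, not the method actually claimed here:

```python
import math

# Hypothetical predefined models: one reference feature vector per
# gesture class, here [mean, peak] of accelerometer magnitude over a
# measurement window (illustrative values, not from the document).
MODELS = {
    "shake": [9.0, 25.0],
    "tap":   [1.5, 4.0],
    "still": [0.2, 0.5],
}

def extract_features(accel_magnitudes):
    """Reduce a window of acceleration magnitudes to a feature vector."""
    mean = sum(accel_magnitudes) / len(accel_magnitudes)
    peak = max(accel_magnitudes)
    return [mean, peak]

def recognize(accel_magnitudes):
    """Return the gesture model closest (Euclidean distance) to the
    features measured by the inertial sensors."""
    feats = extract_features(accel_magnitudes)
    return min(MODELS, key=lambda g: math.dist(feats, MODELS[g]))
```

For instance, a window of large, oscillating magnitudes such as `[8.0, 24.0, 10.0, 22.0]` would land nearest the "shake" template under this sketch.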
However, because the device is mobile, the user executes these gestures in different contexts and with different degrees of mobility. These varying mobility situations can impair the analysis of the measured inertial data and thereby the detection of the gestures performed with the device.
For example, the action of shaking a mobile phone is recognized more or less reliably depending on whether the user performing this gesture is at home, on a bus, in an elevator or in another environment. The analysis of the data may also differ depending on whether the user's mobility is low or high, for example when walking, running or jumping.
The methods of the prior art use data from sensors other than the inertial sensors in order to determine the context that the user is in. For example, in patent application US2009/0037849, temperature sensors, optical sensors, acoustic sensors and light sensors can be used to determine the context, or more exactly the situation, that the user is in, relative to predefined situations. Many other systems use GPS data to determine a location and therefore a prerecorded context (at work, at home, etc.).
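The GPS-based prior-art approach amounts to matching a position fix against prerecorded locations. A minimal sketch, with entirely hypothetical coordinates and threshold:

```python
import math

# Illustrative prerecorded locations (latitude, longitude in degrees).
# These places and coordinates are assumptions for the sketch only.
KNOWN_PLACES = {
    "at home": (48.8566, 2.3522),
    "at work": (48.8738, 2.2950),
}
RADIUS_DEG = 0.01  # rough proximity threshold (~1 km in latitude)

def context_from_gps(lat, lon):
    """Return the prerecorded context nearest the GPS fix, or
    'unknown' if no known place lies within the threshold."""
    best, best_d = "unknown", RADIUS_DEG
    for name, (plat, plon) in KNOWN_PLACES.items():
        d = math.dist((lat, lon), (plat, plon))
        if d < best_d:
            best, best_d = name, d
    return best
```

Note the limitation this illustrates: a fix outside every prerecorded place yields no context at all, which is one of the drawbacks discussed below.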
Finally, other systems use an interface asking the user to select his context or situation from a predefined list, before gesture detection is implemented on the device.
These systems require a multitude of sensors or measuring means that clutter up and complicate mobile devices.
They also require a priori knowledge of a predefined list of contexts or situations, which is not necessarily suited to every case.
The selection of a context from a plurality of contexts is not easy for the user, who sometimes has difficulty characterizing his own context (is he moving slowly or quickly?), especially since his situation can change over time.
There is therefore a need to detect a mobility context automatically and with sufficient precision, while reducing complexity and limiting the number of measurements required.