Motion-capturing wearable devices, such as smart watches, activity trackers, and smart glasses, have been released in increasing numbers. A user can interface with such devices by various methods, including physical buttons, touch (virtual) buttons, soft keys, a touchscreen, a touchpad, image sensors, or motion-capturing sensors. Furthermore, some devices may be equipped to perform gesture recognition as a way of interacting with the devices. A gesture, as used in this disclosure, generally refers to a series of movements in time that can be captured by the device using various sensors. For example, the gesture may be performed using an object such as a stylus, a finger, a hand, a wand, or any other suitable object. A recognized gesture can cause a device to perform certain action(s) or no action, or can represent input information to the device.

In general, a gesture recognition system performs gesture recognition based on the raw data obtained from a device's sensor(s) (e.g., motion sensors). The raw data refers to data obtained from any sensor(s) of the device that has not been subjected to substantial processing or other manipulation related to gesture recognition, and may also be referred to as primary data or sensor data. Gesture recognition algorithms that operate on raw data typically need to recognize gestures with a high degree of accuracy in order to provide natural input from the user's perspective. Existing gesture recognition techniques may apply simple machine learning to the raw data, for example by running well-known techniques such as Hidden Markov Models (HMMs) on the entire raw input sequence of a gesture.
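To illustrate the HMM-on-raw-data approach described above, the sketch below scores a quantized motion sequence against one discrete HMM per gesture using the scaled forward algorithm, and picks the gesture with the highest likelihood. The gesture names, the two-state model parameters, and the three-symbol quantization of sensor readings (0 = low, 1 = medium, 2 = high motion intensity) are all illustrative assumptions, not details from the disclosure.

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the scaled forward algorithm.
    pi: initial state probabilities, A: state-transition matrix,
    B: emission probabilities (rows = states, cols = symbols)."""
    alpha = pi * B[:, obs[0]]              # forward variables at t = 0
    loglik = 0.0
    for t, o in enumerate(obs):
        if t > 0:
            alpha = (alpha @ A) * B[:, o]  # propagate, then weight by emission
        c = alpha.sum()                    # scaling factor avoids underflow
        loglik += np.log(c)
        alpha = alpha / c
    return loglik

# Hypothetical per-gesture models over 3 quantized motion levels
# (0 = low, 1 = medium, 2 = high acceleration magnitude).
MODELS = {
    "swipe": (np.array([1.0, 0.0]),
              np.array([[0.7, 0.3], [0.0, 1.0]]),
              np.array([[0.1, 0.2, 0.7], [0.7, 0.2, 0.1]])),
    "circle": (np.array([1.0, 0.0]),
               np.array([[0.5, 0.5], [0.5, 0.5]]),
               np.array([[0.2, 0.6, 0.2], [0.2, 0.6, 0.2]])),
}

def classify(obs, models):
    """Pick the gesture whose HMM gives the sequence the highest likelihood."""
    return max(models, key=lambda g: forward_loglik(obs, *models[g]))
```

For example, a sequence that starts with high-intensity motion and decays, such as `[2, 2, 1, 0, 0]`, scores higher under the "swipe" model, while a steady medium-intensity sequence like `[1, 1, 1, 1, 1]` scores higher under "circle". A production system would instead learn the model parameters from recorded gesture data (e.g., with Baum-Welch training) rather than hand-specifying them as done here.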