With the evolution of computerized environments, the use of human-machine interfaces (HMI) has dramatically increased. A growing need is identified for more natural user interface (NUI) methods such as, for example, voice and/or gaze interaction, and more specifically for hand gesture interaction, to replace and/or complement traditional HMI such as, for example, keyboards, pointing devices and/or touch interfaces. Doing so may serve to, for example, eliminate and/or reduce the need for intermediary devices (such as keyboards and/or pointing devices), support hands-free interaction, improve accessibility for population(s) with disabilities and/or provide a multimodal interaction environment.

Current solutions for identifying and/or recognizing hand gestures exist; however, they are mostly immature, presenting insufficient accuracy and/or high complexity while requiring high computation resources for extensive computer vision processing and/or machine learning. Such technologies may rely on full hand skeleton articulation and/or complex machine learning algorithms for detection and/or classification of hand gestures, which may make such implementations costly and unattractive for integration, preventing them from being adopted for wide scale usage.