The rapid advances in smartphones and tablet devices have led to a corresponding expansion in the number and quality of gaming applications that can be played on portable devices. Now, the portable computing field is expanding into new classes of devices that allow for virtually hands-free participation. Such devices, worn for instance as glasses, rely upon precise measurement of head movements by miniaturized sensors within the devices, such as gyroscopes, accelerometers, and compasses. These head movements serve as commands within applications that run on such devices.
With the successful introduction of such promising hands-free devices, one problem facing the electronic gaming industry is how to port highly developed gaming applications, each with a broad customer base, to hands-free devices. Users of such gaming applications on conventional devices (e.g., desktop computers) are familiar with all the conventional hand-movement commands necessary to use and excel at such games. But hand-movement commands do not directly carry over to devices that provide for virtually hands-free participation. This forces the user to learn a whole new set of commands in the form of head movements.
Such hands-free devices also provide the opportunity to develop gaming applications for which there are no counterparts running on conventional smartphones and tablets.
Regardless of a game's origins, the rules for controlling it must be reduced to a set of head movements. This poses a challenge to gaming enthusiasts interested in hands-free devices. As a famous quote attributed to Albert Einstein puts it, "You have to learn the rules of the game. And then you have to play better than anyone else." Yet with a gaming application on a hands-free device, not only must the gaming enthusiast learn the "rules" of each gaming application, the user must also learn the precise head movement that corresponds to each rule or command. For example, if an affirmative yes/no-type head nod is assigned to one game command, the user must not only know the game command, but also that a yes/no-type head nod/shake is used to enact this command, and must furthermore train to learn precisely how best to perform the nod/shake (how far to nod, how quickly to shake, and so forth) in order to reliably communicate the command. Moreover, a hands-free game may have several stages, and the user must learn the head commands associated with each stage.
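The gesture-to-command mapping described above can be sketched as follows. This is a minimal illustrative sketch only: the gesture names, amplitude and speed thresholds, stage labels, and command strings are all assumptions for illustration, not part of any actual device API or of the disclosure itself.

```python
# Hypothetical sketch of per-stage head-gesture command bindings.
# All names and thresholds below are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class HeadGesture:
    kind: str         # e.g., "nod" (yes-type) or "shake" (no-type)
    amplitude: float  # degrees of head rotation
    speed: float      # degrees per second

# Each game stage may bind the same gesture to a different command.
STAGE_BINDINGS = {
    "menu":   {"nod": "SELECT", "shake": "BACK"},
    "battle": {"nod": "ATTACK", "shake": "DODGE"},
}

# Minimum motion required for a gesture to register as deliberate
# (this is what the user must train to perform reliably).
MIN_AMPLITUDE = 10.0  # degrees
MIN_SPEED = 30.0      # degrees per second

def gesture_to_command(stage: str, gesture: HeadGesture) -> Optional[str]:
    """Translate a detected head gesture into the command bound for the
    current stage, or None if the motion is too small or slow to count."""
    if gesture.amplitude < MIN_AMPLITUDE or gesture.speed < MIN_SPEED:
        return None
    return STAGE_BINDINGS.get(stage, {}).get(gesture.kind)
```

For example, the same nod that selects a menu item would trigger an attack in the battle stage, while a nod that is too shallow or too slow produces no command at all, illustrating why the user must learn both the binding and the motion itself.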
Thus with new hands-free technology, in which head movements serve as gaming commands, comes a burden on users that has not been addressed by the art: how to teach a user the rules of a game. As the above background makes clear, what are needed in the art are systems and methods for informing users how to use gaming applications on hands-free devices.