1. Field of the Invention
The present invention relates to the field of predictive user modeling, and more particularly, to the use of predictive user modeling in connection with the design of a user interface.
2. Description of the Related Art
User interfaces appear in many aspects of life today. Simple examples include elevator systems that “speak” instructions, game controllers for video games, and even the everyday computer keyboard.
A more complex example is the OnStar telematics system available in almost all GM vehicles today. The field of telematics is often considered a cross between communications and computer systems. In the OnStar system, a computer, a wireless connection to either an operator or a data service such as the Internet, and a global positioning system (GPS) are used in a coordinated fashion. Together, they allow a driver to use an interface in the vehicle (e.g., press a button) and be connected to an operator who knows the exact position of the vehicle based on the GPS system. The driver may talk with the system operator and activate controls in the vehicle, which activation can be sensed by the system operator, and/or the system operator can perform functions remotely (e.g., unlock doors) with or without the driver's request.
From the perspective of the driver, the interface comprises a push button in the car and a communication system (microphone and speaker), and could also include visual indicators (e.g., lights) indicating various functions or operations and/or a display screen for displaying text, video, etc.
Considerable knowledge has accrued about user interface design and preferred user interfaces. Typically, user interfaces are static, that is, they are designed ahead of time and, once implemented, cannot be changed. Thus, designers must anticipate, in advance, the needs of the interface user and then provide interface elements to accommodate these needs. If, during the use of the interface, a new interface element that would be helpful is identified (e.g., if a user using a push button to connect with an OnStar operator determines that a voice-activated, hands-free mechanism would be more desirable), significant redesign must take place (software, hardware, or a combination of both) to implement the reconfigured or new interface. In other words, modification to this type of user interface cannot occur on the fly.
There have been some attempts to enable “pseudo-dynamic” modification of user interfaces to match different users' needs. U.S. Pat. No. 5,726,688 to Siefert et al. discloses a predictive, adaptive interface for a computer, wherein a user's interaction with the computer is monitored, and future interactions are predicted based on previous interactions. The invention adapts the interface to the user's preferences, using the predictions. For example, if a particular user repeatedly selects one option from a given menu, the invention detects this repeated selection, “predicts” that the user will not select other options, and adapts to the user's selection, based on this prediction, by eliminating other options from the user's menu. These attempts are called “pseudo-dynamic” because they are still based on predetermined, anticipated changes (e.g., menu-driven modifications, categorical modifications, etc.), but appear to the user to be dynamic. Although to the computer user it appears that the interface has been dynamically modified to personalize it for that user, in reality the system was merely programmed to recognize the use of the computer by a particular user (e.g., via a logon process) and present an interface known in advance to be desired by that user, while excluding those elements that appear unlikely to be used by that user.
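The pseudo-dynamic adaptation described above can be illustrated with a minimal sketch. This is not the implementation disclosed in the Siefert et al. patent; it is a hypothetical example, with invented class and option names, showing the general pattern of counting a user's selections and eliminating menu options predicted to go unused.

```python
from collections import Counter

class AdaptiveMenu:
    """Illustrative pseudo-dynamic menu: tracks a user's selections and,
    once enough history accrues, hides options the user is "predicted"
    never to choose. All names here are hypothetical."""

    def __init__(self, options, threshold=5):
        self.options = list(options)
        self.threshold = threshold  # selections required before adapting
        self.counts = Counter()

    def select(self, option):
        if option not in self.options:
            raise ValueError(f"unknown option: {option}")
        self.counts[option] += 1

    def visible_options(self):
        # Before enough history accrues, present every option.
        if sum(self.counts.values()) < self.threshold:
            return list(self.options)
        # "Predict" the user will keep choosing what was chosen before:
        # options never selected are eliminated from the menu.
        return [o for o in self.options if self.counts[o] > 0]

menu = AdaptiveMenu(["call", "navigate", "unlock", "diagnostics"])
for _ in range(5):
    menu.select("navigate")
print(menu.visible_options())  # → ['navigate']
```

Note that the adaptation is confined to a predetermined category of change (hiding or showing existing options); no new interface element can arise that the designer did not anticipate, which is precisely the limitation discussed above.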
Other methods exist, including responsive information architecture (RIA) and evolutionary systems utilizing genetic algorithms (see, for example, “Creative Evolutionary Systems,” by Peter Bentley and David Corne, 2002 Academic Press, San Diego, Calif.). All of the solutions known to applicants, however, rely on previously created modifications to the user interface, i.e., they cannot create a new interface on the fly. Further, none of the prior art systems incorporate user emotional and mental states in the determination as to a particular interface to present to the user.
Accordingly, it would be desirable to have a mechanism to automatically generate an interface, on the fly, from underlying abstract user models, interface prototypes, and current, just-measured data. Optimally, such a system would monitor the user's state (using biometric techniques, for example) and would also determine how that particular user is likely to react to different interface modifications given different situations and environmental conditions, and then create or add to an interface based on the monitored parameters and determined reactions.
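The desired mechanism described above can be sketched in simplified form. The following is a hypothetical illustration only, not a disclosed embodiment: the state fields (`stress`, `hands_busy`), the prototype names, and the suitability rules are all invented for the example. It shows the general idea of assembling an interface at run time from abstract prototypes, driven by just-measured user-state data rather than by a fixed, pre-built design.

```python
from dataclasses import dataclass

@dataclass
class UserState:
    """Hypothetical just-measured user state (e.g., from biometric sensors)."""
    stress: float      # 0.0 (calm) .. 1.0 (high stress)
    hands_busy: bool   # e.g., inferred from steering-wheel sensors

# Abstract interface prototypes: each declares when it suits the user's state.
PROTOTYPES = {
    "push_button":  lambda s: not s.hands_busy,
    "voice_prompt": lambda s: s.hands_busy or s.stress > 0.7,
    "text_display": lambda s: s.stress <= 0.7,
}

def generate_interface(state: UserState) -> list:
    """Assemble an interface on the fly from the prototypes that fit
    the current, just-measured user state."""
    return [name for name, fits in PROTOTYPES.items() if fits(state)]

# A stressed driver with hands occupied gets a voice interface;
# hands-on and visually demanding elements are suppressed.
print(generate_interface(UserState(stress=0.9, hands_busy=True)))
# → ['voice_prompt']
```

In a full system of the kind contemplated, the suitability rules would themselves come from an underlying user model of how this particular user reacts to interface changes under different conditions, rather than from the fixed rules used here for illustration.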