User input devices have proliferated into a variety of different kinds that each employ a different mode of communication, or modality. Based, at least in part, on the state of the technology used for each modality, the reliability of the different kinds of user input devices is uneven. As used herein, the reliability of a user input device is a proxy for a confidence that a command received from the user input device reflects the user's intent. As a first example, a physical switch or lever is a user input device that a user must manipulate (and sometimes even break a glass cover to access) in order to provide input. The physical switch or lever employs a first tactile modality and is generally considered a highly reliable user input device. Another tactile modality is employed by a touch-sensitive screen. In another example, a speech recognition device is a user input device that a user speaks into to provide input. The speech recognition device employs a voice modality and is generally considered to have lower reliability than input devices utilizing tactile modalities. As is readily appreciated, a variety of other user input devices, with corresponding modalities, are available, each having a respective reliability.
When an input device is configured to provide a command to a system and the command is an “action command,” meaning that it triggers an action by the system, the reliability of the input device may be relevant. Further, commands may be differentiated along a criticality scale, ranging from those that trigger relatively trivial actions (non-critical commands) to those that trigger actions postulated to affect safety (highly critical commands). As may be apparent, a variety of activities may be considered an “action” responsive to an action command, and commands of higher criticality require more reliable input devices.
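The pairing described above, in which a command of a given criticality is only accepted from a sufficiently reliable input device, can be sketched as follows. This is a minimal illustrative sketch, not the disclosed implementation; the device names, the numeric reliability rankings, and the criticality tiers are all assumptions introduced for illustration.

```python
# Illustrative sketch: gate an action command by comparing the reliability
# rank of its originating input device against the minimum rank required
# for the command's criticality. All rankings below are assumed examples.

# Higher number = more reliable modality (ordering follows the examples
# above: tactile devices rank above voice-modality devices).
DEVICE_RELIABILITY = {
    "physical_switch": 3,      # tactile, highly reliable
    "touch_screen": 2,         # tactile
    "speech_recognition": 1,   # voice modality, lower reliability
}

# Minimum device reliability required at each criticality tier.
REQUIRED_RELIABILITY = {
    "non_critical": 1,
    "critical": 2,
    "highly_critical": 3,
}

def accept_command(device: str, criticality: str) -> bool:
    """Return True only if the device is reliable enough for the command."""
    return DEVICE_RELIABILITY[device] >= REQUIRED_RELIABILITY[criticality]
```

Under this sketch, a highly critical command would be honored from the physical switch but rejected when it arrives via speech recognition.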
A technological problem is presented when a complex system is configured to concurrently receive user input from multiple different input devices of varying reliabilities. In this scenario, the complex system may comprise a plurality of subsystems, each responsive to multiple commands of varying criticality. To prevent these complex systems from triggering an action responsive to an unintentional command, conventional solutions often include a separate reliability component and verification strategy for each input device, which generally requires many interfaces, one dedicated to each user input device. Solutions of this type can be real-estate intensive and unfavorably increase testing complexity and quality assurance procedures, any of which can be onerous for platforms that are sensitive to cost and weight.
Accordingly, systems and methods that address these technological problems are desirable. The desirable systems and methods easily expand to support additional input devices, and easily adapt to a wide variety of command destinations, such as subsystems and components. The desirable systems and methods employ command-specific verification strategies before transmitting a command to its destination. The following disclosure provides an unconventional solution to these technological problems, in addition to introducing other novel features.
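One way to picture a single verification component serving multiple input devices and command destinations, rather than a dedicated verifier per device, is sketched below. This is an assumed illustration only; the strategy names, command fields, and routing structure are not taken from the disclosure.

```python
# Illustrative sketch: one central component applies a command-specific
# verification strategy before transmitting the command to its destination
# subsystem. All names and structures here are assumptions for illustration.

from typing import Callable, Dict

def no_verification(cmd: dict) -> bool:
    # Non-critical commands pass through without additional checks.
    return True

def require_confirmation(cmd: dict) -> bool:
    # Highly critical commands require an explicit user confirmation flag.
    return cmd.get("confirmed", False)

# Verification strategy selected per command, not per input device.
STRATEGIES: Dict[str, Callable[[dict], bool]] = {
    "adjust_volume": no_verification,
    "shutdown_engine": require_confirmation,
}

def route_command(cmd: dict,
                  destinations: Dict[str, Callable[[dict], None]]) -> bool:
    """Verify a command, then transmit it to its destination subsystem.

    Returns True if the command was verified and transmitted,
    False if verification failed and the command was withheld.
    """
    # Unknown commands default to the strictest strategy.
    verify = STRATEGIES.get(cmd["name"], require_confirmation)
    if not verify(cmd):
        return False
    destinations[cmd["dest"]](cmd)
    return True
```

Because new devices only feed commands into the same component, and new destinations only add entries to the routing table, a design along these lines expands without one interface per input device.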