The next generation of warfighters will operate in a highly complex and dynamic battlespace, which increases the demands placed on human operators and pilots. One factor contributing to the complexity of the future battlespace is manned-unmanned teaming (MUM-T) operations. MUM-T operations describe scenarios in which a manned operator (e.g., in an airborne platform or on the ground) controls one or more unmanned platforms (e.g., unmanned vehicles).
Traditional avionics interfaces and interaction control methods are inadequate for managing and facilitating MUM-T operations. Pilots currently cannot effectively control both the own-ship and multiple autonomous unmanned aerial system (UAS) assets within a battlespace.
Aviation operators (e.g., ground operators and airborne pilots) are currently task saturated due to the high demands of their roles. Operators must manage multiple sensor information feeds and vehicle interfaces to perform mission responsibilities, and MUM-T will require them to assume new roles in addition to performing existing tasks. Operators currently lack intuitive interfaces that would enable them to manage these new responsibilities without a significant increase in workload.
Many existing pilot-vehicle interfaces require pilots to perform head-down data entry for extended periods of time. Such head-down data entry redirects the pilots' focus from looking out and managing the battlespace environment to working inside the cockpit. Operators currently lack methods to enhance situational awareness and manage teamed assets in MUM-T operations.
Existing head-wearable devices do not offer intuitive interaction methods for engaging with virtual content visualized on the device; they lack interaction and control methods to select, manipulate, and provide inputs to the computer-generated content displayed on the device.