As technology progresses, modern combat vehicles will be driven autonomously as much as possible, with manual intervention called for only at critical moments. During autonomous driving, the operator may view video camera returns of the external scene, projected along with multifunctional displays on large-screen monitors mounted within the vehicle. In addition to the driving scene, the monitors, commonly shared by several operators, may show different display windows depending upon the function, such as tactical maps, system status, and situational awareness, as organized by an on-board electronic display driver. This is especially true for a vehicle operated as a control station for remote unmanned air and ground vehicles, because of the multiple tasks required to manage those systems. During autonomous driving, the operator of a tactical vehicle may be performing multiple tasks, monitoring and controlling other operations from the on-board vehicle displays. Although many of these tasks are automated with an electronic associate in the form of embedded computer programs, there are times during critical events when the automation will defer to the human operator for operation of the vehicle. Because of the limited display space within the vehicle, the display formats will be economized according to the needs of the task, whether monitoring or engaging manual control.
As examples of the complexity and the need for economizing display space, modern combat vehicles utilize computerized system-level electronic associates that provide course and tactical advisories, including path planning based on terrain, the tactical situation, and system capabilities. These systems may involve displays for processing multiple control tasks in vehicle control, tactical evaluation and decision making, system status, and the communications needed for complex system performance. These manned vehicle designs may incorporate visual imaging systems with multifunctional displays in the crew stations both to operate the vehicle and to control subordinate unmanned air and ground robotic elements. Depending upon the task being performed, the imaging system will visually display the scene external to either the host vehicle or the robotic element. The scene images will be collected by sensors mounted on the exterior of the vehicle and, for robotic operations, radioed back to the host vehicle. The display system will show computerized digitized images acquired by the sensors. The crewmember will see a selected portion of the computerized display buffer that depends upon his or her task and viewing direction. No doubt future imaging systems will appear to the crewmember of the host vehicle as "see-through armor" by incorporating virtual reality components for the seemingly direct viewing of the external scene. In this case, the crewmember may be supervising the autonomous driving or flying of the vehicle, operating the vehicle when called upon by the electronic associate for obstacle avoidance, or monitoring the scene for targets in the local area.
Incorporated with the scene displays are computer-driven multifunctional displays of tactical situation maps, system status, control status, and communications. The crewmember uses the displays to supervise and interact with the electronic associate programs that plan and coordinate the control and communication functions needed to perform a mission. These functions include planning and monitoring the advance of the host vehicle and the semi-autonomous robotic elements, maintaining tactical awareness, seeking and engaging targets, monitoring the system status of the host vehicle and the robotic elements, and composing and sending status reports, including spot intelligence reports, to higher headquarters.
In regard to robotics functions, the crewmember may be searching for targets on the display of a reconnaissance, surveillance, and target acquisition (RSTA) sensor return from unmanned air or ground reconnaissance vehicles, supervising the assignment of fire missions among armed robotic elements, confirming the approach routes assigned by the electronic associates, and monitoring the battery, fuel, and ammunition status of the vehicles. Furthermore, in those cases where the crewmember has rejected the plan proposed by the electronic associate, he or she will be interacting with the program to supervise its refinement. Finally, in those incidents where the ground robotic element cannot navigate further along the designated route, possibly because of terrain obstacles, the crewmember may be temporarily called upon to tele-operate the robotic vehicle from the onboard display of the remote vehicle's camera return.
The technology for autonomous vehicle driving is well established; Google's self-driving car, for example, builds course selection, obstacle avoidance, and driving control into the vehicle. Driving course selection is automated with a roadway mapping data system combined with an external data feed on tactical constraints and a Global Positioning System (GPS) receiver for locating the vehicle relative to the terrain mapping. Concurrently, obstacle avoidance is maintained by an array of technology, including a movement-detection radar for distant viewing, an on-board laser detection and ranging system for near-distance viewing, and a video camera for panoramic viewing of the external scene about the vehicle; the accumulated driving-scene data is processed with image-processing software to identify driving hazards and integrated with a self-driving control program for hazard avoidance. However, there may be critical times when the automated processes receive insufficient data for proper functioning, and the automation will defer to the human operator for the corresponding tasks.
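The deferral logic described above can be sketched in miniature: when the fused confidence of the sensor suite drops below an adequate level, the automation hands driving back to the operator. This is a minimal illustrative sketch; the sensor names, confidence values, threshold, and fusion rule are all assumptions for illustration, not part of any fielded system.

```python
# Hypothetical sketch of the automation-to-operator handoff.
# Sensor names, confidence values, and the 0.6 threshold are
# illustrative assumptions, not a fielded design.

from dataclasses import dataclass

@dataclass
class SensorReading:
    name: str          # e.g. "radar", "lidar", "camera"
    confidence: float  # 0.0 (no usable data) .. 1.0 (fully reliable)

def control_mode(readings, threshold=0.6):
    """Return 'autonomous' while fused sensor confidence is adequate,
    otherwise 'manual' to defer driving to the human operator."""
    if not readings:
        return "manual"
    # Simple fusion rule: the weakest sensor bounds overall confidence,
    # since each sensor covers a distinct part of the driving scene.
    fused = min(r.confidence for r in readings)
    return "autonomous" if fused >= threshold else "manual"

# Example: degraded camera data (e.g. dust obscuration) forces a handoff.
nominal  = [SensorReading("radar", 0.9), SensorReading("lidar", 0.85),
            SensorReading("camera", 0.8)]
degraded = [SensorReading("radar", 0.9), SensorReading("lidar", 0.85),
            SensorReading("camera", 0.3)]
print(control_mode(nominal))   # -> autonomous
print(control_mode(degraded))  # -> manual
```

A production system would of course weight sensors by task relevance rather than take a bare minimum, but the sketch captures the handoff condition the text describes.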
Therefore, because of the limited display space within the vehicle, the display format will depend upon the features of the task. In particular, the display window for the driving scene can be reduced in size during monitoring of autonomous driving to accommodate other displays, for example by scene compression coupled with a panoramic camera field-of-view. However, these display characteristics impact the driver's natural awareness of the vehicle speed, and therefore driving performance during manual intervention, necessitating a means to control display size and camera field-of-view so that the display remains compatible with the controls used in the driving task. For example, when elements of the autonomous driving are suspended and manual intervention is called for at critical moments, the driving-scene characteristics may be adjusted to optimize performance of the called-for task. In particular, such adjustments may set the perceivable road speed at a level that generates a cognitive flow rate in the operator compatible with the control dynamics needed for the task. For example, different settings will be needed for such varied tasks as driving on an undetermined course, maintaining driving environmental awareness including detecting obstacles, evaluating obstacles, circumnavigating obstacles, navigating tight course turns, or parking the vehicle, with each successive task requiring increased speed awareness and display/control compatibility for optimal operation.
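The display/control relationship above can be made concrete with a first-order approximation: when a wide panoramic camera view is compressed into a smaller display window, image motion (and hence perceived road speed) is scaled by roughly the ratio of display to camera field-of-view. The functions and numbers below are hypothetical illustrations of that geometry, not the document's method.

```python
# Illustrative sketch (an assumption, not a specified design) of how
# display minification alters the optic-flow rate the driver perceives.
# Compressing a wide camera field-of-view into a narrow display window
# scales image motion by about display_fov / camera_fov, so the scene
# appears to move slower than the vehicle actually travels.

def perceived_flow_gain(display_fov_deg: float, camera_fov_deg: float) -> float:
    """Approximate scaling of perceived scene motion (1.0 = natural)."""
    if display_fov_deg <= 0 or camera_fov_deg <= 0:
        raise ValueError("fields of view must be positive")
    return display_fov_deg / camera_fov_deg

def camera_fov_for_task(display_fov_deg: float, required_gain: float) -> float:
    """Camera field-of-view needed so perceived flow matches a task's
    required gain (e.g. near 1.0 for tight turns or parking)."""
    return display_fov_deg / required_gain

# A 40-degree display window showing a 120-degree panoramic camera:
gain = perceived_flow_gain(40.0, 120.0)  # scene moves at ~1/3 natural rate
# To restore natural speed awareness for parking, narrow the camera view:
fov = camera_fov_for_task(40.0, 1.0)     # camera FOV matched to the window
```

Under this approximation, monitoring tasks tolerate low gain (compressed, panoramic views), while the high-speed-awareness tasks listed above call for gain near unity, i.e. a camera field-of-view matched to the display window.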