Debriefing is the practice of reviewing and discussing an activity after its completion in order to obtain data about it. Following the completion of a mission (or, generally, of any activity), a debriefing takes place to learn the details of the activity (i.e., what actually happened) and to draw lessons from it. For example, the participants can be questioned about the activity and about their actions and decisions. The debriefing is directed, among other things, at determining what worked well and what could have been done better.
Reference is now made to U.S. Pat. No. 6,053,737, issued to Babbitt et al., and entitled “Intelligent Flight Tutoring System”. This publication is directed to a method for tutoring a trainee in a simulator. The method involves constructing a decision support system, monitoring the trainee's flight, comparing the trainee's flight to the decision support system, and determining how closely the trainee's flight resembles an expert's flight.
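The publication does not specify the comparison metric. A minimal sketch of one possible resemblance measure is shown below; the function name, the use of root-mean-square deviation, and the assumption of time-aligned samples are illustrative assumptions, not the patented method.

```python
import math

def flight_similarity(trainee_samples, expert_samples):
    """Hypothetical resemblance score between a trainee flight and an
    expert flight, taken as time-aligned numeric samples (e.g., altitude).

    Returns the root-mean-square deviation: 0.0 means the trainee flight
    exactly matches the expert's; larger values mean less resemblance.
    """
    if len(trainee_samples) != len(expert_samples):
        raise ValueError("sample sequences must be time-aligned (equal length)")
    # Mean squared deviation over corresponding sample pairs.
    mse = sum((t - e) ** 2 for t, e in zip(trainee_samples, expert_samples)) / len(trainee_samples)
    return math.sqrt(mse)
```

A tutoring system could then compare this score against a threshold to decide whether the trainee's flight is close enough to the expert's.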
Reference is now made to U.S. Pat. No. 7,599,765, issued to Padan, and entitled “Dynamic Guidance for Close-In Maneuvering Air Combat”. This publication is directed to a method for optimizing the conduct of close-in air combat. The method involves providing, in real time, a computer-based close-in air combat situation assessment and information analysis. The current situation is assessed according to data obtained from onboard sensors and remote sensors. The system generates a recommendation based on the assessment of the current situation and according to predefined optimal maneuvering formulas (i.e., specific algorithms corresponding to the physical/mathematical formulas operative for the optimal relative offensive/defensive maneuvering during close-in combat).
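The structure described, assessing a situation and selecting a recommendation from predefined maneuvering rules, can be sketched as a simple rule lookup. Every name, parameter, and rule below is a hypothetical illustration; the actual maneuvering formulas in the patent are not reproduced here.

```python
def recommend_maneuver(situation, rules, default="maintain course"):
    """Hypothetical recommendation step: scan predefined rules in order
    and return the maneuver of the first rule whose condition matches
    the assessed situation (a dict of sensor-derived values)."""
    for condition, maneuver in rules:
        if condition(situation):
            return maneuver
    return default

# Illustrative predefined rules (condition, recommended maneuver).
EXAMPLE_RULES = [
    (lambda s: s["range_m"] < 500 and s["closing"], "break turn"),
    (lambda s: s["energy"] < 0.3, "extend and regain energy"),
]
```

Here the "situation assessment" is represented as a plain dictionary of values derived from onboard and remote sensors; a real system would compute these from sensor fusion.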
Reference is now made to U.S. Pat. No. 8,538,739, issued to Woodbury, and entitled “Adjusting Model Output Events in a Simulation”. This publication is directed to a simulator system. The system receives input data from a user. The system determines reference data based on an original simulation state; that is, an expected next move of the user is determined from the original simulation state. The reference data is compared to the input data, and an adjustment amount is determined based on the difference between the input data and the reference data.
Thereafter, an event value is generated via a probability function, and the event value is adjusted by the adjustment amount into an adjusted event value. A next simulation state is then determined based on the adjusted event value, and the next simulation state is presented to the user. In this way, direct feedback to the user is provided via each next simulation state, which positively reinforces correct behavior and negatively reinforces incorrect behavior.
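The adjustment scheme described above can be sketched as follows. This is a minimal illustration under stated assumptions: the choice of a uniform probability function, the use of the absolute input/reference difference as the adjustment amount, and the threshold-based next state are all assumptions, not details taken from the patent.

```python
import random

def adjust_event_value(user_input, reference, event_value):
    """Adjust a raw event value by the adjustment amount, which is
    taken here (as an assumption) to be the absolute difference
    between the user's input data and the reference data."""
    adjustment = abs(user_input - reference)
    return event_value - adjustment

def next_simulation_state(user_input, reference, threshold=0.5):
    """Determine the next simulation state from an adjusted event value.

    An event value is drawn from a probability function (here, uniform
    on [0, 1)), adjusted by the input/reference difference, and compared
    to a threshold: accurate input leaves the event value intact
    (reinforcing correct behavior), while a large deviation lowers it.
    """
    event_value = random.random()
    adjusted = adjust_event_value(user_input, reference, event_value)
    return "hit" if adjusted >= threshold else "miss"
```

With this sketch, a user whose input matches the reference keeps the full event value, so favorable next states become more likely, which mirrors the reinforcement behavior the publication describes.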