1. Field of the Invention
The present invention is directed generally to distributed interactive simulations, such as those employed in military simulators and multi-player computer games that present a coherent virtual world across a plurality of operator/player stations.
2. Description of the Related Art
Distributed interactive simulation is a popular form for present-day video games, but has origins in military simulations such as SIMNET in the 1980's and 1990's. In distributed interactive simulations, a virtual world is created and shared among a plurality of computer stations, each supporting at least one user with controls and at least one display.
In some simulations, stations share a distributed clock providing a common timebase, for example via the well-known Network Time Protocol (“NTP”). However, this is not required. The distributed simulation progresses, generally (but not necessarily) periodically (e.g., 30 updates per second), and generally (though not necessarily) at the same rate at each station (e.g., one station might run at 30 updates per second while another runs at 60).
The stations also share a model of the environment, including airspace and terrain. The terrain may be static, including landforms, buildings (which may include interiors), and bodies of water. Alternatively, the terrain may be non-static: e.g., some or all buildings may be damaged, landforms may be “scarred” (e.g., with tire tracks, craters, or burn marks), and the like. Within this environment, simulated dynamic objects are placed, including, for example, vehicles, people, and animals. Animation of these dynamic objects gives the appearance of life to the simulation.
In such simulations, each station bears primary responsibility for managing one or more simulated objects. For each object managed by a station, a detailed model is computed for each local time increment to determine its behavior. For example, the detailed model for an all-terrain vehicle (“ATV”) might accept steering and pedal input from an operator (generally one local to the managing station). The detailed model of the ATV might carry out computations to simulate an automatic transmission, the interaction of the ATV's suspension system with the terrain, traction between the tires and the terrain surface, and perhaps fuel consumption, engine overheating, or other details and modeled failures. While crucial for adequate realism, modeling at this level of detail generally need be computed, for efficiency, only by the managing station, with the results from the detailed model being published to the other stations in the distributed interactive simulation.
Note that the operator of the ATV might be a human player, or the operator can be an artificial intelligence program (“AI”) simulating an ally or enemy where another human player is not available or required. When used, an AI is effectively just a further component of the detailed model; the managing station maintains the status and performs the incremental simulation required by the ATV-driving AI, but the remote stations only receive the results.
In some cases, objects being simulated may be complex and articulated (as with human, animal, or alien figures), requiring complex animation by a kinematic model, but other objects may be relatively simple (e.g., a crate or projectile), in which case a simpler ballistics model may be used.
However, there are issues with sharing the results of the detailed modeling, also called updates, from the managing station. For example, it takes time to distribute the updates to the remote stations that do not control the object; and because of this latency, an update to a model is always somewhat “old” information. In a simulation where a distributed simulation clock is correctly set at all stations, any update will be timestamped at some simulation time in the past, though generally recent. However, a rigid time keeping system can introduce resonances into the models that result in visual artifacts. Additionally, maintaining accurate clocks is sometimes a source of unnecessary complexity, and occasionally, error. Further, over an unreliable network, updates may be lost or delayed and arrive with irregular latency. Additionally, bandwidth constraints and the number of objects in a simulation may limit the number of updates that can be sent for each object, such that updates are not provided to remote stations as often as the managing station computes the detailed simulation.
At each station, the display presents the virtual world to the player. The display is generally refreshed more often than updates arrive for remotely managed objects, yet to only change the display of remotely managed objects as often as updates arrive would make the remotely managed objects appear jerky and unrealistic. To alleviate this, object updates are associated with the time to which they correspond, and the recipient of the updates can extrapolate how to display the object at times after the update.
To achieve this, the update must represent the state of the object at a particular time. The state may include, for example, the position and orientation of the object (generally, though not necessarily, including six coordinate axes: X, Y, Z, roll, pitch, and yaw), and other properties whose values may vary (e.g., whether the headlights are on, whether a vehicle's engine is smoking, the time at which a live grenade will detonate). The particular time corresponds to when the state was current.
Exactly what an object's state includes depends on the nature of the object. For complex articulated objects, e.g., an animal, the state may include articulations of the object's skeleton, or an index into an animation cycle summarizing such articulations.
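The kind of state record described above can be sketched as a simple data structure. The field names below (e.g., `headlights_on`, `animation_index`, `control_mode`) are illustrative assumptions for this sketch, not drawn from any particular protocol or the claimed invention:

```python
from dataclasses import dataclass

@dataclass
class ObjectState:
    """Snapshot of a simulated object's state at a particular time.

    All field names are hypothetical; an actual system would define
    state per object type (vehicle, articulated figure, projectile).
    """
    timestamp: float                          # simulation time the state was current
    position: tuple                           # (x, y, z)
    orientation: tuple                        # (roll, pitch, yaw)
    velocity: tuple = (0.0, 0.0, 0.0)         # optional extrapolation hint
    acceleration: tuple = (0.0, 0.0, 0.0)     # optional extrapolation hint
    headlights_on: bool = False               # non-extrapolated property
    animation_index: float = 0.0              # index into an animation cycle
    control_mode: str = "animated"            # e.g., "animated" vs. "ragdoll"
```

A station managing an object would fill in such a record each time it publishes an update; remote stations would store the most recent record per object as the basis for extrapolation.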
In some cases, an object may have multiple modes of operation, e.g., when an “animal” object is alive, it may be driven by an animation cycle, but when the animal dies, as in a hunting simulation, the mode of operation of the articulated body becomes a relaxed kinematic model. For example, the body goes limp, though the constraints imposed by the skeleton remain in effect. For those objects having multiple modes of operation, the state may further include an identification of which mode is being used.
Herein, “state” may include many properties other than physical position and orientation (rotation). Some of these, such as an index into an animation cycle or walk cycle, may be usefully extrapolated for prediction. However, some properties, such as whether a vehicle's headlights are on or its horn is honking, are only trivially extrapolated. For example, once the lights are on, they remain on until an update says they turn off.
A state may also include information useful for extrapolating subsequent states with improved accuracy, for example velocities on linear or rotational axes, and accelerations on linear or rotational axes. While an extrapolation can be made without such hints, for example by deriving velocity as the difference in position between the last two updates divided by the difference in their times, providing explicit velocities or accelerations can improve the results.
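Both approaches described above can be sketched briefly: deriving a velocity from the last two updates when no hints are supplied, and a first/second-order extrapolation (commonly called dead reckoning) when velocity and acceleration hints are available. The function names are illustrative only:

```python
def derive_velocity(p_prev, t_prev, p_curr, t_curr):
    """Estimate per-axis velocity from the last two position updates,
    as the difference in position divided by the difference in time."""
    dt = t_curr - t_prev
    return tuple((c - p) / dt for p, c in zip(p_prev, p_curr))

def extrapolate(position, velocity, acceleration, dt):
    """Dead-reckon a position dt seconds past the last update:
    p + v*dt + 0.5*a*dt^2, applied independently on each axis."""
    return tuple(p + v * dt + 0.5 * a * dt * dt
                 for p, v, a in zip(position, velocity, acceleration))
```

The same second-order form can be applied to rotational axes, though a practical system would wrap angles and might use quaternions for orientation.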
Herein, “status” may include state information, and/or one or more calculation results obtained with respect to state.
For each iteration of the simulation for an object managed at a station, a new state results from operator inputs and detailed model execution, including velocity and/or acceleration values, if provided for any of the various degrees of freedom in the object.
However, the state of an object is sent from the managing station to each of the other stations less often than once per iteration. For example, if a managing station were to update a model of an object thirty times per second, updates might only be sent to other stations five times per second, or even less frequently (e.g., twice a second) if the object is unimportant, far away, exceptionally consistent, or only slowly changing, or if there are many objects to be updated and/or communications bandwidth is highly constrained.
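The decimation described above, a 30-updates-per-second simulation publishing at, say, five updates per second, can be sketched as a per-object throttle. The class and its parameters are hypothetical; a real system might instead weigh distance, importance, or available bandwidth:

```python
class UpdateThrottle:
    """Decide, per simulation tick, whether to broadcast an object's state.

    sim_rate and send_rate are illustrative parameters matching the
    example in the text: a 30 Hz simulation publishing at 5 Hz sends
    on every sixth tick.
    """
    def __init__(self, sim_rate=30, send_rate=5):
        self.every = max(1, sim_rate // send_rate)
        self.tick = 0

    def should_send(self):
        send = (self.tick % self.every == 0)
        self.tick += 1
        return send
```

Lowering `send_rate` for distant or slowly changing objects trades display fidelity at remote stations for bandwidth.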
Based on updates received from the managing station, extrapolation techniques provide the best available information concerning the status of a remotely managed object, and the best available information for predicting its movements (at least, for the immediate future). Still, jarring discontinuities in the apparent motion can occur when extrapolated states have substantially overshot or undershot the state described in a subsequent update. Often this occurs because an operator has made a sudden turn, jammed on the brakes, or dodged or swerved to avoid a collision, in ways that extrapolation from an earlier update does not anticipate.
In simulation parlance, providing the best estimate of the object's current state is the job of a “predictor,” and it is the job of a “corrector” to hide the apparent discontinuity of the extrapolated states in a way that is as aesthetically pleasing as possible.
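One common corrector strategy, rather than snapping the displayed object to the newly predicted position, is to blend the displayed position toward the prediction over several frames. The sketch below assumes a simple exponential blend with a hypothetical smoothing constant `alpha`; it illustrates the corrector's role generally and is not the method claimed herein:

```python
def blend_correction(displayed, predicted, alpha):
    """Move the displayed position a fraction alpha toward the new
    prediction each frame, hiding the discontinuity over several
    frames rather than snapping instantly.

    alpha is a hypothetical smoothing constant in (0, 1]; larger
    values converge faster but look more abrupt.
    """
    return tuple(d + alpha * (p - d) for d, p in zip(displayed, predicted))
```

Calling this once per display refresh with the predictor's latest output gradually reconciles the on-screen object with the corrected trajectory.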
Unfortunately, the predictor-corrector systems applied to date fall short of providing aesthetically appealing, seemingly realistic behavior for a wide assortment of object types, or for the same object type under a wide variety of conditions. The result is unpredictably jerky motion or behavior of a remotely managed object. This can make targeting difficult and frustrating in simulations where aiming at and shooting objects are key objectives. Similarly, it can make following remotely managed objects difficult and frustrating in a driving or flying simulation. Accordingly, there is a need for a better way of presenting the movements of a remotely managed object in a distributed interactive simulation. The present application provides this and other advantages as will be apparent from the following detailed description and accompanying figures.