In many applications, an operator of a system for surveillance and remote tracking of objects controls a remote image sensor via a communication link. Examples include traffic control, border control, search and rescue operations, land surveys, police surveillance, military applications, etc. Operators may additionally request measurements of a remotely tracked object, such as motion parameter measurements and the like.
In general, a system for surveillance and remote tracking of objects comprises a control center at one end and a remote sensing unit at the other end, which communicate over a communication link. The sensing unit, with the help of an image sensor, can survey a scene including one or more objects and transmit sensing-data to the control center, where the images can be displayed on a display for viewing by an operator. Sensing-data includes data acquired by the sensing unit as well as data generated by the sensing unit in relation to the acquired data (e.g. image pictures, object-data characterizing identified objects, etc.). Furthermore, the sensing unit can be operable to locate and track a sighted object. The control center provides control data to the sensing unit, including, for example, different types of commands, such as a track command, a zoom-in command, a centering command, etc.
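The two data flows described above can be illustrated with a minimal sketch. The message types and field names below are assumptions chosen for illustration; the text does not specify a concrete wire format.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical message types for the link between the sensing unit and the
# control center; all field names here are illustrative assumptions.

@dataclass
class SensingData:
    """Sent from the sensing unit to the control center."""
    frame_id: int
    image: bytes                                       # acquired image picture
    objects: List[dict] = field(default_factory=list)  # object-data for identified objects

@dataclass
class ControlData:
    """Sent from the control center to the sensing unit."""
    command: str     # e.g. "track", "zoom_in", "center"
    payload: dict = field(default_factory=dict)

# Example: a track command referring to an object reported in sensing-data.
msg = ControlData(command="track", payload={"object_id": 7})
```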
According to one possible scenario, when an operator of the control center decides that it is desirable to track an object in the scene, he initiates a sequence of operations directed to that purpose. The operator can first send instructions (including, for example, pointing instructions, which are a type of control data) to the sensing unit, identifying the object that should be tracked. The pointing instructions are coarse pointing instructions which are generated manually by the operator and include, for example, “move up”, “move right”, “zoom” or similar commands. In response, the sensing unit acts upon these instructions and directs the image sensor towards the required area.
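A minimal sketch of how such coarse commands might be applied to sensor state follows; the step size, command names beyond those quoted above, and the pan/tilt/zoom representation are all assumptions for illustration.

```python
# Illustrative mapping of coarse operator commands to pan/tilt/zoom updates.
# STEP_DEG is an assumed increment; real systems would use rate commands.
STEP_DEG = 1.0

def apply_command(state, command):
    """Apply one coarse pointing command to sensor state (pan, tilt, zoom)."""
    pan, tilt, zoom = state
    if command == "move up":
        tilt += STEP_DEG
    elif command == "move right":
        pan += STEP_DEG
    elif command == "zoom":
        zoom *= 2.0
    return (pan, tilt, zoom)

state = apply_command((0.0, 0.0, 1.0), "move right")
# state is now (1.0, 0.0, 1.0)
```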
The operator can send additional control data including for example a lock and track command (including locking instructions) directing a sensing unit to lock on a selected object in the scene. In response, the sensing unit receives the instructions and attempts to lock onto the object indicated in the command.
Once the object has been locked, the sensing unit takes over command and commences to operate in response to tracking instructions, which are generated within the sensing unit and are directed to tracking the locked object. The tracking instructions are forwarded to the image sensor, which in turn tracks the moving object and keeps it in the center of the field of view (FOV) of the display, even while the object moves relative to the sensing unit.
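The internally generated tracking instructions can be sketched as a simple centering correction. The proportional-control form and the gain below are assumptions; the text only states that the object is kept at the FOV center.

```python
# Minimal sketch of a centering correction: steer the sensor so the locked
# object's pixel position moves toward the center of the field of view.
GAIN = 0.5  # assumed proportional gain

def tracking_instruction(object_px, fov_center_px):
    """Return a (pan, tilt) correction proportional to the centering error."""
    dx = object_px[0] - fov_center_px[0]
    dy = object_px[1] - fov_center_px[1]
    return (-GAIN * dx, -GAIN * dy)

# Object sits right of and above center; correction steers back toward center.
pan, tilt = tracking_instruction((340, 200), (320, 240))
# (pan, tilt) == (-10.0, 20.0)
```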
In many applications, there is a time-delay between the time when the sensing unit acquires an image of an object, the time when the image is displayed on the display located at the control center, and, further, the time when the corresponding instructions are received at the sensing unit. Factors that can contribute to the delay include, for example, signal processing, image compression/decompression, transmission time over the communication link, and/or link bandwidth limitations. Consequently, when the delayed reaction time of the operator is also taken into account, the accumulated delay can range from fractions of a second to several seconds.
Due to this time-delay, the location of the object as displayed on the display at the control center is generally not the current location of the object. Rather, the displayed location is the location of the object before the transfer of the sensing-data from the sensing unit to the control center (e.g. x seconds ago). Additionally, by the time the sensing unit receives the control data from the control center and generates the instruction for the image sensor, an additional time-delay occurs (e.g. an additional y seconds). Consequently, by the time the image sensor is instructed to locate the object, the object may no longer be in the location it occupied when the image picture was taken, x+y seconds earlier.
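A short worked example makes the consequence of the x+y delay concrete. All numbers below are illustrative assumptions, not values from the text.

```python
# Worked example of the accumulated delay budget; every value is illustrative.
x = 0.8                  # seconds: image acquisition -> display at control center
y = 0.4                  # seconds: control data -> instruction at sensing unit
operator_reaction = 0.5  # seconds: operator's own reaction time

total_delay = x + y + operator_reaction   # ~1.7 s round trip

# An object moving at 15 m/s (e.g. a vehicle) covers a substantial distance
# during that delay, which is why the displayed location is stale.
speed = 15.0                              # m/s
displacement = speed * total_delay        # ~25.5 m since the displayed image
```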
Clearly, this time-delay complicates the efforts to lock onto the object. The operator has to accurately calculate and estimate the expected location of the object at a time in the future when the instructions arrive at the sensing unit. Only then is the sensing unit directed to the calculated estimated location, and a lock and tracking operation can be initiated.
If the calculation of the estimated location is not sufficiently accurate, the sensing unit will lock onto some other background object, and the entire estimate, calculate and lock process has to be repeated. The effect is that of a continuous feedback control loop with delay, a situation which is liable to suffer from overshoots and instability.
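The estimate the operator must make can be sketched as a forward extrapolation of the displayed position over the round-trip delay. The constant-velocity assumption below is for illustration only; the text does not prescribe a prediction model.

```python
# Sketch of the operator's estimation problem: extrapolate the object's
# displayed position forward by the delay, assuming constant velocity
# (an illustrative assumption, not a prescribed model).

def predict_location(pos, vel, delay_s):
    """Linear extrapolation of position (x, y) over delay_s seconds."""
    return (pos[0] + vel[0] * delay_s, pos[1] + vel[1] * delay_s)

# Object displayed at (100, 50), moving at (12, -3) units/s, 1.5 s delay:
predicted = predict_location((100.0, 50.0), (12.0, -3.0), 1.5)
# predicted == (118.0, 45.5)
```

If the object maneuvers during the delay, this estimate is wrong and the lock attempt lands on background, which is exactly the repeated estimate-and-lock cycle described above.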
The locking process is complicated even further when a human is part of the tracking loop. Humans do not function well in feedback loops with time-delay, and their reactions and directions are less precise than, for example, computer- or processor-generated instructions.
Publications considered to be relevant as background to the presently disclosed subject matter are listed below. Acknowledgement of the references herein is not to be inferred as meaning that these are in any way relevant to the patentability of the presently disclosed subject matter.
U.S. Pat. No. 7,184,574 discloses a tracking apparatus including a sensor tracker and a control tracker. The sensor tracker is connected to a sensor which senses a scene having at least one object therein; the sensor tracker provides sensor movement instructions to the sensor, enabling it to track a selected object. The control tracker is located remotely from, and communicates with, the sensor tracker. Additionally, the control tracker takes measurements regarding the selected object and provides tracking instructions to the sensor tracker. The sensor tracker then utilizes the tracking instructions to adjust the sensor movement instructions, when necessary.
US Patent Publication No. 2008267451 discloses a method for tracking an object that is embedded within images of a scene, including: in a sensor unit that includes a movable sensor, generating, storing and transmitting over a communication link a succession of images of a scene; and, in a remote control unit, receiving the succession of images. Also disclosed are receiving a user command for selecting an object of interest in a given image of the received succession of images, determining object-data associated with the object, and transmitting the object-data through the link to the sensor unit. In the sensor unit, the given image of the stored succession of images is identified and, using the object-data, the object of interest within it is identified; the object is then tracked in another image of the stored succession of images, the other image being later than the given image. In case the object cannot be located in the latest image of the stored succession of images, information from images in which the object was located is used to predict an estimated real-time location of the object, and a direction command is generated to the movable sensor for generating a real-time image of the scene and locking on the object.
EP Patent No. 0423984 discloses a synergistic tracker system which includes both a correlation tracker and an object tracker for processing sensing-data input and for generating tracking error signals. The operation of the synergistic tracker system is controlled by a central processing unit. The system operates by first correlating a reference region image with a portion of a current digitized image provided by an analog-to-digital converter. Secondly, the object tracker provides a precisely defined track point for an object within the current image. The correlation tracker stabilizes and limits the portion of the digitized image that the object tracker must operate upon. Stabilizing and limiting this portion of the digitized image reduces the object tracker's sensitivity to background clutter and to a loss of lock induced by sensor motion. The object tracker provides a non-recursive update for the correlation tracker's reference region image. The correlation tracker and the object tracker are used simultaneously and cooperatively so that the strengths of one tracker are used to overcome the weaknesses of the other. This invention provides a greater tracking tenacity, a reduction in tracker angle noise, and a reduction in hardware complexity.
U.S. Pat. No. 7,620,483 relates to a method for guiding, from a remote control center, a vehicle towards a target object, said remote control center communicating with the vehicle by means of a lagged communication channel, comprising: at the vehicle: (a) periodically capturing frame images by a camera, assigning to each of said captured frames an associated unique time stamp, and saving, within a storage at the vehicle, full frame data or partial frame data of captured frames and their associated time stamps; (b) for a plurality of saved frames, sending to the control center via the lagged communication channel full frame data, partial frame data or a combination thereof, with the corresponding associated time stamp for each sent frame, so that an approximate or exact version of the sent frames can be reconstructed and displayed at the control center; at the control center: (c) receiving said frame data and associated time stamps, sequentially reconstructing frame images from each said sent full and/or partial frame data, and displaying the reconstructed images on a display; (d) upon marking by an operator at the control center of a point on a specific displayed frame, sending to the vehicle a coordinates indication relating to said marked point as appearing on said specific frame or on a reference frame available at the control center, together with the time stamp associated with said specific or reference frame, as the case may be; at the vehicle: (e) receiving said coordinates indication as marked and the sent frame time stamp; (f) given the coordinates indication and frame time stamp as received, fast forward tracing said point or object coordinates from said frame towards the most recently available captured frame, thereby finding the coordinates of the same point or object as appearing in the most recently available captured frame; and (g) providing the coordinates of the target point or object within the most recently available captured frame, as found, to an inner guidance sub-system of the vehicle, for enabling it to track said object.
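The fast forward tracing of step (f) can be sketched as follows. The per-frame tracking function `track_step` below is a hypothetical placeholder (e.g. template matching between consecutive frames); the patent text does not specify how the point is followed from one frame to the next.

```python
# Hedged sketch of "fast forward tracing": starting from the operator-marked
# coordinates in the time-stamped frame, the point is traced frame by frame
# through the stored frames up to the most recently available captured frame.

def fast_forward_trace(frames, marked_ts, marked_xy, track_step):
    """Trace marked_xy from the frame stamped marked_ts to the latest frame.

    frames: list of (time_stamp, frame_data) tuples, oldest first.
    track_step(prev_frame, next_frame, xy) -> xy in next_frame (hypothetical
    per-frame tracker, e.g. template matching).
    """
    # Locate the stored frame matching the received time stamp.
    idx = next(i for i, (ts, _) in enumerate(frames) if ts == marked_ts)
    xy = marked_xy
    # Step forward through every later stored frame.
    for i in range(idx + 1, len(frames)):
        xy = track_step(frames[i - 1][1], frames[i][1], xy)
    return xy  # coordinates in the most recently available captured frame
```

The result of this trace is what step (g) hands to the vehicle's inner guidance sub-system.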