Prior techniques used to determine the range between two spacecraft for automatic rendezvous and docking include vehicle radar, man-in-the-loop estimates, global positioning systems, lasers, LORAN, and video guidance sensor systems that process optical images to determine range. The video guidance sensor system approach, which is of particular importance here, is based on the concept of using captured and processed images to determine the relative positions and attitudes of a video guidance sensor and target.
One prior art video guidance sensor system uses a pair of lasers of predetermined wavelengths to illuminate a target. The target includes a pattern of filtered retroreflectors to reflect light. The filtered retroreflectors pass one wavelength of light and absorb the other. Two successive pictures or images are taken of the reflected light and the two images are then subtracted one from the other, thereby allowing for target spots to be easily tracked. Such a system is described, for example, in R. Howard, T. Bryan, M. Book, and J. Jackson, “Active Sensor System for Automatic Rendezvous and Docking,” SPIE Aerosense Conference, 1997, which is hereby incorporated by reference.
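The two-image subtraction described above can be sketched as follows. This is an illustrative model only, not the referenced system's implementation; the array names, image size, pixel values, and threshold are all assumptions chosen for demonstration:

```python
import numpy as np

# Hypothetical 8-bit grayscale frames: one taken at the wavelength the
# retroreflector filters pass (spot visible), one at the wavelength they
# absorb (spot dark). Ambient illumination is common to both frames.
frame_pass = np.full((8, 8), 40, dtype=np.uint8)
frame_absorb = np.full((8, 8), 40, dtype=np.uint8)
frame_pass[3, 4] = 250   # a retroreflector return appears only in frame_pass

# Subtracting the second image from the first cancels the common background,
# leaving a bright spot only at the reflector location.
difference = frame_pass.astype(np.int16) - frame_absorb.astype(np.int16)
difference = np.clip(difference, 0, 255).astype(np.uint8)

spots = np.argwhere(difference > 128)   # illustrative intensity threshold
print(spots)   # -> [[3 4]]
```

Because the ambient scene appears identically in both frames, it cancels in the difference, which is why the resulting image is low-noise and the target spots are easy to track.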
Another prior art video guidance sensor system uses a CMOS imaging chip and a digital signal processor (DSP) in order to provide higher-speed target tracking and higher-speed image processing. The high-speed tracking rates result in a more robust and flexible video guidance sensor. Because of these faster tracking rates, the video guidance sensor system can track faster moving objects or provide more data about slower moving objects. Such a system is described, for example, in R. Howard, M. Book and T. Bryan, “Video-based sensor for tracking 3-dimensional targets,” Atmospheric Propagation, Adaptive Systems, & Laser Radar Technology for Remote Sensing, SPIE Volume 4167, Europto Conference, September 2000, and in R. Howard, T. Bryan, and M. Book, “The Video Guidance Sensor: Space, Air, Ground and Sea,” GN&C Conference, 2000, which are also hereby incorporated by reference.
An improved video guidance system is described in R. Howard, T. Bryan and M. Book, “An Advanced Sensor for Automated Docking,” Proceedings of Digital Avionics Systems Conference (October, 2001), which is also hereby incorporated by reference. The basic components of the overall docking system include an on-board computer, control software for calculating the correct thruster firings necessary to achieve docking, a sensor for long-range (rendezvous) operations, a sensor for short-range (proximity) operations, and a grapple mechanism or fixture for allowing the system to be attached to the target vehicle. The video guidance sensor serves as the short-range sensor and must be able to handle the transition from rendezvous to proximity operations and to pick up where the rendezvous sensor leaves off.
The improved video guidance sensor is generally based on the tracking of “spots” on known targets as explained below. By tracking only spots, a large amount of image processing/scene interpretation is avoided, and because the spots occupy known positions on the target, the sensor can easily determine relative position and attitude data based on the centroids of the spots.
The basic video guidance sensor system consists of the sensor and the target. The sensor is the active part and includes a pair of laser illuminators which illuminate the target and an image processor which processes the return images. In one preferred embodiment, the target is an arrangement of corner-cube retro-reflectors with associated optical filters.
The improved video guidance sensor currently uses lasers of two different wavelengths to illuminate the target. One of the wavelengths passes through the filters in front of the retro-reflectors on the target while the other wavelength is absorbed by the filters. The system essentially functions by taking a picture of the target illuminated by the foreground lasers and then taking a picture of the target illuminated by the background lasers. The second picture is subtracted from the first, thereby producing a very low-noise image with some bright spots at the target reflector locations. After the image subtraction, the processor (DSP) processes the resulting intensity data, essentially by assembling the pixels in the image which are above a preset threshold into “blobs.” The blobs are then screened for size, the centroids of the blobs are computed and the relation between the positions of the blobs is used to determine whether there is a match between the blob pattern and the known target pattern.
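The thresholding, blob assembly, size screening, and centroid steps described above can be sketched as follows. This is a hypothetical stand-in for the DSP's processing, not the referenced system's code; the connected-components routine, threshold, minimum blob size, and test image are all illustrative assumptions:

```python
import numpy as np

def find_blobs(image, threshold):
    """Assemble above-threshold pixels into 4-connected 'blobs'
    (illustrative sketch of the blob-assembly step)."""
    above = image > threshold
    visited = np.zeros_like(above, dtype=bool)
    blobs = []
    for r, c in np.argwhere(above):
        if visited[r, c]:
            continue
        stack, pixels = [(r, c)], []
        visited[r, c] = True
        while stack:                      # flood fill one blob
            y, x = stack.pop()
            pixels.append((y, x))
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < image.shape[0] and 0 <= nx < image.shape[1]
                        and above[ny, nx] and not visited[ny, nx]):
                    visited[ny, nx] = True
                    stack.append((ny, nx))
        blobs.append(pixels)
    return blobs

def centroid(image, pixels):
    """Intensity-weighted centroid of a blob's pixels."""
    weights = np.array([image[y, x] for y, x in pixels], dtype=float)
    coords = np.array(pixels, dtype=float)
    row, col = (coords * weights[:, None]).sum(axis=0) / weights.sum()
    return (float(row), float(col))

# Synthetic difference image: one 2x2 target spot plus one 1-pixel noise blob.
img = np.zeros((6, 6), dtype=np.uint8)
img[1:3, 1:3] = 200
img[4, 4] = 180

blobs = find_blobs(img, threshold=100)
# Screen blobs for size before computing centroids, as described above.
spots = [b for b in blobs if len(b) >= 2]
print([centroid(img, b) for b in spots])   # -> [(1.5, 1.5)]
```

The resulting centroids would then be compared against the known geometry of the target reflector pattern to decide whether a match exists.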
The acquisition cycle or mode requires processing the entire image to find the target. Once the target “spots” are picked out of all of the blobs, tracking windows are established around each of the target spots, and the relative positions and attitudes between the sensor and the target are computed. The use of tracking windows has the advantage that, once the target has been acquired, only the area around each target spot must be processed each cycle, so that processing time is decreased.
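The tracking-window idea above can be sketched as follows; the function, window size, and image contents are hypothetical illustrations of the concept, not the sensor's actual routine. Only the pixels near the spot's previous position are examined, rather than the full frame:

```python
import numpy as np

def track_in_window(image, last_pos, half_width, threshold):
    """Search only a square window centered on the spot's last known
    position; return the new intensity-weighted centroid, or None if
    the spot has left the window (illustrative sketch)."""
    r0 = max(last_pos[0] - half_width, 0)
    c0 = max(last_pos[1] - half_width, 0)
    window = image[r0:last_pos[0] + half_width + 1,
                   c0:last_pos[1] + half_width + 1]
    ys, xs = np.nonzero(window > threshold)
    if len(ys) == 0:
        return None   # spot lost; fall back to full-image acquisition
    w = window[ys, xs].astype(float)
    return (float(r0 + (ys * w).sum() / w.sum()),
            float(c0 + (xs * w).sum() / w.sum()))

img = np.zeros((64, 64), dtype=np.uint8)
img[30:32, 41:43] = 220   # spot has drifted slightly from its last position

print(track_in_window(img, last_pos=(29, 40), half_width=8, threshold=100))
# -> (30.5, 41.5)
```

Processing a 17x17 window instead of the full 64x64 frame illustrates why per-cycle processing time drops sharply once tracking windows are established.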