The present specification relates to detecting moving objects, for example, detecting moving objects on a runway on which an airplane is to land.
To land any aircraft safely, whether manned or unmanned, the status of the runway must be monitored prior to landing, regardless of the lighting conditions. Previous methods for detecting motion from a moving camera include optical-flow-based approaches described in "Passive range estimation for rotor-craft low-altitude flight," (R. S. B. Sridhar and B. Hussien, Machine Vision and Applications, 6(1): 10-24, 1993), "Detection of obstacles on runway using ego-motion compensation and tracking of significant features," (T. G. R. Kasturi, O. Camps, and S. Devadiga, Proceedings 3rd IEEE Workshop on Applications of Computer Vision, 1996 (WACV '96), pages 168-173, 1996), and "Runway obstacle detection by controlled spatiotemporal image flow disparity," (S. Sull and B. Sridhar, IEEE Transactions on Robotics and Automation, 15(3): 537-547, 1999). Other methods include background-subtraction-based approaches described in "Motion detection in image sequences acquired from a moving platform," (Q. Zheng and R. Chellappa, Proc. Int. Conf. Acoustics, Speech, and Signal Processing, Minneapolis, 5:201-205, 1993).
Optical flow approaches require the availability of camera motion parameters (position and velocity) to estimate object range. In certain previous techniques, the optical flow is first calculated for extracted features. A Kalman filter then uses the optical flow to estimate the range of those features, and the resulting range map is used to detect obstacles. In other techniques, the model flow field and residual flow field are first initialized using the camera motion parameters, and obstacles are then detected by comparing the expected residual flow with the observed residual flow field. Instead of calculating optical flow for the whole image, these techniques calculate it only for extracted features, since full-image optical flow is unnecessary and unreliable.
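The flow-to-range step described above can be sketched as follows. This is an illustrative toy model, not the method of the cited papers: it assumes a purely laterally translating camera with known speed, for which the image flow of a static point is flow = focal × speed / range, and it smooths the per-frame range measurements with a minimal scalar Kalman filter. All function names and parameter values are hypothetical.

```python
def flow_to_range(focal_px, lateral_speed, flow_px_per_s):
    """For pure lateral translation, flow = focal * speed / range,
    so range = focal * speed / flow (range in metres)."""
    return focal_px * lateral_speed / flow_px_per_s

class ScalarKalman:
    """Minimal 1-D Kalman filter tracking a slowly varying range."""
    def __init__(self, initial, process_var=1.0, meas_var=4.0):
        self.x = initial          # current range estimate (m)
        self.p = meas_var         # estimate variance
        self.q = process_var      # process noise variance
        self.r = meas_var         # measurement noise variance

    def update(self, measurement):
        self.p += self.q                      # predict (constant-range model)
        k = self.p / (self.p + self.r)        # Kalman gain
        self.x += k * (measurement - self.x)  # correct with new measurement
        self.p *= (1.0 - k)
        return self.x

# Feature at true range 120 m, focal length 800 px, camera speed 3 m/s:
# the ideal flow is 800 * 3 / 120 = 20 px/s; the readings below are noisy.
kf = ScalarKalman(initial=flow_to_range(800, 3.0, 21.0))
for flow in (19.5, 20.4, 20.1, 19.8):
    estimate = kf.update(flow_to_range(800, 3.0, flow))
```

After a few noisy flow measurements, the filtered range estimate settles near the true 120 m; thresholding such range estimates against the expected ground plane is what flags obstacles.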
In contrast to the optical flow approaches, background subtraction approaches do not need camera motion parameters. Camera motion is compensated for by estimating the transformation between two images using matched feature points. Moving objects are then detected by computing the frame differences between the motion-compensated image pairs. Optical flow methods may not be able to detect moving objects if the scale of the moving objects is small.
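The background subtraction pipeline above can be illustrated with a deliberately simplified sketch. As an assumption for brevity, the inter-frame transformation is modeled as a pure pixel translation taken as the median displacement of matched feature points (practical systems fit a homography instead); the function names and the threshold value are hypothetical.

```python
def estimate_translation(matches):
    """Median displacement of matched feature pairs ((x0, y0), (x1, y1))."""
    dxs = sorted(x1 - x0 for (x0, _), (x1, _) in matches)
    dys = sorted(y1 - y0 for (_, y0), (_, y1) in matches)
    return dxs[len(dxs) // 2], dys[len(dys) // 2]

def motion_compensated_diff(prev, curr, matches, thresh=10):
    """Warp prev by the estimated camera translation, then flag pixels
    whose absolute frame difference exceeds thresh as moving."""
    dx, dy = estimate_translation(matches)
    h, w = len(curr), len(curr[0])
    mask = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sy, sx = y - dy, x - dx          # source pixel in prev
            if 0 <= sy < h and 0 <= sx < w:
                if abs(curr[y][x] - prev[sy][sx]) > thresh:
                    mask[y][x] = 1
    return mask

# Toy example: the background is a horizontal intensity gradient, the
# camera shifts one pixel right between frames, and one bright moving
# object appears in the current frame.
prev = [[10 * x for x in range(4)] for _ in range(4)]
curr = [[10 * (x - 1) if x > 0 else 0 for x in range(4)] for _ in range(4)]
curr[1][2] = 200                             # the moving object
matches = [((0, 0), (1, 0)), ((1, 1), (2, 1)), ((2, 2), (3, 2))]
mask = motion_compensated_diff(prev, curr, matches)
```

Because the camera-induced shift is removed before differencing, the static gradient cancels and only the moving object survives in the mask, which is why this family of methods needs no explicit camera motion parameters.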