A typical example of such a drone is the AR.Drone from Parrot SA, Paris, France, which is a quadricopter fitted with a series of sensors (three-axis accelerometers and gyros, altimeter). The drone is also provided with a front camera capturing an image of the scene towards which the drone is headed, and a vertically-oriented camera capturing an image of the terrain over which the drone is flying.
The drone may be piloted by a user by means of a remote-control device connected to the drone over a radio link. The drone is also provided with an automatic system for stabilized hovering flight, serving in particular to enable the drone to reach an equilibrium point in automatic manner and, once said equilibrium point has been reached, to provide the trimming corrections needed for maintaining the fixed point, i.e. correcting the small movements in translation due to external effects such as movements of the air and drift of the sensors. The altimeter, which is an ultrasound telemeter located under the drone, provides a measurement of vertical speed that makes it possible to servo-control the thrust force in order to stabilize the height of the drone. Furthermore, the inertial sensors (accelerometers and gyros) serve to measure the angular velocities and attitude angles of the drone with a certain amount of accuracy, and can therefore be used to servo-control dynamically the thrust direction of the drone along the direction opposite to that of gravity.
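By way of illustration only, the vertical servo-control described above can be sketched as a simple proportional-integral loop that derives a thrust correction from the vertical speed measured by the ultrasound telemeter. The gains, sampling period, and function name below are hypothetical and are not those of the AR.Drone.

```python
# Illustrative sketch only (not Parrot's implementation): a simple
# proportional-integral servo loop that drives the measured vertical
# speed towards zero so as to hold the drone's height.
# Gains kp/ki and the 20 ms sampling period are hypothetical values.

def altitude_servo(measured_vz, dt, state, kp=0.8, ki=0.2):
    """Return a thrust correction from the measured vertical speed (m/s).

    `state` is a dict carrying the integral term between calls.
    """
    error = 0.0 - measured_vz          # target vertical speed: 0 (hover)
    state["integral"] += error * dt    # accumulate the integral term
    return kp * error + ki * state["integral"]

# One servo step: the drone descends at 0.5 m/s, so the loop
# commands a positive thrust correction to arrest the descent.
state = {"integral": 0.0}
correction = altitude_servo(-0.5, 0.02, state)
```

A real controller would add saturation of the command and of the integral term (anti-windup), but the structure of the loop is as above.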
In order to establish hovering flight, there remains a problem of eliminating the linear speed of the drone. Unfortunately, the low cost accelerometers that are used are generally too noisy to give a satisfactory estimate of the speed of the drone once the signal from them has been integrated twice.
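The following back-of-the-envelope sketch illustrates the problem: even a small constant residual error on the accelerometer signal (here a bias of 0.02 m/s², an illustrative figure, not an AR.Drone specification; noise has a comparable cumulative effect) grows linearly in the speed estimate and quadratically in the position estimate once the signal is integrated.

```python
# Illustrative sketch: why double-integrating a low-cost accelerometer
# is unusable for speed estimation. A constant residual bias b produces
# a speed error of b*t after one integration and a position error of
# about b*t^2/2 after two. All figures below are illustrative.

dt = 0.01        # 10 ms sampling period (hypothetical)
bias = 0.02      # residual accelerometer bias, in m/s^2 (illustrative)
v = x = 0.0
for _ in range(1000):   # 1000 steps of 10 ms = 10 seconds of flight
    v += bias * dt      # first integration: speed error grows as b*t
    x += v * dt         # second integration: position error ~ b*t^2/2

# After 10 s the speed estimate is already off by about 0.2 m/s and
# the position estimate by about 1 m, before even counting noise.
```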
WO 2009/109711 A2 (Parrot) proposes estimating the horizontal speed of the drone from the image of the forward-looking camera by analyzing the succession of images picked up by said camera and identifying the movements of various characteristic points in the image.
Nevertheless, that technique suffers from a lack of accuracy, particularly at the slowest speeds (when the drone is moving forwards at low speed, the scene captured presents very little change from one image to the next). It is also extremely dependent on the presence or absence of characteristic points of interest in the scene captured by the camera: when the image is uniform, e.g. showing a wall or, outdoors, the sky, the quasi-absence of characteristic points makes that technique ineffective.
The present invention relies on using the image delivered not by the forward-looking camera, but rather by the vertically-oriented camera, in order to evaluate the horizontal speed of the drone.
Nevertheless, identifying the movement of the various points in the image picked up by said camera remains a difficult task, one that depends strongly and simultaneously on i) the nature of the scene (more or less contrast, changing to a greater or lesser extent); ii) the speed; and iii) the constraint of limiting the complexity of the calculations.
In particular, if it is desired to perform automatic stabilization servo-control while hovering, it is appropriate to have a speed measurement that is simultaneously accurate, sensitive (since linear speeds around the equilibrium point may be very low), and available in real time so that the servo-control can be performed effectively and reactively.
Nevertheless, it should be observed that the invention is not limited to evaluating speed for the purpose of stabilizing hovering flight of the drone, and that it is applicable more generally to all flying configurations of the drone, even with movement values that are close to the maximum speed of the drone (about 5 meters per second (m/s)).
Various algorithms exist that enable a speed of movement to be estimated in a scene captured by a video camera.
A first type of algorithm is the so-called “optical-flow” algorithm, the bases of which are described in particular in the following:
[1] LUCAS B. D. and KANADE T., “An Iterative Image Registration Technique with an Application to Stereo Vision”, Proc. DARPA Image Understanding Workshop, pp. 121-130, 1981; and
[2] HORN B. K. P. and SCHUNCK B., “Determining Optical Flow”, Artificial Intelligence, (17): pp. 185-204, 1981.
Reference may also be made to:
[3] MONDRAGÓN I. et al., “3D Pose Estimation Based on Planar Object Tracking for UAVs Control”, Proc. IEEE Conf. on Robotics and Automation, pp. 35-41, May 3-8, 2010,
which describes a multiresolution technique for estimating the optical flow with different resolutions for piloting a drone while landing.
The optical-flow method presents the advantage of imposing very few constraints on the scene (little contrast, little content). In addition, by using a “multiresolution” approach, it is possible to estimate both high speeds and low speeds. In contrast, that method is sensitive to rotations and changes of attitude, and it does not make it possible to verify intrinsically the quality of the results it gives, i.e. the algorithm always delivers a result provided that enough points present a large gradient, but said result is delivered even if it is meaningless.
To summarize, the optical-flow method is an “all terrain” method capable of operating over a very wide range of speeds, but the result it delivers is not always reliable, nor very accurate, in particular at low speed.
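By way of illustration, a minimal single-window Lucas-Kanade step in the spirit of reference [1] can be sketched as follows: the displacement between two frames is recovered by solving a small least-squares system built from the spatial and temporal image gradients. The synthetic scene and the one-pixel shift below are illustrative; a real implementation adds image pyramids (the “multiresolution” approach) and iterative refinement.

```python
import numpy as np

# Minimal single-window Lucas-Kanade sketch (in the spirit of [1]):
# solve the least-squares system [Ix Iy] d = -It over a patch to
# recover the inter-frame displacement d. Scene and shift are synthetic.

yy, xx = np.mgrid[0:64, 0:64]
frame1 = np.sin(0.25 * xx) + np.cos(0.20 * yy) + 0.5 * np.sin(0.15 * (xx + yy))
frame2 = np.roll(frame1, 1, axis=1)   # scene shifted by exactly 1 pixel in x

# Spatial derivatives (central differences) and temporal derivative
Ix = (np.roll(frame1, -1, axis=1) - np.roll(frame1, 1, axis=1)) / 2.0
Iy = (np.roll(frame1, -1, axis=0) - np.roll(frame1, 1, axis=0)) / 2.0
It = frame2 - frame1

# Stack the gradient constraints over a central patch and solve for d
p = np.s_[16:48, 16:48]
A = np.stack([Ix[p].ravel(), Iy[p].ravel()], axis=1)
b = -It[p].ravel()
(dx, dy), *_ = np.linalg.lstsq(A, b, rcond=None)
# dx comes out close to the true 1-pixel horizontal shift, dy close to 0
```

Note that the first-order linearization underlying this method only holds for small displacements, which is precisely why the multiresolution scheme of reference [3] is needed to handle high speeds.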
Another type of algorithm comprises the so-called “corner detector” or “point-of-interest detector” algorithms, the bases of which are set out for example in:
[4] ROSTEN E. and DRUMMOND T., “Fusing Points and Lines for High Performance Tracking”, IEEE International Conference on Computer Vision, pp. 1508-1511, 2005, and
[5] ROSTEN E. and DRUMMOND T., “Machine Learning for High-Speed Corner Detection”, European Conference on Computer Vision, pp. 430-443, 2006.
The corner-detector algorithm is accurate and robust: it takes rotation into account, and it is capable of detecting evaluation results that are aberrant and of eliminating them. Furthermore, its absolute accuracy is constant regardless of speed (unlike that of the optical-flow method), which makes it possible to obtain excellent results, in particular at low speeds, an advantage that is particularly appreciable if it is desired to use its results for stabilization and servo-control of hovering flight.
In contrast, that method imposes much greater constraints on the scene in terms of contrast and texture, which means that it is not applicable to all the situations that might be encountered.
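By way of illustration, the segment test at the heart of the FAST detector of references [4] and [5] can be sketched as follows: a pixel is declared a corner if at least n contiguous pixels on a 16-pixel circle of radius 3 around it are all brighter, or all darker, than the centre by a threshold t. The real detector adds a machine-learned decision tree and non-maximal suppression; this simplified sketch is illustrative only.

```python
import numpy as np

# Simplified FAST-style segment test (in the spirit of [4]-[5]).
# Offsets (dx, dy) of the 16 pixels on a radius-3 circle, in order.
CIRCLE = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2), (1, -3),
          (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0), (-3, 1), (-2, 2), (-1, 3)]

def is_fast_corner(img, y, x, t=20, n=9):
    """FAST-9-style test: n contiguous ring pixels all brighter/darker by t."""
    c = int(img[y, x])
    ring = [int(img[y + dy, x + dx]) for dx, dy in CIRCLE]
    for sign in (1, -1):                 # brighter arc, then darker arc
        flags = [sign * (p - c) > t for p in ring]
        run = best = 0
        for f in flags + flags:          # list doubled to handle wrap-around
            run = run + 1 if f else 0
            best = max(best, run)
        if best >= n:
            return True
    return False

# Synthetic test image: bright 20x20 square on a dark background.
img = np.zeros((40, 40), dtype=np.uint8)
img[10:30, 10:30] = 200

corner = is_fast_corner(img, 10, 10)   # top-left corner of the square
edge = is_fast_corner(img, 10, 20)     # middle of the top edge
flat = is_fast_corner(img, 20, 20)     # interior of the square
```

The test fires at the square's corner (11 contiguous darker ring pixels) but rejects the edge (only 7 contiguous) and the uniform interior, which is what makes this detector selective on textured scenes and ineffective on uniform ones.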
Finally, under all circumstances, using the calculation result for servo-controlling certain autopilot controls of a drone requires said result to be available practically in real time, and in any event sufficiently quickly to ensure that the drone can be autopiloted with all the desired reactivity.
This constraint limits the possibility of implementing conventional algorithms, which are often designed to be executed on computers having processors that are fast and memory capacities that are large.