Before outlining the characteristics of prior art systems and methods and describing the systems and methods that are specific to the invention in greater detail, it is appropriate to begin by briefly summarizing the main characteristics of the above-mentioned photogrammetric technique, with reference to FIG. 1 accompanying the present description.
Photogrammetry is a technique for determining the shape, the dimensions, and the position of an object, on the basis of perspective views of said object recorded using photographic methods.
In the context of the invention, the term “object” should be considered in its broadest meaning: it may be constituted by an object proper, or else, for example, by a scene in space (landscape, etc.).
In the example shown in FIG. 1, the object 1 in question is a jug. The coordinates of each point, e.g. P1 (x1,y1,z1) and P2 (x2,y2,z2) of the object 1 can be referenced in an orthogonal frame of reference XYZ.
A three-dimensional photograph of an object is based on a perspective cone constituted by the set of rays converging on a viewpoint (e.g. O1 or O2) from each point on the surface of the object (e.g. P1 (x1,y1,z1) and P2 (x2,y2,z2)) and intersecting a plane (e.g. S1 or S2). A perspective cone is defined by the position on a photograph (plane S1 or S2 in the example of FIG. 1) of the foot (e.g. PP1 or PP2) of the perpendicular dropped from O1 onto S1 or from O2 onto S2, the perpendicular having length C1 or C2, respectively. Thus, if the two cones relating to two different viewpoints O1 and O2, from which two photographs S1 and S2 have been taken, are known, the position [x1,y1,z1] of the point P1 of the object 1 can be determined as the point of intersection of the corresponding rays O1P11 and O2P12, where P11 is the point of intersection of the ray P1O1 with the plane S1 and P12 is the point of intersection of the ray P1O2 with the plane S2. Similarly, the position [x2,y2,z2] of the point P2 on the object 1 is determined as the point of intersection of the corresponding rays O1P21 and O2P22, where P21 is the point of intersection between the ray P2O1 and the plane S1 and P22 is the point of intersection between the ray P2O2 and the plane S2. The distance D is the distance between the points O1 and O2.
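By way of a purely illustrative sketch (the function and variable names below are assumptions, not part of the present description), the intersection of two corresponding rays such as O1P11 and O2P12 can be computed as the midpoint of the shortest segment joining the two rays, which accommodates the fact that, with measurement error, two rays in space rarely intersect exactly:

```python
import numpy as np

def triangulate(o1, d1, o2, d2):
    """Return the midpoint of the shortest segment joining
    ray (o1, direction d1) and ray (o2, direction d2)."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    b = o2 - o1
    # Normal equations of the least-squares problem
    # minimizing |(o1 + t1*d1) - (o2 + t2*d2)|^2 over (t1, t2).
    a12 = -(d1 @ d2)
    t1, t2 = np.linalg.solve(np.array([[1.0, a12], [a12, 1.0]]),
                             np.array([d1 @ b, -(d2 @ b)]))
    return 0.5 * ((o1 + t1 * d1) + (o2 + t2 * d2))
```

For instance, with O1 at the origin, O2 at distance D = 1 along the X axis, and ray directions taken from each viewpoint toward the same object point, the function recovers that point.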
To obtain good quality measurements and to enable the image to be processed automatically in a manner that is fast, accurate, and reliable, it is necessary to have available the precise characteristics of the photographs, the principal ones being listed below:
the locations of the images: it is essential to know the coordinates of the image planes (S1 and S2), their inclinations in three dimensions (azimuth, elevation, and roll angles), and the positions of the centers of the images; and
the optical characteristics of the camera: an accurate photograph is defined as an exact central projection whose projection center is situated at a distance, referenced c below, from a “principal” point. The parameters of the corresponding simplified mathematical and geometrical model, i.e. the principal distance c and the image coordinates of the principal point, referred to below as PP(ξ0, η0), are known as the internal orientation elements. This ideal representation is nevertheless not an accurate reflection of reality: account needs to be taken of errors caused by the lenses, the camera chambers, and the photographs themselves, used in association with the camera, in order to obtain the required high level of accuracy.
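The ideal central projection characterized by the internal orientation elements can be sketched as follows (an illustrative model only; the rotation matrix R and the sign conventions are assumptions, since they vary from one photogrammetric formulation to another):

```python
import numpy as np

def project(point_xyz, R, o, c, pp):
    """Central projection of a world point onto the image plane.

    R  : 3x3 rotation taking world axes to camera axes (assumed convention)
    o  : projection center (viewpoint, e.g. O1 or O2)
    c  : principal distance
    pp : image coordinates of the principal point
    """
    # Camera coordinates, with z measured along the optical axis.
    x, y, z = R @ (np.asarray(point_xyz, dtype=float) - o)
    # Similar triangles: image offsets scale as c / z.
    return np.array([pp[0] + c * x / z, pp[1] + c * y / z])
```

With the identity rotation, the viewpoint at the origin, c = 1, and the principal point at (0, 0), the object point (2, 3, 4) projects to (0.5, 0.75), illustrating how a change in c (e.g. during focusing) rescales all image coordinates about the principal point.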
From the above, it is important to know with precision:
the coordinates of the principal point PP(ξ0, η0) on the images: it is important to define these coordinates since the principal point serves as the origin of the frame of reference of the image; this frame of reference is essential for defining the projections that are used for establishing the positions of the photographed objects;
the errors due to the optical system: these errors degrade measurement quality (chromatic aberration, spherical aberration, coma, astigmatism, field curvature, distortion, etc.);
the aperture of the lens: the smaller the aperture of the lens, the greater the precision of the points in the image; the image is then said to be “sharp”. During photogrammetric restitution, a projection center is needed, and its size can be taken as being a point (in theory only a single light ray passes through it). Unfortunately, the size of the projection center depends on the size of the aperture while the picture is being taken. It is therefore necessary to minimize the aperture while taking pictures, to a value that is as small as possible while nevertheless allowing enough light to pass to expose the image;
the principal distance: to prepare the step of modeling by photogrammetric restitution, it is necessary to know accurately the principal distance c for each image. Unfortunately, this distance varies during focusing: in order to obtain a sharp image of a near object, it is necessary to vary the distance between the lens and the camera sensor, for example a charge-coupled device (CCD) type semiconductor sensor. Similarly, if there is a change in the focal length, c is modified; and
outlining: one of the traditional difficulties of the photogrammetric technique is the problem of outlining the objects to be modelled, i.e. correctly analyzing the outline of the object to be modelled.
The photogrammetric technique gives rise to parallax errors if measurements (of position, etc.) are not taken with sufficient accuracy, and this has a great effect on the outlining of objects: points on the outline of the object can be confused with points on objects that are in fact situated behind the looked-for outline. To solve this problem, it is useful to determine the differences in field depth for the various objects to be modelled; and
the number of images: the greater the number of characterizing images, the finer the accuracy of the measurements after the stage of photogrammetric reconstruction.
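As an illustrative sketch of how one of the optical errors listed above, radial distortion, is commonly compensated (the two-coefficient polynomial model and the names k1, k2 are assumptions, not taken from the present description), measured image coordinates can be corrected relative to the principal point:

```python
def undistort(xi, eta, pp, k1, k2):
    """Apply a two-term polynomial radial correction to measured image
    coordinates (xi, eta), taken relative to the principal point pp.
    The coefficients k1 and k2 would be determined by camera calibration."""
    dx, dy = xi - pp[0], eta - pp[1]
    r2 = dx * dx + dy * dy
    # Radial distortion displaces points along the line joining them
    # to the principal point; the correction rescales that radius.
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return pp[0] + dx * scale, pp[1] + dy * scale
```

With k1 = k2 = 0 the coordinates pass through unchanged, corresponding to the ideal central projection; nonzero coefficients model the barrel or pincushion distortion introduced by real lenses.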
Photogrammetry finds applications in numerous fields which can arbitrarily be subdivided into four main categories as summarized briefly below:
1. Very large-scale applications: applications of this type involve making three-dimensional measurements over extents ranging from infinity down to several kilometers. This is essentially the kind of photogrammetry implemented by artificial satellites, e.g. satellites orbiting the Earth, with the relief of the ground surface being reconstructed by the photogrammetric technique. Scale typically lies in the range 1/50,000 to 1/400,000, and the term “space-grammetry” can be used. The most usual applications are as follows: map-making, surveying, and the earth sciences (studying ice, avalanches, air flows, swell, tides, etc.).
2. Large-scale applications: an important application of photogrammetry is drawing up topographical maps and plans from aerial photography. It is generally applied to the base level survey maps of a country (at scales that vary, depending on region, over the range 1/5000 to 1/200,000) and to the surveying that is required for civil engineering work, water works, town planning, land registry, etc.
3. Human scale applications: here the idea is to obtain three-dimensional measurements of objects or extents lying in the range a few kilometers to a few centimeters. There are numerous fields of application such as: inspecting works of art; determining the volumes of mines and quarries; applications associated with the automobile industry (e.g. the geometrical characteristics of bodywork), hydraulics (waves, water movements), plotting trajectories and studying the travel of vehicles, analyzing traffic and accident reports, drawing up plans for computer-assisted design (CAD) techniques, quality control, machining, establishing dimensions, and architectural and archeological surveying.
4. Applications at microscopic scale: here the idea is to obtain three-dimensional measurements of objects ranging in size from a few millimeters to a few micrometers. Fields of application include: biology, microgeology, micromechanics, etc.
The present invention relates more particularly, but not exclusively, to applications in the third category, i.e. applications at a “human” scale.
In this category of applications, various systems and methods have been proposed in the prior art. Without being exhaustive, the main solutions are summarized below.
There exist systems provided with one or more cameras positioned on robots or on articulated arms. With systems of that type it is indeed possible to achieve good accuracy; however, they generally present numerous drawbacks that can be summarized as follows: the equipment is heavy and difficult to transport, its range of action is small, its cost is high, the system can be used only by specialists since it is complex to operate, and image processing is laborious and subject to error.
There exist systems that make use of optical targets. Optical targets are placed, after metrology, on the object or in the space that is to be modelled so as to facilitate subsequent processing by serving as references, and the positions of the camera(s) are then measured for each picture taken. By way of example, such systems are available from the suppliers “Blom Industry” and “GSI”. Such systems give good accuracy and lead to fewer processing errors than systems of the above type. Nevertheless, they are not free from drawbacks, and in particular they suffer from the following: they are difficult and lengthy to implement since it is necessary to install expensive targets, the field of measurement is limited to the area where the targets have been installed, and, once again, that type of system is suitable for use only by specialists since it is complex to operate.
There exist systems that make use of multichromatic light. One or more cameras are placed at locations that have been accurately identified in advance, and chromatic light (several beams of different colors) is projected onto the object. In practice, that technique is limited to objects of small dimensions only, and its use is restricted to closed premises. Only the shape of the object can be determined and, because of the characteristics inherent to the method, the colors of the object are spoilt by the processing. The processing is complex, and as a result that type of system is for use by specialists only.
There exist systems and techniques for image processing that rely on manual intervention to identify “remarkable” points. That solution is entirely software-based. Several photographs are taken of an object at various different angles by means of a camera. The images are then digitized and processed by suitable specific software. It is then necessary to identify manually on each photograph a maximum number of remarkable points that are common to all of the photographs, and, a priori, that operation is time consuming. The software then makes use of said points to position each photograph and to generate a three-dimensional image. That method presents a major advantage: pictures can be taken easily and quickly. However it is not free from drawbacks, and in particular it requires manual preprocessing that is laborious, as mentioned above, and the resulting accuracy is poor, as is inherent to any manual processing.
There also exist image processing systems and techniques that rely on tracking “remarkable” points. In that case also, specialized software is required. A space or an object is filmed using a video camera. Once the film has been digitized, remarkable points are selected in a particular image of the film. The same remarkable points are then identified in each image by a software method known as “point tracking”. An image processing algorithm then enables the software to determine the position of each focal plane for each image and thus to make a three-dimensional model of the universe as filmed. The main advantage of that method is similar to the advantage of the preceding method: pictures are taken quickly and easily. Its disadvantages are likewise similar: lengthy image preprocessing, manual intervention required to eliminate errors, and poor accuracy.
There exist picture-taking systems with a rotary turntable. An object of small size (typically a few tens of centimeters) is placed on the turntable, which is caused to rotate at constant speed. A stationary camera is placed outside the turntable and films the object. Although that system presents genuine advantages: pictures are taken quickly and easily, image processing is simple, and accuracy is good, it can nevertheless be applied only to objects that are small in size.
Finally, mention can be made of picture-taking systems which include “mechanical” knowledge concerning the locations of the images. For that type of system, the location of the camera is known by means of a mechanical device on a rail or the like. In that case also, the system presents genuine advantages: pictures are taken quickly and easily, image processing is simple, and accuracy is good. Nevertheless like the preceding system it can only be applied to objects of small size. In addition, it turns out to be lengthy and expensive to implement.
From the above, the main characteristics of the prior art systems and methods mentioned above can be summarized as follows.
Firstly, it can clearly be seen that three-dimensional measurement using images takes place in two stages.
The first stage consists in image acquisition proper.
The second stage, generally referred to as “photogrammetric restitution”, consists in arranging the recorded images appropriately and in applying an image processing method to said arrangement (which method may be of the mechanical and/or computer type), one of the main purposes thereof being to establish the dimensions and the positions of the objects that have been photographed. By extension, it is then possible to make a three-dimensional model of the objects.
Systems and methods exist which enable good accuracy to be achieved concerning the positions of the photographs, but in the present state of the art those methods are expensive, require complex infrastructure, and are therefore difficult to implement (indeed they rely on specialists), and they impose constraints on the size of the object that is to be modelled.
There also exist systems and methods that extrapolate image position information directly from image processing. Implementation is simple, but the accuracy of the resulting data generally turns out to be too coarse, and manual intervention is also generally needed. As a result, processing is slow and subject to error.
As a general rule, the resulting difficulties are such that, in practice, use is made of systems based on laser telemetry. Such systems have the advantage of measuring the dimensions of elements constituting objects, but they also have the unavoidable drawback of being too coarse compared with traditional images and of being incapable of processing color information, specifically because laser sources are used.