Laser distance-measuring systems sample a scene with such measurement density that the resultant mass of points (often called a point cloud) can appear as a coherent scene, in the manner of a pointillist painting. These systems produce sets of echoes, or point clouds: data sets representing points whose position, distance and other characteristics are sensed by the system.
Typically, the systems collect data in such a way as to transform raw sensor data into point data having three position coordinates: x, y and z. The raw sensor data are expressed in spherical coordinates: an angle θ representing rotation about a vertical axis, an angle φ representing rotation about a horizontal axis, and a range or distance ρ. The angle coordinates correspond to the movable laser-scanner or LIDAR components that determine the direction of an emitted laser pulse. These spherical coordinates are often transformed into Cartesian coordinates, which are more convenient for later operations on the data.
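As an illustration of this transformation, the following sketch converts one raw sensor return (θ, φ, ρ) to Cartesian (x, y, z). The axis and sign conventions assumed here (θ measured about the vertical z axis, φ as elevation above the horizontal plane) are only one possibility; actual scanners differ on these details.

```python
import math

def spherical_to_cartesian(theta, phi, rho):
    """Convert a raw return (theta, phi, rho) to Cartesian (x, y, z).

    Assumes theta is rotation about the vertical (z) axis, phi is
    elevation above the horizontal plane, and rho is the measured
    range; other conventions would change the equations' signs.
    """
    r_horiz = rho * math.cos(phi)   # projection of the range onto the horizontal plane
    x = r_horiz * math.cos(theta)
    y = r_horiz * math.sin(theta)
    z = rho * math.sin(phi)         # height above the horizontal plane
    return x, y, z
```

The same equations, applied per point, turn an entire spherical-coordinate scan into a Cartesian point cloud.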
FIG. 1 illustrates a block diagram of an example scanning system from the prior art used to create a 3D dataset. A Field Digital Vision (FDV) module 105 includes a scanning sensor for scanning an object 100 and for sensing the position in three-dimensional space of selected points on the surface of the object 100. The FDV module 105 generates a point cloud 110 which represents the sensed positions of the selected points. The point cloud 110 can also represent other attributes of the sensed positions, such as reflectivity, surface color and texture.
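A point record carrying a sensed position plus such optional attributes can be sketched as a simple data structure; the field names and attribute types here are illustrative assumptions, not a format defined by the system described.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class CloudPoint:
    """One echo in the point cloud: a sensed position plus optional attributes."""
    x: float
    y: float
    z: float
    reflectivity: Optional[float] = None           # return intensity, if the sensor reports it
    color: Optional[Tuple[int, int, int]] = None   # RGB surface color, if available

# A point cloud is then simply a collection of such records.
point_cloud = [CloudPoint(1.0, 2.0, 0.5, reflectivity=0.8)]
```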
A Computer Graphics Perception (CGP) module 115 interacts with the FDV 105 to provide control and targeting functions for the FDV module 105 sensor. In addition, using the point cloud, the CGP module 115 can recognize geometric shapes represented by groups of points in the point cloud 110, and the CGP module 115 can generate a CGP model 120 that represents these geometric shapes. From the CGP model 120, the CGP module 115 can generate a further model usable by computer-aided design (CAD) tools 125.
FIG. 2 illustrates the prior art FDV 105 from FIG. 1 including a scanning laser system (LIDAR) 210 that scans points of the object 100 and that generates a data signal precisely representing the position in three-dimensional space of each scanned point. The location can be given as a set of coordinates, which can be expressed in Cartesian coordinates, spherical coordinates, cylindrical coordinates or any other coordinate system. Each coordinate system has a set of defined axes; for example, if the dataset uses Cartesian coordinates, a set of x, y and z axes is also defined. Conversion between coordinate systems is usually a straightforward operation using well-known conversion equations. The LIDAR data signals for groups of scanned points collectively constitute the point cloud 110. In addition, a video system 215, preferably including both wide angle and narrow angle charge coupled device (CCD) cameras, is provided. The wide angle CCD camera of the video system 215 acquires a video image of the object 100 and provides, to the CGP 115 via a control/interface module 220 of the FDV 105, a signal that represents the acquired video image.
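As one example of such a conversion, the sketch below maps between Cartesian and cylindrical coordinates using the standard equations; the argument order and the convention that θ is measured from the x axis in the x–y plane are assumptions for this example.

```python
import math

def cartesian_to_cylindrical(x, y, z):
    """Cartesian (x, y, z) -> cylindrical (r, theta, z)."""
    return math.hypot(x, y), math.atan2(y, x), z

def cylindrical_to_cartesian(r, theta, z):
    """Cylindrical (r, theta, z) -> Cartesian (x, y, z)."""
    return r * math.cos(theta), r * math.sin(theta), z
```

Each conversion is the exact inverse of the other, so a dataset can be moved between the two systems without loss.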
In response to user input relative to the signal that represents the acquired video image, the CGP 115 provides a scanning control signal to the LIDAR 210, via the control/interface module 220, for controlling which points on the surface of the object 100 the LIDAR 210 scans. More particularly, the scanning control signal provided from the CGP 115 controls an accurate and repeatable beam steering mechanism to steer a laser beam of the LIDAR 210.
In addition, the narrow angle CCD camera of the video system 215 captures the intensity of the laser return from each laser impingement point, along with texture and color information, and provides this captured information to the CGP 115. Other properties can also be measured or calculated for points or groups of points, such as a normal vector for a point or group of points, or the points can be fit to particular geometric shapes, such as lines, planes, spheres, cylinders or any other geometric shape.
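As a minimal illustration of computing a normal vector for a group of points, the sketch below derives the unit normal of the plane through three non-collinear points via a cross product; a larger neighborhood would typically be fitted by least squares instead. The function name and the tuple representation of points are assumptions for this example.

```python
import math

def plane_normal(p0, p1, p2):
    """Unit normal of the plane through three non-collinear 3D points."""
    # Edge vectors spanning the plane.
    u = [p1[i] - p0[i] for i in range(3)]
    v = [p2[i] - p0[i] for i in range(3)]
    # The cross product u x v is perpendicular to both edges,
    # hence perpendicular to the plane they span.
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    length = math.sqrt(sum(c * c for c in n))
    return [c / length for c in n]
```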
The CGP 115 comprises a data processing system (e.g., a notebook computer or a graphics workstation) and special-purpose software that, when executed, configures the CGP 115 data processing system to perform the FDV 105 control and targeting functions and to perform the CGP model generation functions.
The CGP 115 controls the scanning LIDAR 210 of the FDV 105 by providing a LIDAR control signal to the FDV 105 that controls which points of the object 100 the FDV 105 scans. User input provided to the CGP 115 defines what portions of the object 100 to scan, and at what resolution.
Each data point in the point cloud 110 generated by the FDV represents both the distance from an FDV 105 “origin point” to a corresponding laser impingement point and the angle from the origin point to that impingement point. The CGP software configures the CGP 115 computer to process the data points of the point cloud 110, generated by the LIDAR 210 as a result of scanning the object 100, to display and visualize the scanned portions of the object 100. More specifically, the CGP software can configure the CGP 115 computer to recognize geometric shapes in the object 100 (“graphic perception”) and, using these recognized geometric shapes, to perform geometry construction, 3D model construction, 3D visualization, and database functions for automated acquisition or manual input of object attributes, generation of plans, sections and dimensions, data query, CAD interfaces, and networking options.
FIG. 3 illustrates an example of a physical arrangement of the prior art FDV 300 of FIGS. 1 and 2. The FDV 300 can be mounted on a tripod 305 or other mounting device. A notebook computer 310 can contain the CGP and can be in communication with the FDV 300 via a data cable 315 or in some other manner.