1. Statement of the Technical Field
The inventive arrangements concern registration of two-dimensional (2D) and three-dimensional (3D) image data, and more particularly concern methods for visual interpretation of the registration performance of 2D and 3D image data. This technique serves as a metric for determining registration success.
2. Description of the Related Art
Conventional electro-optical (EO) sensors have long been used for the collection of image data and generally produce two-dimensional data. Such data generally corresponds to a projection of the image onto a planar field which can be entirely defined by x and y coordinate axes. More recently, there has been a growing interest in three-dimensional imaging data. For example, LIDAR systems use a high-energy laser, an optical detector, and timing circuitry to generate three-dimensional point cloud data. Each point in the 3D point cloud is spatially analogous to the pixel data generated by a digital camera, except that the 3D point cloud data is arranged in three dimensions, with points defined at various locations in a three-dimensional space defined by an x, y, and z coordinate axis system. One major difference is that LIDAR data is range data, whereas the 2D EO data has both position and intensity information. However, there is a mode in which the LIDAR sensor can dwell, thus creating an intensity 'image'. It should be noted that this mode is not needed to accomplish the overlapping of the two data types described herein for determining data alignment or registration.
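The contrast drawn above between the two data types can be sketched as simple data structures. This is a minimal illustration only; the class names and field types are assumptions introduced here, not part of the original disclosure.

```python
# Illustrative sketch: a 2D EO pixel carries image-plane position plus
# intensity, while a basic LIDAR return carries only range-derived
# (x, y, z) position. Class names are hypothetical.
from dataclasses import dataclass

@dataclass
class EOPixel:
    x: int            # column in the 2D image plane
    y: int            # row in the 2D image plane
    intensity: float  # measured brightness at that pixel

@dataclass
class LidarPoint:
    x: float  # position in 3D space, derived from range/timing data
    y: float
    z: float
```

A dwell-mode LIDAR collection, as noted above, could additionally attach an intensity value to each point, but that is not required for the registration described here.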
Point cloud data can be difficult to interpret because the objects or terrain features in the raw data are not easily distinguishable. Instead, the raw point cloud data can appear as an almost amorphous and uninformative collection of points on a three-dimensional coordinate system. Color maps have been used to help visualize point cloud data. For example, color maps have been used to selectively vary the color of each point in a 3D point cloud as a function of the altitude coordinate of that point. In such systems, variations in color signify points at different heights or altitudes above ground level. Notwithstanding the use of such conventional color maps, 3D point cloud data has remained difficult to interpret.
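An altitude-based color map of the kind described above can be sketched as follows. The function names, the point format, and the blue-to-red ramp are illustrative assumptions, not a specific prior-art implementation.

```python
# Sketch of a conventional altitude-based color map: each point's
# color is derived solely from its z (altitude) coordinate.

def altitude_color(z, z_min, z_max):
    """Map altitude z to an RGB triple, blue (lowest) through red (highest)."""
    if z_max == z_min:
        t = 0.0
    else:
        # Normalize altitude into [0, 1], clamping out-of-range values.
        t = max(0.0, min(1.0, (z - z_min) / (z_max - z_min)))
    # Linear blend from blue (0, 0, 255) at t=0 to red (255, 0, 0) at t=1.
    return (int(255 * t), 0, int(255 * (1 - t)))

def colorize_point_cloud(points):
    """Attach an altitude-derived color to each (x, y, z) point."""
    zs = [p[2] for p in points]
    z_min, z_max = min(zs), max(zs)
    return [(x, y, z, altitude_color(z, z_min, z_max))
            for (x, y, z) in points]
```

As the passage notes, even with such a mapping the rendered cloud conveys only relative height, which is why interpretation remains difficult.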
It is advantageous to combine 2D EO imaging data with 3D point cloud data for the same scene. This process is sometimes called data fusion. However, combining the two different sets of image data necessarily requires an image registration step to align the points spatially. Such an image registration step is usually aided by metadata associated with each image. For example, such metadata can include 1) orientation and attitude information for the sensor; 2) latitude and longitude coordinates associated with the corner points of the image; and 3) in the case of point cloud data, the raw x, y, and z point locations for the point cloud data.
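As a minimal sketch of how corner-point metadata can aid registration, the following maps a georeferenced point into pixel coordinates of an image whose metadata gives its upper-left and lower-right corner coordinates. This assumes a north-aligned, linearly gridded image; an actual registration must also account for sensor orientation and attitude, and the function name is hypothetical.

```python
# Hedged sketch: locate a (lat, lon) position within a 2D image using
# only corner-point metadata, assuming a north-up linear grid.

def geo_to_pixel(lat, lon, corners, width, height):
    """Map (lat, lon) to fractional (col, row) image coordinates.

    corners: ((ul_lat, ul_lon), (lr_lat, lr_lon)) from image metadata.
    """
    (ul_lat, ul_lon), (lr_lat, lr_lon) = corners
    # Longitude increases left-to-right across columns.
    col = (lon - ul_lon) / (lr_lon - ul_lon) * (width - 1)
    # Latitude decreases top-to-bottom across rows.
    row = (ul_lat - lat) / (ul_lat - lr_lat) * (height - 1)
    return col, row
```

Applying this to each (x, y, z) point of a georeferenced point cloud would give candidate 2D image locations for the fused overlay.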
The 2D to 3D image registration step can be difficult and time consuming because it requires precise alignment of EO and LIDAR data acquired by different sensors at different data collection times and from different relative sensor positions. Moreover, the point cloud data is usually in a different format than the EO image data, making for a more complex registration problem. Various registration schemes have been proposed to solve the foregoing registration problem. However, visual interpretation of the resulting registered EO and LIDAR data often remains difficult for human analysts. One reason for such difficulty is that, even after registration and fusion of the two types of imaging data, the three-dimensional LIDAR point cloud will often appear to float above a flat two-dimensional plane representing the two-dimensional image data. This creates two noteworthy problems. First, it makes it more difficult for a person to visualize the scene being represented by the fused image data, because it can be difficult to comprehend how the point cloud data fits into the two-dimensional image. Second, the same effect makes it more difficult to evaluate how well the registration process has worked. With the three-dimensional point cloud data appearing to float above a flat two-dimensional surface, it is difficult for a human to judge how well the various features represented by the point cloud (e.g., structures, vehicles) align with corresponding features in the two-dimensional image (e.g., building outlines or footprints, and roads). Regardless of the particular registration scheme selected, it is useful to evaluate the performance of the result.