1. Field of the Invention
The present invention relates generally to three-dimensional (3D) scan data, and in particular, to a method, apparatus, and article of manufacture for dynamically creating and presenting a 3D view of a scene by combining color, brightness, and intensity from multiple scan data sources. In other words, embodiments of the invention relate to optimizing 3D scan data viewing by dynamically combining color, brightness, and intensity data.
2. Description of the Related Art
The visualization of 3D scan data in applications (e.g., the RECAP™ application or AUTOCAD™ application) can suffer from problems such as under- or over-exposure of RGB (red, green, blue) data, as well as the lack of texture and color information in scan intensity data (also known as reflectance, i.e., the energy of the reflected laser beam). These problems can make the visual interpretation of the point cloud data by the user difficult or even impossible. The problem is usually specific to portions of the scans (e.g., very bright or very dark areas). Accordingly, what is needed is the ability to deliver better visual information to the user in real-time. To better understand these problems, a description of prior art applications and visualizations may be useful.
Many computer applications are used to visualize, design, and draft real world objects and scenes. Two exemplary applications are the AUTOCAD™ application and the RECAP™ application (both available from the assignee of the present application). The AUTOCAD™ application is a commercial software application for two-dimensional and 3D computer-aided design (CAD) and drafting. The AUTOCAD™ application is used across a wide range of industries by architects, project managers, engineers, graphic designers, and other professionals to design architectural plans, designs, etc. The RECAP™ application (also known as RECAP 360™) is a reality capture and 3D scanning software application in which users can convert scan file data to a form that can be used in CAD systems, modeling systems, and/or any other type of computer design system.
Commonly, a photograph captures the real world in a 2D representation; with laser scanning, users capture the real world in 3D, like a 3D photograph. A typical representation of laser scanned data is a point cloud. Accordingly, users can capture artifacts, buildings, topographies, or even an entire town with laser scanning and then attach the point cloud as a real-world reference in design work. This point cloud data is stored as millions of points. The RECAP™ application processes these massive data sets and provides the ability to aggregate, enhance, clean, and organize point cloud data from many sources. It also allows measuring, visualizing, and preparing data for efficient use in other programs such as CAD systems (e.g., the AUTOCAD™ application), building information model (BIM) systems, solid modeling applications, etc.
Different color models may be used to represent colors in space (e.g., colors captured from a camera). One color model is the RGB (red, green, blue) color model. To form a color with RGB, three light beams (one red, one green, and one blue) are superimposed (e.g., by emission from a black screen or by reflection from a white screen). Each of the three beams is called a component of the color, and each has an arbitrary intensity (from fully on to fully off). Zero intensity for each component (0, 0, 0) produces the darkest color (no light, considered black), and full intensity (255, 255, 255) for each component produces white. When the intensities of all the components are the same, the result is a shade of gray, darker or lighter depending on the intensity. When the intensities differ, the result is a colorized hue, more or less saturated depending on the difference between the strongest and the weakest of the intensities of the primary colors used.
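The RGB mixing rules above can be illustrated with a short sketch (purely illustrative; the `describe_rgb` helper is hypothetical and not part of any application described herein):

```python
def describe_rgb(r, g, b):
    """Classify an 8-bit RGB triple per the mixing rules described above."""
    if r == g == b:
        # Equal components yield an achromatic (grayscale) result.
        if r == 0:
            return "black"   # (0, 0, 0): darkest color, no light
        if r == 255:
            return "white"   # (255, 255, 255): full intensity
        return "gray"        # equal but intermediate intensities
    # Unequal components yield a colorized hue.
    return "colorized hue"
```

For example, (128, 128, 128) is classified as gray, while (200, 50, 50) is a colorized hue.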
The YUV model defines a color space in terms of one luma (Y) component and two chrominance (UV) components. In this regard, the color information (UV) may be added separately to the luma (Y) information. Typically, YUV signals may be created by converting from an RGB source. Weighted values of R, G, and B are summed to produce Y (a measure of overall brightness or luminance). U and V may be computed as scaled differences between Y and the B and R values. The conversion between RGB and YUV space is known to one of ordinary skill in the art and is described in the YUV Wikipedia article found at en.wikipedia.org/wiki/YUV which is incorporated by reference herein.
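The RGB-to-YUV relationship described above can be sketched as follows, assuming the common BT.601 luma weights (an assumption for illustration; other conventions, such as BT.709, use different coefficients):

```python
def rgb_to_yuv(r, g, b):
    """Convert an RGB triple to YUV using BT.601 weights (one common convention)."""
    # Y: weighted sum of R, G, and B (overall brightness/luminance)
    y = 0.299 * r + 0.587 * g + 0.114 * b
    # U and V: scaled differences between the B and R values and Y
    u = 0.492 * (b - y)
    v = 0.877 * (r - y)
    return y, u, v
```

Note that any achromatic input (equal R, G, and B) yields zero chrominance: for white (255, 255, 255), Y is 255 and both U and V are 0.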
However, as described above, when image data including RGB information is captured from a classical camera, RGB information may be missing for certain parts of an image (e.g., due to the quality of the camera/lens and/or due to under/over exposure).
As an alternative or in addition to RGB/camera based information, laser scanners may be used to capture 3D information into a point cloud. Point cloud data/information normally consists of RGB data as well as intensity data for each point. As described above, the point cloud information is based on the reflectance (i.e., the energy of the reflected laser beam sensed at the scanner). In this regard, each point is associated with a laser pulse return intensity value. Scanners identify an intensity value for each point during the capture process. While the intensity of the reflected laser beam may be strong, the signal received is based on the reflective surface, such that color information and/or texture/contrast that is in the visible field (and not the infrared field) may be missing/weak. For example, the reflectance value/information may be dependent on the angle, distance, and material of the reflecting surface, the orientation of the surface, etc. (e.g., some shading of the beam may result). In other words, the intensity/reflectance value is a measure of point reflectivity that can vary depending upon color, surface texture, surface angle, and the environment. For example, if the orientation of the surface is not orthogonal to the laser beam source, the reflected beam may be affected. Similarly, in another example, the material of the reflecting surface (e.g., a highly reflective metal surface vs. a fabric or dull wood surface) can also affect the reflectance value/information.
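The dependence of the return intensity on surface angle and distance can be illustrated with a deliberately simplified model (a Lambertian cosine falloff combined with inverse-square range attenuation; this is an assumption for illustration only, not a calibration model of any particular scanner):

```python
import math

def simplified_return_intensity(reflectivity, angle_deg, distance_m):
    """Illustrative (non-calibrated) model of laser return intensity.

    reflectivity: intrinsic surface reflectivity in [0, 1]
    angle_deg: incidence angle from the surface normal (0 = orthogonal)
    distance_m: range from scanner to surface, in meters
    """
    # Return falls off with the cosine of the incidence angle
    # and with the square of the range.
    return reflectivity * math.cos(math.radians(angle_deg)) / (distance_m ** 2)
```

Under this model, tilting a surface from orthogonal (0°) to 60° halves the return, and doubling the range quarters it, consistent with the qualitative factors described above.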
In view of the above, both the RGB/YUV camera based data and the scanner based point cloud data may be missing information (e.g., color information, texture/contrast information, intensity information, reflectance information, etc.). Such missing information may result in an image that appears under/over exposed in certain areas and/or that is missing definition/clarity. Accordingly, what is needed is the ability to acquire/view an image with as much information/definition as possible.
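As a purely hypothetical illustration of why combining the two data sources can help (this sketch is not the claimed method and the `blend_luma` helper and its `weight` parameter are assumptions for illustration), one could take chrominance from the camera RGB while blending the camera-derived luma with the scanner intensity:

```python
def blend_luma(rgb, intensity, weight=0.5):
    """Hypothetical blend of camera RGB and scanner intensity in YUV terms.

    rgb: (r, g, b) triple, each component in 0-255
    intensity: normalized scan reflectance in [0, 1]
    weight: fraction of the luma taken from the scan intensity
    """
    r, g, b = rgb
    # Camera-derived luma (BT.601 weights, one common convention)
    y_cam = 0.299 * r + 0.587 * g + 0.114 * b
    # Mix camera luma with scaled scanner intensity
    y_mix = (1.0 - weight) * y_cam + weight * (intensity * 255.0)
    # Keep chrominance from the camera data
    u = 0.492 * (b - y_cam)
    v = 0.877 * (r - y_cam)
    return y_mix, u, v
```

In an under-exposed region (RGB near black) where the scan return is strong, the blended luma recovers brightness from the intensity channel while the (weak) camera chrominance is retained.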