This disclosure generally relates to three-dimensional (3-D) visualization systems. In particular, this disclosure relates to tagging computer-generated images of a 3-D model of an object with metadata representing the location of a virtual camera.
Image files are composed of digital data in one of many image file formats that can be rasterized for use on a computer display or printer. An image file format may store data in uncompressed, compressed, or vector formats. Once rasterized, an image becomes a grid of pixels, each of which is represented by a number of bits equal to the color depth of the device displaying it. It has become common practice to include camera location metadata in digital image files.
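One way such metadata can travel with an image is inside the image file's own chunk structure. The following is a minimal, self-contained sketch using only Python's standard library: it builds a tiny PNG, embeds a virtual camera location as a tEXt chunk, and reads it back. The "CameraLocation" keyword and the JSON payload shape are illustrative choices, not part of any standard.

```python
import json
import struct
import zlib

def png_chunk(ctype: bytes, data: bytes) -> bytes:
    """Serialize one PNG chunk: length, type, data, CRC-32 of type+data."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

def make_minimal_png() -> bytes:
    """Build a valid 1x1 RGB PNG entirely with the standard library."""
    ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 2, 0, 0, 0)  # 1x1, 8-bit RGB
    idat = zlib.compress(b"\x00\xff\x00\x00")            # filter byte + one red pixel
    return (b"\x89PNG\r\n\x1a\n"
            + png_chunk(b"IHDR", ihdr)
            + png_chunk(b"IDAT", idat)
            + png_chunk(b"IEND", b""))

def embed_camera_location(png: bytes, location: dict) -> bytes:
    """Insert a tEXt chunk holding the virtual camera location before IEND."""
    text = b"CameraLocation\x00" + json.dumps(location).encode("latin-1")
    iend = png.rindex(b"IEND") - 4  # back up over the 4-byte length field
    return png[:iend] + png_chunk(b"tEXt", text) + png[iend:]

def read_camera_location(png: bytes) -> dict:
    """Walk the chunk list and decode the CameraLocation tEXt payload."""
    pos = 8  # skip the 8-byte PNG signature
    while pos < len(png):
        length, ctype = struct.unpack(">I4s", png[pos:pos + 8])
        data = png[pos + 8:pos + 8 + length]
        if ctype == b"tEXt" and data.startswith(b"CameraLocation\x00"):
            return json.loads(data.split(b"\x00", 1)[1])
        pos += 12 + length  # length + type + data + CRC
    raise KeyError("no CameraLocation chunk found")
```

Because the location data rides inside the image file itself, the image remains viewable in standard viewers while still carrying the information needed to restore the viewpoint.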
As used herein, the term “viewpoint” refers to the apparent distance and direction from which a virtual camera views and records an object. A visualization system allows a user to view an image of an object from a viewpoint that can be characterized as the apparent location of a virtual camera. As used herein, the term “location” includes both position (e.g., x, y, z coordinates) and orientation (e.g., a look direction vector or the yaw, pitch and roll angles of a virtual line-of-sight).
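A camera location in this sense can be captured in a small record type. The following is a hypothetical sketch; the field names and the use of degrees for the orientation angles are assumptions for illustration, not drawn from any particular system:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class CameraLocation:
    """A virtual camera location: position plus orientation."""
    x: float      # position coordinates
    y: float
    z: float
    yaw: float    # orientation angles, in degrees (assumed convention)
    pitch: float
    roll: float

    def to_json(self) -> str:
        """Serialize to a JSON string suitable for storing as image metadata."""
        return json.dumps(asdict(self))

    @classmethod
    def from_json(cls, text: str) -> "CameraLocation":
        """Recover a location from previously stored JSON."""
        return cls(**json.loads(text))
```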
A 3-D visualization system may be used to display images representing portions and/or individual components of one or more 3-D models of an object within a graphical user interface. Visualization systems may be used to perform various operations with respect to the image of the object. For example, a user may use a visualization system to navigate to an image of a particular part or assembly of parts within the object for the purpose of identifying information for use in performing an inspection. During such navigation, the image observed by the user may be translated, rotated and scaled to reflect user-initiated changes to the location of the virtual camera. In addition, the image can be cropped in response to changes to the field-of-view of the virtual camera.
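The effect of a change in virtual camera location on the displayed image can be illustrated with a simple pinhole-projection sketch. The function below is hypothetical and assumes a yaw-only camera looking along its local +z axis (a convention chosen for illustration); it returns the horizontal pixel offset at which a world-space point appears, so translating or rotating the camera shifts where that point lands on screen, and changing the field-of-view rescales it:

```python
import math

def project(point, cam_pos, cam_yaw_deg, fov_deg=90.0, width=200):
    """Project a world-space point through a virtual camera onto a screen
    `width` pixels wide. Returns the horizontal pixel offset from the image
    center, or None if the point is behind the camera."""
    yaw = math.radians(cam_yaw_deg)
    # Translate the point into camera-relative coordinates ...
    dx = point[0] - cam_pos[0]
    dz = point[2] - cam_pos[2]
    # ... then undo the camera's yaw rotation (x-z plane).
    cx = math.cos(yaw) * dx - math.sin(yaw) * dz
    cz = math.sin(yaw) * dx + math.cos(yaw) * dz
    if cz <= 0:
        return None  # point is behind the camera
    # Focal length in pixels, derived from the horizontal field-of-view.
    f = (width / 2) / math.tan(math.radians(fov_deg) / 2)
    return f * cx / cz
```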
Navigating in 3-D visualization environments can be difficult and time-consuming, often requiring a moderate level of skill and familiarity with the 3-D visualization controls. As one illustrative example, a 3-D visualization system may be used to visualize different types of aircraft being manufactured at a facility and data about those aircraft. More specifically, a 3-D visualization application running on a computer may display a computer-aided design (CAD) model of an aircraft comprising many parts. With some currently available visualization systems, filtering the extensive amount of data available in order to obtain data of interest concerning a particular part may be more difficult and time-consuming than desired. Some 3-D visualization systems require training and experience in order for the user to easily navigate through the CAD model of the aircraft. In particular, an interested user may find it difficult to remember or re-create a particular viewpoint of a CAD model of an aircraft (or other vehicle) to be displayed on a display screen.
It is often the case in working with computer graphics applications that viewpoints need to be saved and then recovered at a later date or by other people. Images of a specific scene may be stored by the user, but if a separate file containing the location data for the virtual camera is not saved at the same time, it can be difficult to return to the exact viewpoint in the 3-D environment where the image was generated. There are several types of existing applications that address the viewpoint recovery problem. Some existing 3-D visualization solutions use named virtual camera locations or have separate or proprietary approaches to recalling predefined viewpoints. The most common approaches involve storing a separate viewpoint file or integrating a thumbnail image into a custom session file. These approaches result in additional complexity (multiple files that must be managed) and/or reduced flexibility (an inability to view the images and data in standard viewers).
In addition, the viewpoint location offset between a set of computer-generated images is often required for subsequent analysis or motion planning purposes, and can be difficult to determine from the images alone.
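Given stored location data for two images, the relative offset is the second camera's position and heading expressed in the first camera's frame of reference. The following is a minimal sketch assuming yaw-only orientations rotating about a vertical z axis (conventions chosen for illustration):

```python
import math

def relative_offset(pos_a, yaw_a_deg, pos_b, yaw_b_deg):
    """Express camera B's location in camera A's frame. Returns the position
    offset in A's frame and the relative yaw in degrees."""
    yaw_a = math.radians(yaw_a_deg)
    # World-frame displacement from A to B.
    dx = pos_b[0] - pos_a[0]
    dy = pos_b[1] - pos_a[1]
    dz = pos_b[2] - pos_a[2]
    # Apply the inverse (transpose) of A's yaw rotation to the displacement.
    rx = math.cos(yaw_a) * dx + math.sin(yaw_a) * dy
    ry = -math.sin(yaw_a) * dx + math.cos(yaw_a) * dy
    return (rx, ry, dz), (yaw_b_deg - yaw_a_deg) % 360.0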
It would be desirable to provide a process for adding virtual camera location data to computer-generated images, which location data could be used later to return to the viewpoint in the 3-D environment where the image was generated. It would also be desirable to provide a process for determining the relative offset between such computer-generated images.