Embodiments relate to an automatic process that optimizes geometry and texture data for graphics/rendering engines by reorganizing the data so that it can be culled and batched more efficiently.
In computer graphics, real-world objects may be represented as three-dimensional (3D) geometric models 10, as shown in FIG. 1. Conventional geometric models 20 may be defined in computer code as a hierarchy of nodes, as shown in FIG. 2. A variety of node types, each with distinct attributes, may define the model. The geometric model 20 may contain a group node. Each group node may include object nodes and other group nodes. Each object node may include, but is not limited to, polygons. The surface of a 3D object may be defined by a collection of polygons. Polygons may be defined by their vertices at 3D coordinates. Objects may be grouped with other objects or other groups. Some nodes may control the position or visibility of the nodes below them in the hierarchy. Textures may be bitmap images applied to a polygon. Textures may vary the color or other attributes across the surface of a polygon. As may be appreciated, 3D geometric models may have many configurations.
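The node hierarchy described above may be sketched in code as follows. This is a minimal illustration only, not a definitive implementation; all class names, fields (such as the `visible` flag and `position` attribute used to show a node controlling the nodes below it), and the example geometry are illustrative assumptions, not taken from the embodiments.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

# A vertex is a 3D coordinate.
Vertex = Tuple[float, float, float]

@dataclass
class Texture:
    """A bitmap image applied across the surface of a polygon."""
    path: str  # hypothetical file path for the bitmap image

@dataclass
class Polygon:
    """A polygon defined by its vertices at 3D coordinates."""
    vertices: List[Vertex]
    texture: Optional[Texture] = None

@dataclass
class ObjectNode:
    """An object whose surface is defined by a collection of polygons."""
    name: str
    polygons: List[Polygon] = field(default_factory=list)

@dataclass
class GroupNode:
    """A group node containing object nodes and other group nodes.

    The position and visible attributes illustrate how a node may
    control the position or visibility of the nodes below it.
    """
    name: str
    position: Vertex = (0.0, 0.0, 0.0)
    visible: bool = True
    children: List[object] = field(default_factory=list)  # GroupNode or ObjectNode

# Example: one triangle forming part of an object's surface,
# grouped under a root group node.
tri = Polygon(vertices=[(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)])
obj = ObjectNode(name="cube", polygons=[tri])
root = GroupNode(name="root", children=[obj])
```

Setting `root.visible = False` would, in a renderer that honors the flag, hide every node below the root, matching the hierarchical control described above.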
Referring now to FIG. 3, a conventional scene graph 30 is shown. A scene graph 30 may be a data structure that defines the logical and spatial representation of a graphical scene. It may include the positions and orientations of all individual geometric models. The hierarchy of nodes within individual model files may be considered an extension of the scene graph. The scene graph 30 may also include cameras, lights, and other parameters necessary to define a scene. Incrementally changing the position of the camera or of individual models over time may create the illusion of movement within the scene.
In the scene graph 30, there may be, but is not limited to, a model for building 1, a model for building 2, a model for a car, a model for tree 1, and a model for tree 2.
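A scene graph of the kind described above, holding the position and orientation of each model plus a camera, may be sketched as follows. This is an illustrative sketch under stated assumptions: the class names, the Euler-angle orientation, and the `step_camera` helper are hypothetical choices, not part of the embodiments; only the five model names come from the example scene graph 30.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class ModelInstance:
    """Position and orientation of one geometric model in the scene."""
    name: str
    position: Vec3 = (0.0, 0.0, 0.0)
    orientation: Vec3 = (0.0, 0.0, 0.0)  # Euler angles; an illustrative choice

@dataclass
class Camera:
    """A camera, one of the parameters necessary to define a scene."""
    position: Vec3 = (0.0, 0.0, 0.0)

@dataclass
class SceneGraph:
    """Logical and spatial representation of a graphical scene."""
    models: List[ModelInstance] = field(default_factory=list)
    camera: Camera = field(default_factory=Camera)

# The example scene graph 30: two buildings, a car, and two trees.
scene = SceneGraph(models=[
    ModelInstance("building 1", position=(0.0, 0.0, 0.0)),
    ModelInstance("building 2", position=(10.0, 0.0, 0.0)),
    ModelInstance("car", position=(5.0, 0.0, 2.0)),
    ModelInstance("tree 1", position=(2.0, 0.0, 4.0)),
    ModelInstance("tree 2", position=(8.0, 0.0, 4.0)),
])

def step_camera(scene: SceneGraph, dx: float) -> None:
    """Incrementally move the camera along x, creating the
    illusion of movement when rendered frame after frame."""
    x, y, z = scene.camera.position
    scene.camera.position = (x + dx, y, z)

# Three small increments advance the camera 1.5 units along x.
for _ in range(3):
    step_camera(scene, 0.5)
```

The same incremental update applied to a `ModelInstance.position` (for example, the car's) would animate that model instead of the viewpoint.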