Shaders are software programs used by the rendering hardware of a computer to calculate the color and shape of an object. Currently, software libraries such as the Open Graphics Library (OpenGL) and DirectX include shading functions in their Application Programming Interfaces (APIs). These libraries enable programmers to write programs that access the graphics hardware features of a computer without detailed knowledge of the underlying graphics hardware.
Shading languages such as the OpenGL Shading Language (GLSL) have been developed based on high-level programming languages such as C or C++. These languages allow developers to avoid using assembly language or hardware-specific languages. The high-level OpenGL shading constructs are compiled to Graphical Processing Unit (GPU) machine language. Alternatively, the OpenGL Architecture Review Board (ARB) assembly language can be used from a high-level programming language to develop shader programs, as described further below.
There are different types of shaders such as vertex shaders, geometry shaders, and fragment (or pixel) shaders. Vertex shaders operate on datasets called vertices. Properties such as the color and position of the vertices can be changed by the vertex shader. Geometry shaders operate on groups of vertices that form primitives. Fragment (or pixel) shaders calculate the color value of individual pixels using the polygons produced by the vertex and geometry shaders. The set of vertices and/or pixels defines the shape, color, and other properties of a graphics object. Generally, a shader program includes all three kinds of shaders (or at least a vertex and a fragment shader). Because an object typically has far fewer vertices than covered pixels, code sections placed in the vertex shader execute faster, while code sections placed in the fragment shader execute slower but produce finer results. In the following discussions, the term shader (used alone) or shader program refers to all varieties of shaders, while individual shaders are identified by their specific prefixes (vertex, fragment, geometry, etc.).
Graphics application programmers write the application in a high-level language such as C++ and write the shaders in ARB or GLSL. The shader constructs are embedded as string constants in the high-level language. FIG. 1 illustrates an example of such a program written in C++. As shown, the program includes several string constants that contain program statements in ARB. However, when the application requires a large number of different variants of a shader, string constants become inconvenient, because the shader variants share a lot of similar code that is copied and pasted from one variant to another.
Programmers also use macros to combine snippets of code. FIG. 2 illustrates an example of using macros to create shader programs in ARB. Macros such as OZ_GL_FP_MAGNIFIER are used to combine the snippets. The resulting programs are, however, hard to debug, understand, or extend.
One alternative is to assemble shaders by programmatically concatenating strings. This approach results in C++ code that is complicated and hard to write or maintain, because assembling shaders out of many small string snippets is hard to read. The C++ code must manage which operations go into the vertex shader, which operations go into the fragment shader, and which variables (e.g., uniform, attribute, or varying) are needed to support them. The complexity of that task limits the use of this alternative. Furthermore, assembling strings at runtime requires accounting for all the different ways the strings may need to be combined to produce a valid program.
For instance, when a shader application has a mask, a blend mode, and lighting, it becomes too complex to include all three areas in one shader, since each has its own issues to manage. Masking supports multiple inputs with different Boolean operators, blending supports operations such as gamma correction and bilinear sampling, and lighting supports multiple instances of several different kinds of light sources. It becomes too complex to take several components of an imaging operation and integrate them into a dynamic shader.
When combining different calculations to make dynamic shaders, one main challenge is to determine which calculations should be done by the main CPU, which by the vertex processor, and which by the fragment processor. Decisions must be made as to how often to perform different parts of the shader, e.g., once for every single pixel, once per vertex, or just once per object. Calculations should be done at a high frequency by the fragment shader if necessary, but otherwise should be done at a low frequency (by the vertex shader or the CPU) for better performance. One example of an existing shader programming system is SH. SH allows the use of high-level programming language constructs. However, the instructions written in SH have to be specifically allocated to either the vertex or the fragment shader, with no flexibility to allocate instructions to each shader at runtime.
There is, therefore, a need in the art for a framework that can represent shading calculations and let a user manipulate them, even if they come from different modules of the application. It is also desirable to be able to combine different pieces of shader code, automatically eliminating unnecessary steps and merging redundant ones. It is further desirable to have a framework that either determines optimal frequencies automatically or allows manual control over what gets done by the CPU, the vertex shader, or the fragment shader.