Understanding OpenGL Rendering Principles from a Front-End Perspective (1)

 

1. OpenGL

OpenGL is a 3D graphics API, though it can also be used to draw 2D objects. It provides a large set of functions for manipulating models and images. The OpenGL graphics library itself is usually implemented by the graphics card manufacturer, and the graphics card we buy supports a particular version of OpenGL.

The figure below shows a rotating cube drawn with OpenGL.

2. Rendering Principles

2.1 The Rendering Pipeline

In OpenGL, everything lives in 3D space, but our screens and windows are 2D, so OpenGL has to convert 3D coordinates into 2D coordinates. The part of OpenGL that does this is the rendering pipeline (graphics pipeline).

The rendering pipeline can be divided into two parts: the first converts 3D coordinates into 2D coordinates; the second converts those 2D coordinates into actual colored pixels.

2.2 Shaders

Converting a set of 3D coordinates into colored 2D pixels on the screen takes the rendering pipeline through many steps. Each step takes the previous step's output as its input; all the steps are highly specialized, each has one specific job, and they can easily run in parallel. A GPU has thousands of processing cores to push data through the pipeline quickly, and each step is handled by many small programs running on the GPU. These small programs are called shaders.

Some of these shaders are configurable: developers can write their own shaders to replace the existing ones, which gives us freer, finer-grained control over the rendering process. And because shaders run on the GPU, they save precious CPU time; in day-to-day development we should take advantage of GPU rendering to improve performance.

Shaders are usually written in GLSL, which stands for OpenGL Shading Language.

2.3 An Example

The diagram below illustrates the steps of an abstract rendering pipeline; the stages shown in blue are the ones where we can inject our own shaders.

As the figure shows, turning vertex data into finished pixels takes many steps. Let's briefly explain each step, along with some code.

We pass the rendering pipeline a set of 3D coordinates that can form a triangle; this set of data is called vertex data. Vertex data is a collection of vertices, and a vertex is, at minimum, a 3D coordinate, as sketched below.
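To make this concrete, here is a minimal sketch of uploading such vertex data, assuming an OpenGL 3.3+ context and a function loader such as GLAD are already set up; the variable names and coordinate values are illustrative, not anything this article fixes.

    #include <glad/glad.h>

    // three vertices that form one triangle, in normalized device coordinates
    float vertices[] = {
        -0.5f, -0.5f, 0.0f,   // bottom-left
         0.5f, -0.5f, 0.0f,   // bottom-right
         0.0f,  0.5f, 0.0f    // top
    };

    GLuint VAO, VBO;

    void setupTriangle()
    {
        glGenVertexArrays(1, &VAO);
        glGenBuffers(1, &VBO);
        glBindVertexArray(VAO);
        glBindBuffer(GL_ARRAY_BUFFER, VBO);
        // copy the vertex data into GPU memory
        glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
        // each vertex is three floats (x, y, z), exposed as attribute 0
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void*)0);
        glEnableVertexAttribArray(0);
    }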

The first step of the rendering pipeline is the vertex shader (Vertex Shader). Each vertex we passed in is handed to it, and the vertex shader lets us do some basic processing on vertex attributes, such as transforming positions.
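A minimal vertex shader might look like the sketch below (shown as a C++ raw string literal so it can be fed to the compile step later in this article). It simply forwards each incoming position unchanged; real vertex shaders usually apply transformations here.

    const char* vertexShaderSource = R"(
    #version 330 core
    layout (location = 0) in vec3 aPos;   // attribute 0 from the vertex data

    void main()
    {
        // gl_Position is the vertex shader's built-in clip-space output
        gl_Position = vec4(aPos, 1.0);
    }
    )";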

Next comes the primitive assembly (Shape Assembly) stage, where the vertices output by the vertex shader are assembled into a primitive shape. In this example, the vertices are assembled into a triangle.
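Primitive assembly is driven by the primitive type we pass to the draw call; GL_TRIANGLES asks OpenGL to assemble every three vertices into one triangle. A sketch, assuming the VAO from the earlier snippet and the shaderProgram linked near the end of this article:

    glUseProgram(shaderProgram);
    glBindVertexArray(VAO);
    glDrawArrays(GL_TRIANGLES, 0, 3);   // 3 vertices -> 1 assembled triangle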

From primitive assembly we move to the geometry shader stage, where we can form new shapes by emitting additional vertices. In this example, a second triangle is formed.
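If we choose to supply one, a geometry shader for this example could look like the sketch below: it receives the assembled triangle and emits two, the original plus a copy shifted along the x axis (the 0.5 offset is an arbitrary illustration value).

    const char* geometryShaderSource = R"(
    #version 330 core
    layout (triangles) in;
    layout (triangle_strip, max_vertices = 6) out;

    void main()
    {
        // pass the original triangle through unchanged
        for (int i = 0; i < 3; ++i) {
            gl_Position = gl_in[i].gl_Position;
            EmitVertex();
        }
        EndPrimitive();

        // emit a second triangle, offset to the right
        for (int i = 0; i < 3; ++i) {
            gl_Position = gl_in[i].gl_Position + vec4(0.5, 0.0, 0.0, 0.0);
            EmitVertex();
        }
        EndPrimitive();
    }
    )";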

In the tessellation shader stage, each primitive can be subdivided into many smaller primitives. In this example, more triangles can be generated to create a smoother, more detailed surface. If that is hard to picture, the figure below illustrates what the tessellation shader does.
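For completeness, here is a sketch of a tessellation control shader that asks the fixed-function tessellator to subdivide each triangle patch. The level 4.0 is an arbitrary illustration value, and a matching tessellation evaluation shader plus a GL_PATCHES draw call (with glPatchParameteri(GL_PATCH_VERTICES, 3)) would also be required.

    const char* tessControlSource = R"(
    #version 410 core
    layout (vertices = 3) out;

    void main()
    {
        // pass each control point through unchanged
        gl_out[gl_InvocationID].gl_Position = gl_in[gl_InvocationID].gl_Position;
        if (gl_InvocationID == 0) {
            gl_TessLevelInner[0] = 4.0;  // subdivision of the interior
            gl_TessLevelOuter[0] = 4.0;  // subdivision of each edge
            gl_TessLevelOuter[1] = 4.0;
            gl_TessLevelOuter[2] = 4.0;
        }
    }
    )";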

After the tessellation shader comes the rasterization stage (Rasterization Stage), which maps the final primitives to the corresponding pixels on the screen, producing fragments for the next stage, the fragment shader, to use.

The fragment shader's main mission is to calculate the final color of a pixel, and it is the stage where advanced OpenGL effects are applied. A fragment shader typically works with many kinds of 3D scene data, including lighting, shadows, and color.
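A minimal fragment shader is sketched below: it colors every fragment a fixed orange. A real one would compute the color from inputs such as lighting and textures, as described above.

    const char* fragmentShaderSource = R"(
    #version 330 core
    out vec4 FragColor;   // the fragment's final color (RGBA)

    void main()
    {
        FragColor = vec4(1.0, 0.5, 0.2, 1.0);
    }
    )";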

Once all the colors have been determined, the final primitives are passed to the last step, which we call the alpha test and blending stage. This stage checks each fragment's depth to determine whether one object is behind another: a fragment that is blocked may be discarded, and a translucent fragment may be blended with the colors already in the framebuffer.
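Enabling these tests from the host side might look like the sketch below, assuming the default framebuffer has a depth buffer.

    glEnable(GL_DEPTH_TEST);   // discard fragments hidden behind closer ones
    glEnable(GL_BLEND);        // mix translucent fragments with the framebuffer
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);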

As the above shows, the steps and logic of the whole rendering pipeline are very complex, and many of the steps are configurable, but in general we only write the vertex shader and the fragment shader ourselves and use the defaults for the other shaders. In actual OpenGL programming, we need to define at least a vertex shader and a fragment shader. (Note that before version 3.1, OpenGL included a fixed-function pipeline; from version 3.1 on, the fixed-function pipeline was removed from the core profile, so we have to use shaders to do this work.)
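Putting the two required shaders together, here is a minimal compile-and-link sketch, reusing the vertexShaderSource and fragmentShaderSource strings above; error checking with glGetShaderiv/glGetProgramiv is omitted for brevity.

    GLuint vs = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(vs, 1, &vertexShaderSource, NULL);
    glCompileShader(vs);

    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fs, 1, &fragmentShaderSource, NULL);
    glCompileShader(fs);

    GLuint shaderProgram = glCreateProgram();
    glAttachShader(shaderProgram, vs);
    glAttachShader(shaderProgram, fs);
    glLinkProgram(shaderProgram);

    // once linked into the program, the shader objects can be deleted
    glDeleteShader(vs);
    glDeleteShader(fs);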

3. Summary

This is the first article in this series, briefly explaining some principles of OpenGL. Follow-up articles will add code analysis, covering shaders (Shader), textures (Texture), transformations (Transformation), coordinate systems (Coordinate Systems), cameras (Camera), and so on.

Author: Cui Di


Source: www.cnblogs.com/yixinjishu/p/11269220.html