Introductory OpenGL terms and concepts

If you want to learn OpenGL, start by getting these concepts straight. Without them you will have no idea where to begin; come back and review them as you learn.

Model: also called a scene object; an object built from geometric primitives.

Rendering: the process by which the computer creates the final image from the model.

Shader: a small program executed by the graphics hardware (the GPU). An analogy: when drawing by hand, we usually sketch the outlines in pencil first and then color them in; shaders implement this kind of staged work. Shaders usually come in two kinds:
(1) Vertex shader: tells the GPU how to "draw the lines", that is, how to process per-vertex data such as positions and normals.
(2) Fragment shader: a small function that tells the GPU how to "color in", that is, how to handle the effects of lighting, shadow, occlusion, environment, and so on, on the surface of an object, finally producing an image.
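As an illustration, here is a minimal vertex/fragment shader pair in GLSL (modern OpenGL style). The attribute and uniform names (aPosition, uMvp, uColor) are my own placeholders, not anything from the original text; each shader would normally live in its own source string or file.

```glsl
// --- Vertex shader: decides where each vertex ends up on screen. ---
#version 330 core
layout(location = 0) in vec3 aPosition;   // per-vertex position (assumed name)
uniform mat4 uMvp;                        // model-view-projection matrix
void main() {
    gl_Position = uMvp * vec4(aPosition, 1.0);
}

// --- Fragment shader: decides the color of each rasterized fragment. ---
#version 330 core
uniform vec4 uColor;                      // a flat color (assumed uniform)
out vec4 fragColor;
void main() {
    fragColor = uColor;
}
```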

Rasterization: the process of converting vertex data into fragments, where each resulting element corresponds to one pixel in the framebuffer. In other words, it converts vector graphics into pixels. The pictures displayed on our screens are all made of pixels, while three-dimensional objects are described by points, lines, and surfaces; rasterization is the process that turns those points, lines, and surfaces into pixels that can be displayed on the screen.

Rendering pipeline: the sequence of processing stages that converts application data into the final rendered image. Put plainly, it is an assembly line: data comes in at one end and is processed step by step, stage by stage.

Vertices: points are the foundation of everything. OpenGL provides a set of functions for drawing points.

Primitives: points, lines, polygons, and so on. The glPointSize(), glLineWidth(), glLineStipple(), and glPolygonStipple() functions select the rasterization size and pattern of primitives. In addition, glCullFace(), glFrontFace(), and glPolygonMode() control how the front and back faces of polygons are rasterized differently.

Fragments: primitives are appropriately clipped, their color and texture data adjusted accordingly, and their coordinates converted to window coordinates; finally, rasterization converts the clipped primitives into fragments.

Pixels: what is actually drawn on the screen. Several function families save and convert pixels: the glPixelStore series controls how pixels are stored in memory; the glPixelTransfer and glPixelMap series control how pixels are processed before being written to the framebuffer; glDrawPixels() draws a pixel rectangle; and glPixelZoom() scales pixels.

The relationship between vertices, primitives, and fragments: geometric vertices are assembled into primitives (points, lines, or polygons); primitives are then converted into fragments; and finally fragments become pixel data in the framebuffer. Along the way, primitives are clipped, their color and texture data adjusted, and their coordinates converted to window coordinates; rasterization is the step that turns the clipped primitives into fragments.

Clipping: trimming primitives to the clip volume. Points, line segments, and polygons are handled slightly differently. A point is either kept as-is (inside the clip volume) or discarded (outside it). For line segments and polygons that lie partly outside the clip volume, new vertices must be generated at the clipping points; for polygons, complete edges must also be added between the new vertices. Whether a line segment or a polygon is clipped, the new vertices must be given boundary flags, normals, colors, and texture-coordinate information.

Bitmap: a rectangle of 0s and 1s that serves as a mask for a particular fragment pattern; every fragment it produces carries the same associated data. A bitmap can be defined with glBitmap().

Texture mapping: mapping a specified part of a texture image onto each primitive. Each fragment has a texture-coordinate attribute; that coordinate indexes into the texture image, and the color value at that position is fetched to modify the fragment's RGBA color, completing the mapping.

Fog: after rasterization, fragments carry their post-texturing color. A blend factor can then be used to blend in a fog color; the size of the blend factor is determined by the distance between the viewpoint and the fragment.


Source: blog.51cto.com/14207158/2535084