OpenGL Learning (4): Creating a Triangle

Disclaimer: This is an original article by the blogger, licensed under the CC 4.0 BY-SA agreement. Please include the original source link and this statement when reproducing it.
This link: https://blog.csdn.net/qq_39030818/article/details/102603340

LearnOpenGL——https://learnopengl-cn.github.io/

Three terms:

  • Vertex Array Object (VAO)
  • Vertex Buffer Object (VBO)
  • Element Buffer Object (EBO), also called Index Buffer Object (IBO)

When referring to these three objects, either the full name or the abbreviation may be used, consistent with the original text. Whichever form appears, remember that the abbreviation and the full name refer to the same thing.

In OpenGL, everything lives in 3D space, while the screen or window is a 2D array of pixels, so a large part of OpenGL's work consists of transforming 3D coordinates into 2D pixels that fit your screen. This process is managed by OpenGL's graphics pipeline (Graphics Pipeline, often translated as "pipeline": a sequence of stages through which raw graphics data passes and is transformed until it finally appears on the screen). The graphics pipeline can be divided into two main parts: the first transforms your 3D coordinates into 2D coordinates, and the second turns those 2D coordinates into actual colored pixels. In this tutorial we will briefly discuss the graphics pipeline and how we can use it to create some pretty pixels.

2D coordinates and pixels are not the same: a 2D coordinate is a precise representation of a point in 2D space, whereas a 2D pixel is an approximation of that point, limited by the resolution of your screen/window.

The graphics pipeline takes a set of 3D coordinates as input and transforms them into colored 2D pixels on your screen. It can be divided into several stages, where each stage takes the output of the previous stage as its input. All of these stages are highly specialized (each has one specific function) and can easily be executed in parallel. Because of this parallel nature, most graphics cards today have thousands of small processing cores; each of them runs its own small program on the GPU for one stage of the pipeline, which is why the pipeline can process your data so quickly. These small programs are called shaders (Shader).

Some of these shaders can be configured by the developer, which allows us to write our own shaders to replace the defaults. This gives us much finer control over specific parts of the pipeline, and because shaders run on the GPU, they also save us valuable CPU time. Shaders are written in the OpenGL Shading Language (GLSL).

First, we pass in an array of three 3D coordinates as input to the graphics pipeline to represent a triangle; this array is called the vertex data (Vertex Data), and it is a collection of vertices. A vertex (Vertex) is a collection of data for a 3D coordinate. A vertex's data is represented using vertex attributes (Vertex Attribute), which can contain any data we would like, but for simplicity's sake let us assume that each vertex consists of just a 3D position and some color value.

When we talk about a "position", it represents the attribute of where this particular point lies in a "space". A "space" can be described by any set of coordinates: an x, y, z 3D coordinate system, an x, y 2D coordinate system, or a linear relation between x and y on a line. A 2D coordinate system is a flat space, and a line is a very thin, long space.

In order for OpenGL to know what to make of our collection of coordinates and color values, it requires you to hint what kind of render type you want to form with the data. Do we want the data rendered as a collection of points, a collection of triangles, or perhaps just one long line? Those hints are called primitives (Primitive) and are given to OpenGL while calling any of the drawing commands. Some of these hints are GL_POINTS, GL_TRIANGLES and GL_LINE_STRIP.

The first part of the graphics pipeline is the vertex shader (Vertex Shader), which takes a single vertex as input. The main purpose of the vertex shader is to transform 3D coordinates into different 3D coordinates (more on that later), and it also allows us to do some basic processing on the vertex attributes.

The primitive assembly (Primitive Assembly) stage takes as input all the vertices output by the vertex shader (or a single vertex, in the case of GL_POINTS) and assembles all the points into the primitive shape specified; in this case, a triangle.

The output of the primitive assembly stage is passed to the geometry shader (Geometry Shader). The geometry shader takes as input a collection of vertices that form a primitive, and it has the ability to generate other shapes by emitting new vertices to form new (or other) primitives. In this example, it generates a second triangle.

The output of the geometry shader is then passed on to the rasterization stage (Rasterization Stage), where the primitives are mapped to the corresponding pixels on the final screen, producing fragments (Fragment) for the fragment shader (Fragment Shader) to use. Before the fragment shader runs, clipping (Clipping) is performed: clipping discards all fragments that fall outside your view, increasing performance.

A fragment in OpenGL is all the data required for OpenGL to render a single pixel.

The main purpose of the fragment shader is to calculate the final color of a pixel, and this is usually the stage where all the advanced OpenGL effects occur. Usually the fragment shader contains data about the 3D scene (such as lighting, shadows, the color of the light, and so on) that it can use to calculate the final pixel color.

After all the corresponding color values have been determined, the final object passes through one more stage, which we call the alpha test and blending (Blending) stage. This stage checks the corresponding depth (and stencil (Stencil)) value of the fragment (discussed later) and uses it to determine whether the fragment is in front of or behind other objects, deciding whether it should be discarded accordingly. The stage also checks alpha values (an alpha value defines the transparency of an object) and blends (Blend) the objects accordingly. So even if a pixel's output color is calculated in the fragment shader, the final pixel color could still be entirely different when rendering multiple triangles.

As you can see, the graphics pipeline is quite a complex whole and contains many configurable parts. However, for almost all cases we only have to work with the vertex and fragment shaders. The geometry shader is optional and is usually left to its default shader.

In modern OpenGL we are required to define at least a vertex shader and a fragment shader of our own (there are no default vertex/fragment shaders on the GPU). For this reason it can be quite difficult to start learning modern OpenGL, since a great deal of knowledge is required before you can render your first triangle. Once you do get your triangle rendered at the end of this section, you will have learned a lot about graphics programming.

Vertex input

Before we can start drawing anything, we first have to give OpenGL some input vertex data. OpenGL is a 3D graphics library, so all coordinates that we specify in OpenGL are in 3D (x, y and z coordinates). OpenGL does not simply transform all your 3D coordinates into 2D pixels on your screen; it only processes 3D coordinates when they are in the range of -1.0 to 1.0 on all three axes (x, y and z). All coordinates within this so-called normalized device coordinates (Normalized Device Coordinates) range will end up visible on your screen (coordinates outside this range will not be displayed).

Since we want to render a single triangle, we want to specify a total of three vertices, each with a 3D position. We define them in normalized device coordinates (the visible region of OpenGL) in a float array:

float vertices[] = {
    -0.5f, -0.5f, 0.0f,
     0.5f, -0.5f, 0.0f,
     0.0f,  0.5f, 0.0f
};

Since OpenGL works in 3D space while we are rendering a 2D triangle, we set the z coordinate of each of its vertices to 0.0. This way the depth (Depth) of each point of the triangle is the same, making it look like a 2D triangle.

Depth can generally be understood as the z coordinate; it represents the distance of a pixel from you in space. If a pixel is far away and occluded by other pixels, you will not see it, and it will be discarded to save resources.

Normalized device coordinates (Normalized Device Coordinates, NDC)

Once your vertex coordinates have been processed in the vertex shader, they should be in normalized device coordinates: a small space where the x, y and z values range from -1.0 to 1.0. Any coordinates that fall outside this range will be discarded/clipped and will not be visible on your screen. Below you can see the triangle we defined within normalized device coordinates (ignoring the z axis):

 

[Figure: the triangle in normalized device coordinates (NDC)]

Unlike usual screen coordinates, the positive y-axis points upward, and the (0, 0) coordinate is at the center of the graph rather than the top-left corner. Eventually you want all the (transformed) coordinates to end up in this coordinate space, otherwise they will not be visible.

Your normalized device coordinates will then be transformed into screen-space coordinates (Screen-space Coordinates) via the viewport transform (Viewport Transform), using the data you provided with the glViewport function. The resulting screen-space coordinates are then transformed into fragments as inputs to your fragment shader.

With the vertex data defined, we want to send it as input to the first stage of the graphics pipeline: the vertex shader. This is done by creating memory on the GPU where we store the vertex data, configuring how OpenGL should interpret that memory, and specifying how to send the data to the graphics card. The vertex shader then processes as many vertices as we tell it to from that memory.

We manage this memory via so-called vertex buffer objects (Vertex Buffer Objects, VBO), which can store a large number of vertices in the GPU's memory (usually referred to as video memory). The advantage of using these buffer objects is that we can send large batches of data to the graphics card all at once, rather than sending each vertex separately. Sending data from the CPU to the graphics card is relatively slow, so wherever we can, we try to send as much data as possible in one go. Once the data is in the graphics card's memory, the vertex shader has almost instant access to the vertices, making this an extremely fast process.

A vertex buffer object is the first OpenGL object to appear in these tutorials. Just like any object in OpenGL, this buffer has a unique ID, so we can generate one with a buffer ID using the glGenBuffers function:

unsigned int VBO;
glGenBuffers(1, &VBO);

OpenGL has many types of buffer objects, and the buffer type of a vertex buffer object is GL_ARRAY_BUFFER. OpenGL allows us to bind several buffers at once, as long as they have different buffer types. We can bind the newly created buffer to the GL_ARRAY_BUFFER target with the glBindBuffer function:

glBindBuffer(GL_ARRAY_BUFFER, VBO);  

From this point on, any buffer calls we make (on the GL_ARRAY_BUFFER target) will be used to configure the currently bound buffer, which is VBO. Then we can make a call to the glBufferData function, which copies the previously defined vertex data into the buffer's memory:

glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);

glBufferData is a function specifically targeted at copying user-defined data into the currently bound buffer. Its first argument is the type of the buffer we want to copy data into: the vertex buffer object currently bound to the GL_ARRAY_BUFFER target. The second argument specifies the size of the data (in bytes) we want to pass to the buffer; a simple sizeof of the vertex data suffices. The third argument is the actual data we want to send.

The fourth argument specifies how we want the graphics card to manage the given data. This can take three forms:

  • GL_STATIC_DRAW: the data will rarely, if ever, change.

  • GL_DYNAMIC_DRAW: the data is likely to change a lot.

  • GL_STREAM_DRAW: the data changes every time it is drawn.

The position data of the triangle does not change and stays the same for every render call, so its usage type is best set to GL_STATIC_DRAW. If, for instance, a buffer's data were to change frequently, a usage type of GL_DYNAMIC_DRAW or GL_STREAM_DRAW would ensure the graphics card places the data in memory that allows for faster writes.

As of now we have stored the vertex data in graphics memory, managed by a vertex buffer object named VBO. Next we want to create a vertex shader and a fragment shader that actually process this data, so let's start building those.

Vertex shader

The vertex shader (Vertex Shader) is one of the few shaders that we can program ourselves. Modern OpenGL requires that we set up at least a vertex shader and a fragment shader if we want to do any rendering. We will briefly introduce shaders here and configure two very simple ones for drawing our first triangle. Shaders will be discussed in more detail in the next section.

The first thing we need to do is write the vertex shader in the shader language GLSL (OpenGL Shading Language) and then compile this shader so we can use it in our application. Below you will find the source code of a very basic GLSL vertex shader:

#version 330 core
layout (location = 0) in vec3 aPos;

void main()
{
    gl_Position = vec4(aPos.x, aPos.y, aPos.z, 1.0);
}

As you can see, GLSL looks similar to C. Each shader begins with a declaration of its version. Since OpenGL 3.3 and higher, the version numbers of GLSL match the version of OpenGL (GLSL version 420 corresponds to OpenGL 4.2, for example). We also explicitly state that we are using core profile functionality.

Next, we declare all the input vertex attributes (Input Vertex Attribute) in the vertex shader with the in keyword. Right now we only care about position (Position) data, so we only need a single vertex attribute. GLSL has a vector datatype that contains 1 to 4 float components; the number of components can be read from its suffix digit. Since each vertex has a 3D coordinate, we create a vec3 input variable named aPos. We also specifically set the location of the input variable via layout (location = 0); you will see later why we need that location value.

Vector (Vector)

In graphics programming we use the mathematical concept of a vector quite often, since it neatly represents positions and directions in any space and has useful mathematical properties. A vector in GLSL has a maximum size of 4, and each of its values represents a coordinate in space, retrievable via vec.x, vec.y, vec.z and vec.w. Note that the vec.w component is not used as a position in space (we are dealing with 3D, not 4D), but is used for something called perspective division (Perspective Division). We will discuss vectors in much greater depth in a later tutorial.

To set the output of the vertex shader, we have to assign the position data to the predefined gl_Position variable, which is of type vec4 behind the scenes. At the end of the main function, whatever we set gl_Position to will be used as the output of the vertex shader. Since our input is a vector of 3 components, we have to convert it to one of 4 components. We can do this by passing the vec3 data to the constructor of vec4 and setting its w component to 1.0f (we will explain why later).

The current vertex shader is probably the simplest vertex shader imaginable, because we did no processing whatsoever on the input data and simply forwarded it to the shader's output. In real applications the input data is usually not already in normalized device coordinates, so we first have to transform it into OpenGL's visible region.
