3D Game Development with LWJGL 3 Chapter 4: Rendering

In this chapter, we will learn the process of rendering a scene with OpenGL. If you are used to older, fixed-function versions of OpenGL, you may finish this chapter wondering why it has to be so complicated, thinking that drawing a simple shape on the screen should not require so many concepts and so much code. In fact, this approach ends up being simpler and more flexible: modern OpenGL lets you tackle one problem at a time, and it organizes the code and the rendering process more logically.

The sequence of steps that takes a 3D scene and ends with its display on a 2D screen is called the graphics pipeline. The first versions of OpenGL used a model called the fixed-function pipeline. This model defined a fixed set of operations in the rendering process, and the programmer was constrained to the set of functions available at each step. Therefore, the effects and operations that could be applied were limited by the API itself (for example, "set fog" or "add light": the implementation of those functions was fixed and could not be changed).

The graphics pipeline consists of the following steps:
(figure: the fixed-function graphics pipeline)
OpenGL 2.0 introduced the concept of the programmable pipeline. In this model, the different stages that make up the graphics pipeline can be controlled or programmed using a set of specific programs called shaders. The following figure depicts a simplified version of the OpenGL programmable pipeline:

(figure: the programmable graphics pipeline)
Rendering starts with a list of vertices in the form of vertex buffers. A vertex is a data structure that describes a point in 2D or 3D space; a point in 3D space is described by its x, y and z coordinates. A vertex buffer is another data structure that packs all the vertices that need to be rendered into an array, so that the information can be used by the shaders in the graphics pipeline.

These vertices are processed by the vertex shader, whose main purpose is to calculate the projected position of each vertex in screen space. The shader can also generate other outputs related to colour or texture, but its main job is to project the vertices into screen space, that is, to generate points.

In the geometry processing stage, the vertices transformed by the vertex shader are connected into triangles, taking them in the order in which they are stored and grouping them according to different models. The triangle is used because it is the basic working unit of the graphics card: a simple geometric shape that can be combined and transformed to build complex 3D scenes. This stage can also transform groups of vertices with a specific shader.

The rasterization stage clips the triangles generated by the previous stage and converts them into pixel-sized fragments.

These fragments are used by the fragment shader during the fragment processing stage to generate pixels, assigning them the final color written to the framebuffer.

It is important to remember that 3D graphics cards are designed to parallelize all of the above operations: the input data can be processed in parallel to generate the final scene.

Now let's write our first shader program, using the GLSL language (OpenGL Shading Language), which is based on ANSI C. First, create a file named "vertex.vs" in the resources directory with the following code:

#version 330

layout (location=0) in vec3 position;

void main()
{
    gl_Position = vec4(position, 1.0);
}

The first line indicates the version of GLSL being used. The following table relates GLSL versions to OpenGL versions (from Wikipedia: https://en.wikipedia.org/wiki/OpenGL_Shading_Language#Versions ):

GLSL Version   OpenGL Version   Shader Preprocessor
1.10.59        2.0              #version 110
1.20.8         2.1              #version 120
1.30.10        3.0              #version 130
1.40.08        3.1              #version 140
1.50.11        3.2              #version 150
3.30.6         3.3              #version 330
4.00.9         4.0              #version 400
4.10.6         4.1              #version 410
4.20.11        4.2              #version 420
4.30.8         4.3              #version 430
4.40           4.4              #version 440
4.50           4.5              #version 450

The second line specifies the input format for this shader. The data in an OpenGL buffer can be whatever we want; the language does not force you to pass a specific data structure with predefined semantics. From the shader's point of view, it simply receives a buffer of data. That data could be a position, a position plus some additional information, or anything else we want. The vertex shader just receives an array of floats, and when we fill the buffer we define the chunks of it that the shader will process.

So, first we need to turn those chunks into something meaningful to us. In this case we say that, starting from position 0, we expect to receive a vector of 3 attributes (x, y, z): layout (location=0) in vec3 position.
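That chunking can be sketched in plain Java (an illustration only, not part of the book's code): with a size of 3, the flat float array is sliced into consecutive (x, y, z) chunks, one per vertex processed by the shader.

```java
public class VertexChunks {

    // Slice a flat float array into consecutive chunks of `size` floats,
    // mirroring what "layout (location=0) in vec3 position" receives per vertex.
    public static float[][] chunk(float[] data, int size) {
        float[][] out = new float[data.length / size][size];
        for (int i = 0; i < out.length; i++) {
            System.arraycopy(data, i * size, out[i], 0, size);
        }
        return out;
    }

    public static void main(String[] args) {
        float[] vertices = {
             0.0f,  0.5f, 0.0f,
            -0.5f, -0.5f, 0.0f,
             0.5f, -0.5f, 0.0f
        };
        float[][] chunks = chunk(vertices, 3);
        // 9 floats with size 3 give 3 vertices
        System.out.println(chunks.length + " vertices, first y = " + chunks[0][1]);
    }
}
```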

Shaders have a main code block like any other C program, and in this case it is very simple: it just returns the received position in the output variable gl_Position, without any transformation. Now the question arises: why is this 3-component vector converted into a 4-component vector (vec4)? Because gl_Position expects its result in vec4 format, that is, in (x, y, z, w) coordinates, where w represents an extra dimension. In later chapters you will see that most of the operations we perform are based on vectors and matrices, and without that extra dimension some of those operations cannot be combined with each other; for example, rotation and translation could not be combined into a single matrix. (If you want to learn more: this extra dimension allows us to combine affine and linear transformations. You can read more in the book "3D Math Primer for Graphics and Game Development" by Fletcher Dunn and Ian Parberry.)
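As a small illustration of why that extra dimension matters: with w = 1, a translation by (t_x, t_y, t_z) becomes a 4x4 matrix product, so it can be composed with rotations and other linear transformations in a single matrix:

```latex
\begin{pmatrix}
1 & 0 & 0 & t_x \\
0 & 1 & 0 & t_y \\
0 & 0 & 1 & t_z \\
0 & 0 & 0 & 1
\end{pmatrix}
\begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix}
=
\begin{pmatrix} x + t_x \\ y + t_y \\ z + t_z \\ 1 \end{pmatrix}
```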

Now create a file called "fragment.fs" (the extension stands for Fragment Shader) to write our first fragment shader:

#version 330

out vec4 fragColor;

void main()
{
    fragColor = vec4(0.0, 0.5, 0.5, 1.0);
}

The structure of this code is very similar to a vertex shader. In this case we set a fixed color for each fragment.

The second line defines the output variable, fragColor, of type vec4.

Now that the shader is created, follow the steps below to use the shader we created:

1. Create an OpenGL program
2. Load the code files for the vertex and fragment shaders
3. For each shader, create a new shader and specify its type (vertex or fragment)
4. Compile the shader
5. Attach the shader to the program
6. Link the program

Eventually the shader will be loaded into the graphics card, and we can use it by referencing an identifier: the program identifier.

package org.lwjglb.engine.graph;

import static org.lwjgl.opengl.GL20.*;

public class ShaderProgram {

    private final int programId;

    private int vertexShaderId;

    private int fragmentShaderId;

    public ShaderProgram() throws Exception {
        programId = glCreateProgram();
        if (programId == 0) {
            throw new Exception("Could not create Shader");
        }
    }

    // Vertex shader
    public void createVertexShader(String shaderCode) throws Exception {
        vertexShaderId = createShader(shaderCode, GL_VERTEX_SHADER);
    }

    // Fragment shader
    public void createFragmentShader(String shaderCode) throws Exception {
        fragmentShaderId = createShader(shaderCode, GL_FRAGMENT_SHADER);
    }

    // Create a shader of the given type
    protected int createShader(String shaderCode, int shaderType) throws Exception {
        int shaderId = glCreateShader(shaderType);
        if (shaderId == 0) {
            throw new Exception("Error creating shader. Type: " + shaderType);
        }
        // Upload the shader source and compile it
        glShaderSource(shaderId, shaderCode);
        glCompileShader(shaderId);

        if (glGetShaderi(shaderId, GL_COMPILE_STATUS) == 0) {
            throw new Exception("Error compiling Shader code: " + glGetShaderInfoLog(shaderId, 1024));
        }
        // Attach the shader to the OpenGL program
        glAttachShader(programId, shaderId);

        return shaderId;
    }

    // Link the program
    public void link() throws Exception {
        glLinkProgram(programId);
        if (glGetProgrami(programId, GL_LINK_STATUS) == 0) {
            throw new Exception("Error linking Shader code: " + glGetProgramInfoLog(programId, 1024));
        }
        // Detach the shaders once the program has been linked
        if (vertexShaderId != 0) {
            glDetachShader(programId, vertexShaderId);
        }
        if (fragmentShaderId != 0) {
            glDetachShader(programId, fragmentShaderId);
        }
        // Validate the program (for debugging purposes)
        glValidateProgram(programId);
        if (glGetProgrami(programId, GL_VALIDATE_STATUS) == 0) {
            System.err.println("Warning validating Shader code: " + glGetProgramInfoLog(programId, 1024));
        }
    }

    public void bind() {
        glUseProgram(programId);
    }

    public void unbind() {
        glUseProgram(0);
    }

    public void cleanup() {
        unbind();
        if (programId != 0) {
            glDeleteProgram(programId);
        }
    }
}

The constructor of the ShaderProgram class creates a new OpenGL program and provides methods to add vertex and fragment shaders. Those shaders are compiled and attached to the OpenGL program. Once all the shaders have been attached, the link method is called, which links all the code and verifies that everything has been done correctly.

When the shader program linking is complete, the compiled vertex and fragment shaders can be released (call the glDetachShader method).

Regarding validation, it is done by calling the glValidateProgram method. This method is mainly intended for debugging purposes and should be removed in a production environment. It tries to validate the shader against the current OpenGL state, checking whether the shader executable can run given that state. This means that, in some cases, validation may fail even if the shader is correct, simply because the current state is not complete enough to run it (some data may not have been loaded yet). So we only print an error message instead of throwing an exception and failing the program.

The ShaderProgram class also provides a method for activating the program for rendering (bind), and a method for stopping rendering (unbind). Finally, a cleanup method is provided to release all resources when they are no longer needed.

This method also needs to be added to the IGameLogic interface:

void cleanup();

This method is called at the end of the game loop, so modify the run method of the GameEngine class:

@Override
public void run() {
    try {
        init();
        gameLoop();
    } catch (Exception excp) {
        excp.printStackTrace();
    } finally {
        cleanup();
    }
}

protected void cleanup() {
    gameLogic.cleanup();
}

Now, you can use the shader to display a triangle in the init method of the Renderer class. First, create the shader program:

private ShaderProgram shaderProgram;

public void init() throws Exception {
    shaderProgram = new ShaderProgram();
    shaderProgram.createVertexShader(Utils.loadResource("/vertex.vs"));
    shaderProgram.createFragmentShader(Utils.loadResource("/fragment.fs"));
    shaderProgram.link();
}

Before doing this, we create a utility class that provides a method for retrieving the contents of a file on the classpath. This method is used to retrieve the source code of the shaders.
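As a sketch of that utility (the actual Utils class in the book's repository may differ slightly), loadResource can be implemented by reading a classpath resource stream into a single String with a Scanner:

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.util.Scanner;

public class Utils {

    // Read a classpath resource (e.g. "/vertex.vs") into a String.
    public static String loadResource(String fileName) throws Exception {
        try (InputStream in = Utils.class.getResourceAsStream(fileName)) {
            return readAll(in);
        }
    }

    // The "\\A" delimiter makes the Scanner return the whole stream in one token.
    public static String readAll(InputStream in) {
        try (Scanner scanner = new Scanner(in, "UTF-8")) {
            return scanner.useDelimiter("\\A").next();
        }
    }

    public static void main(String[] args) {
        // Demonstrate readAll on an in-memory stream standing in for a shader file
        String glsl = "#version 330\nvoid main() {}\n";
        InputStream in = new ByteArrayInputStream(glsl.getBytes(StandardCharsets.UTF_8));
        System.out.println(readAll(in).startsWith("#version 330"));
    }
}
```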

Now we can define our triangle as an array of floats. We create a single float array that defines the vertices of the triangle. As you can see, the array has no explicit structure; as far as OpenGL is concerned, it is just a sequence of floating point numbers:

float[] vertices = new float[]{
     0.0f,  0.5f, 0.0f,
    -0.5f, -0.5f, 0.0f,
     0.5f, -0.5f, 0.0f
};

The following figure depicts what the triangle looks like in the coordinate system:
(figure: the triangle in the coordinate system)
Now that we have the coordinates, we need to store them in the graphics card and tell OpenGL about the structure of the data. This is where two important concepts come in: Vertex Array Objects (VAOs) and Vertex Buffer Objects (VBOs). If you get lost in the following code, remember that, in the end, what we are doing is sending the data that models the objects we want to draw to the graphics card's memory. When the data has been stored, we get an identifier that we use to refer to it later when drawing.

Let's start with the Vertex Buffer Object (VBO). A VBO is simply a memory buffer stored in the graphics card's memory that holds vertices; this is where we will transfer the array of floats that models the triangle. As mentioned before, OpenGL knows nothing about our data structure; in fact, a VBO can hold not only coordinates but also other information such as textures, colours, and so on.

A Vertex Array Object (VAO) is an object that contains one or more VBOs, which are usually called attribute lists. Each attribute list can hold one type of data: position, colour, texture, etc. You are free to store whatever you want in each list.

A VAO is like a wrapper that groups the definitions of the data to be stored in the graphics card. When we create a VAO we get an identifier; we use that identifier to render it and the elements it contains, using the definitions specified at creation time.

Now back to the code. The first thing we must do is store our array of floats in a FloatBuffer. This is mainly because we must interface with the OpenGL library, which is C-based, so we must transform our array of floats into something that can be managed by the library.

FloatBuffer verticesBuffer = MemoryUtil.memAllocFloat(vertices.length);
verticesBuffer.put(vertices).flip();

We use the MemoryUtil class to create the buffer in off-heap memory so that it is accessible by the OpenGL library. After storing the data (with the put method), we need to reset the position of the buffer to 0 with the flip method (that is, we are done writing to it). Remember that Java objects are allocated in a space called the heap, a large chunk of memory reserved in the JVM's process memory. Memory stored in the heap cannot be accessed by native code (JNI allows Java to call native code, but native code cannot directly access heap memory). The only way of sharing memory data between Java code and native code is by directly allocating memory in Java.
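The put/flip behaviour can be illustrated with a plain heap FloatBuffer from java.nio (the real code uses MemoryUtil.memAllocFloat to get off-heap memory, but the position/limit semantics are the same):

```java
import java.nio.FloatBuffer;

public class FlipDemo {
    public static void main(String[] args) {
        // A heap FloatBuffer is used here purely to illustrate put/flip;
        // the actual rendering code allocates off-heap memory instead.
        float[] vertices = {0.0f, 0.5f, 0.0f};
        FloatBuffer buf = FloatBuffer.allocate(vertices.length);

        buf.put(vertices);  // writing advances the position to 3
        System.out.println("after put:  position=" + buf.position() + " limit=" + buf.limit());

        buf.flip();         // limit = old position, position = 0: ready to be read
        System.out.println("after flip: position=" + buf.position() + " limit=" + buf.limit());
    }
}
```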

If you come from older versions of LWJGL, a few points need to be stressed. You may have noticed that we do not use the BufferUtils utility class to create the buffers, but the MemoryUtil class instead. This is because BufferUtils is not very efficient and is maintained only for backwards compatibility. Instead, LWJGL 3 proposes two methods of buffer management:

  • Auto-managed buffers: buffers that are automatically reclaimed by the Garbage Collector (GC). These buffers are mainly used for short-lived operations, or for data that is transferred to the GPU and does not need to remain in process memory. They are provided by the org.lwjgl.system.MemoryStack class.
  • Manually managed buffers: in this case, the buffers need to be carefully freed once we are finished with them. These buffers are intended for long-running operations or large amounts of data. They are provided by the MemoryUtil class.

Information can be found here: https://blog.lwjgl.org/memory-management-in-lwjgl-3/ .

In this case, our data is just sent to the GPU, so we could use auto-managed buffers. But since later on we may hold large amounts of data, manual buffer management will be required. This is why we use the MemoryUtil class, and why we free the memory in a finally block. In the next chapter you will learn how to use auto-managed buffers.

Create the VAO and bind it:

vaoId = glGenVertexArrays();
glBindVertexArray(vaoId);

Then create the VBO, bind it and put the data:

vboId = glGenBuffers();
glBindBuffer(GL_ARRAY_BUFFER, vboId);
glBufferData(GL_ARRAY_BUFFER, verticesBuffer, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);

Here's the most important part: we need to define the structure of the data and store it in the VAO's attribute list:

glVertexAttribPointer(0, 3, GL_FLOAT, false, 0, 0);

Parameter meaning:

  • index: specifies the location where the shader expects this data.
  • size: specifies the number of components per vertex attribute (from 1 to 4). In this case we are passing 3D coordinates, so it is 3.
  • type: specifies the type of each component in the array, in this case a float.
  • normalized: specifies whether the values should be normalized.
  • stride: specifies the byte offset between consecutive generic vertex attributes (we will explain it later).
  • offset: specifies the offset of the first component in the buffer.
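For our tightly packed, position-only VBO, both stride and offset are 0. To make the byte arithmetic concrete, here is a hypothetical interleaved layout (position plus colour, not actually used in this chapter) and the values it would require:

```java
public class StrideDemo {
    public static void main(String[] args) {
        // Hypothetical interleaved vertex record: position (3 floats) followed
        // by colour (3 floats), packed together in a single VBO.
        int floatBytes = Float.BYTES;       // 4 bytes per float
        int stride = (3 + 3) * floatBytes;  // bytes from one vertex record to the next
        int positionOffset = 0;             // position starts the record
        int colorOffset = 3 * floatBytes;   // colour follows the 3 position floats
        System.out.println("stride=" + stride
                + " positionOffset=" + positionOffset
                + " colorOffset=" + colorOffset);
    }
}
```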

After finishing with the VBO, we need to unbind it and the VAO (by binding them to 0):

// Unbind the VBO
glBindBuffer(GL_ARRAY_BUFFER, 0);

// Unbind the VAO
glBindVertexArray(0);

After completing this operation, the off-heap memory allocated by the FloatBuffer must be freed, which is done by manually calling the memFree method, because Java garbage collection does not clean up the off-heap allocation.

if (verticesBuffer != null) {
    MemoryUtil.memFree(verticesBuffer);
}

This is all the code contained in the init method. The data is already on the graphics card and ready to be used. Just modify the render method and use it for every render step in the game loop.

public void render(Window window) {
    clear();

    if (window.isResized()) {
        glViewport(0, 0, window.getWidth(), window.getHeight());
        window.setResized(false);
    }

    shaderProgram.bind();

    // Bind to the VAO
    glBindVertexArray(vaoId);

    // Draw the vertices
    glDrawArrays(GL_TRIANGLES, 0, 3);

    // Restore state
    glBindVertexArray(0);

    shaderProgram.unbind();
}

As you can see, you just need to clear the window, bind the shader program, bind the VAO, draw the vertices stored in the VBO associated with the VAO, and finally restore the original state.

You also need to add a cleanup method to the Renderer class to release the acquired resources.

public void cleanup() {
    if (shaderProgram != null) {
        shaderProgram.cleanup();
    }

    glDisableVertexAttribArray(0);

    // Delete the VBO
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glDeleteBuffers(vboId);

    // Delete the VAO
    glBindVertexArray(0);
    glDeleteVertexArrays(vaoId);
}

Follow all the above steps to write a program and you will see the following result:
(figure: the rendered triangle)
You might think that this is not going to make the top ten games list, and you would be right. You may also feel that all the above steps were a lot of work just to draw a boring triangle. But remember that we are introducing key concepts and building the basic infrastructure needed to do more complex things. Please be patient and read the following chapters.

Code for this chapter

The official GitHub repository has been mirrored on Gitee:
https://gitee.com/CrimsonHu/lwjglbook/tree/master/chapter04

Origin blog.csdn.net/m0_37942304/article/details/108057585