OpenGL Introductory Tutorial - PT1

Since I don't come from a computer graphics background, I have pulled together answers from textbooks, blogs, and GPT, and done my best to turn them into a blog post that explains graphics and the basics of the OpenGL function library. Bear with me!

I will work up to OpenGL gradually, starting from the most basic knowledge of how displays work.

1. How does a screen work? Why can an electronic screen display content?

Generally speaking, phone screens and computer screens work on the same principle; only some of the internal hardware and parameters differ. Their structure can be summarized as follows:

An electronic screen consists of several components:

        Liquid crystal layer: sandwiched between two transparent electrodes, held between two parallel sheets of glass or another substrate. The key physical property of the liquid crystal is that its molecular alignment changes when a voltage is applied, which controls whether light can pass through.

        Transparent electrodes: placed on both sides of the liquid crystal layer; they carry the current that creates the electric field used to drive the liquid crystal.

        Color filter: an optical layer that selectively transmits or blocks light of different wavelengths as it passes through, thereby controlling the color of the light.

        Light source: a backlight behind the panel, usually cold cathode fluorescent lamps (CCFL) or LEDs.

So from the above we can see how the display works: an electric field changes the alignment of the liquid crystal molecules to form points, lines, and surfaces, modulating how much of the backlight passes through each point; this completes the electro-optical conversion. The R, G, and B primary-color signals then drive different levels of excitation through the red, green, and blue color-filter films, reproducing color in both the time and space domains, and together with the backlight this forms the picture.

2. How is the information we input displayed on the screen?

From section 1 we know that, at the hardware level, the display uses current to change the arrangement of the liquid crystal molecules. So how do we go from that to producing vivid pictures? Take watching a video as an example:

        1: Signal input. When we watch movies on a computer, we are actually receiving various video signals (broadcast TV signals, satellite signals, video files stored on discs, or streams loaded online), which reach the monitor through an input interface (HDMI, USB, DP).

        2: Video decoding. Decoding restores compressed, encoded video data to the original image or sound signal. Depending on the signal source, the display may use different decoders (involving steps such as sampling and quantization). When video is recorded or produced, compression algorithms are applied to the sequence of image frames to improve transmission efficiency, and the codecs are typically implemented in highly optimized low-level code, so the data arrives with little redundancy and must be decoded before it can be shown.

        3: Image processing. The decoded video frames are sent to an image processing unit, which processes and optimizes them to improve image quality and display effect.

        4: Frame buffer. The processed image is stored in the frame buffer (also called video memory), a region of memory that holds the pixel data for the image currently to be shown on screen. Each image frame is made up of pixels, and each pixel carries its color and brightness information. By storing this pixel data, the frame buffer lets the image be read, written, and updated (see the small sketch after this list).

        5: Display control. Based on the image data in the frame buffer, the display controller sets the brightness and color of each pixel on the screen.

        6: Screen refresh. The display control unit refreshes the pixels on the screen cyclically at the refresh rate, so the image is displayed continuously.
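
To make step 4 concrete, here is a minimal sketch of the idea that a frame buffer is simply a block of memory holding one color value per pixel. The struct, names, and sizes are my own illustration, not any real driver API:

#include <cstdint>
#include <vector>

struct Pixel { std::uint8_t r, g, b; };   // one pixel's color

int main()
{
    const int width = 640, height = 480;
    // The "frame buffer": one Pixel per screen position, initially all black.
    std::vector<Pixel> framebuffer(width * height, Pixel{0, 0, 0});

    // "Display control" scans this memory out to the panel every refresh.
    // Updating the picture just means writing new values into it:
    int x = 100, y = 50;
    framebuffer[y * width + x] = Pixel{255, 0, 0};   // turn one pixel red
    return 0;
}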

3. Why can keyboard input be displayed on the computer screen?

        1: Keyboard input. When a key is pressed, the corresponding circuit in the keyboard is triggered; each key corresponds to a character or key code.

        2: Electrical signal transmission. The keyboard's internal circuitry converts the key information into electrical signals, which travel over the cable to the computer's input interface.

        3: Operating system reception. The operating system receives the incoming electrical signal and is responsible for managing and processing this input.

        4: Character encoding. After receiving the signal, the operating system converts it into the corresponding characters. A character encoding is a way of mapping characters to numeric codes, such as ASCII or Unicode (see the small sketch after this list).

        5: Application processing. The operating system delivers the characters or key events to the application, such as a text editor, which performs the appropriate text handling.

        6: Graphics processing. When the application updates its text, it notifies the operating system and the graphics system to render it: the text is converted into graphics data, and positions, fonts, styles, and so on are computed and laid out.

        7: Frame buffer update. As introduced above, the pixel data stored in the frame buffer is updated.

        8: Graphics card processing. The graphics card reads the image data from the frame buffer, converts it into a signal, and sends it to the display.

        9: Display output. When the display receives the signal, it updates the pixels on the screen accordingly, and the corresponding text finally appears on the screen.
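
Here is a toy sketch of steps 2-4: the "keyboard" delivers a code, the "OS" maps it to an ASCII character, and that character is then handed on for rendering. The scancode values and the tiny keymap are made up for illustration; a real keymap is defined by the keyboard protocol and the operating system:

#include <cstdio>
#include <map>

int main()
{
    // Hypothetical scancode -> ASCII table (a tiny slice of a real keymap).
    std::map<int, char> keymap = { {0x04, 'a'}, {0x05, 'b'}, {0x06, 'c'} };

    int scancode = 0x04;            // pretend this just arrived over the keyboard cable
    char c = keymap[scancode];      // step 4: character encoding
    std::printf("scancode 0x%02X -> '%c' (ASCII %d)\n", scancode, c, c);
    return 0;
}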

4. Why do we need a graphics card?

From the above, we know that if all we do is watch videos, run some logic, or display simple content, the frame buffer is enough and a large stick of RAM will do. So why do we need to introduce a graphics card?

Let's start from the simplest requirement and introduce a bit of graphics knowledge. Suppose we need to draw a rotating cube on the screen:

        1: First we need to define the cube's model data, including vertex coordinates, normal vectors, texture coordinates, and so on.

        2: Then we need to define shader programs to compute the attributes of vertices and fragments, such as position, color, and lighting. Drawing a cube requires at least a vertex shader and a fragment shader: the vertex shader is responsible for computing vertex positions and transforms, and the fragment shader is responsible for computing each pixel's color.

        3: Create a frame buffer, the memory area used to store image frame data, as described above.

        4: Set the viewport and projection matrix. The viewport defines the size and position of the drawing area; the projection matrix defines the perspective or orthographic projection of the scene. These parameters determine the position and size of the cube on the screen.

        5: Update the cube's rotation matrix every frame; different rotation speeds and directions can be achieved by changing the matrix parameters.

Now consider handing all of the steps above to the CPU. It can be done, but what happens when there are thousands of rotating cubes? That means tens of thousands of repeated vertex coordinate transformations, pixel shading, and texture sampling operations. The CPU has relatively few arithmetic units (ALUs), a limitation inherited from its original design goals, and its instructions are relatively complex, implementing many general-purpose core functions. The CPU is like an Einstein, while the GPU is like a crowd of ordinary people doing lots of simple, repetitive work: it contains a huge number of compute units, so small jobs like matrix transforms are best handed over to the GPU!
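
To make those "repeated vertex coordinate transformations" concrete, here is a minimal CPU-side sketch of step 5: rotating a cube's eight vertices by a small angle every frame. This is my own illustration, not actual pipeline code; multiply it by millions of vertices and you can see why a processor full of simple parallel ALUs wins:

#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

// Rotate a point around the Z axis by 'angle' radians
// (a 3x3 rotation matrix applied by hand).
Vec3 rotateZ(Vec3 v, float angle)
{
    float c = std::cos(angle), s = std::sin(angle);
    return { c * v.x - s * v.y, s * v.x + c * v.y, v.z };
}

int main()
{
    Vec3 cube[8] = {
        {-1,-1,-1}, {1,-1,-1}, {1,1,-1}, {-1,1,-1},
        {-1,-1, 1}, {1,-1, 1}, {1,1, 1}, {-1,1, 1}
    };
    for (int frame = 0; frame < 3; ++frame) {    // pretend we render 3 frames
        for (Vec3& v : cube)
            v = rotateZ(v, 0.01f);               // step 5: small per-frame rotation
        std::printf("frame %d: first vertex = (%.3f, %.3f, %.3f)\n",
                    frame, cube[0].x, cube[0].y, cube[0].z);
    }
    return 0;
}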

5. Okay, we have a graphics card, but what is rendering?

Once we have our first graphics card, we can control the screen's pixels to play, say, a little plane-shooting game. To do that, we need to know something about the rendering pipeline. So what is rendering? And what is a rendering pipeline?

        1: Rendering, sometimes also translated as "drawing". Rendering essentially means displaying: we take geometric data and operations on it, such as vertex coordinate calculations, lighting calculations, and texture mapping, and turn them into graphics we can see with the naked eye. In other words, a pile of data is processed into a vivid picture.

        2: Rendering pipeline. Once we talk about rendering, we naturally need a rendering pipeline, which is the collective name for the entire rendering process. A conventional rendering pipeline includes:

        ① Vertex Processing Stage: processes the input vertex coordinates, performing geometric transformations and computing vertex attributes. This includes vertex coordinate transformations, normal calculation (processing the model's vertex data to obtain a normal vector for each vertex; these normals are used for lighting, shadow, and other visual-effect calculations), and interpolation of colors and texture coordinates.

        ② Geometry Processing Stage: assembles the vertices into geometric primitives and processes them. This includes triangle clipping (discarding or clipping triangles that fall outside the view volume, to improve rendering efficiency and avoid unnecessary computation and drawing), projection transformation (projecting the three-dimensional scene onto the two-dimensional screen; perspective projection mimics the human eye, so near objects look large and far ones look small, while orthographic projection keeps sizes constant regardless of distance from the camera; a small numeric sketch follows this list), and rasterization (converting continuous geometric shapes into discrete pixels and determining each pixel's final color through per-pixel processing and interpolation, so that a two-dimensional image can be generated on the screen and displayed by the later stages of the pipeline).

        ③ Pixel Processing Stage: processes the rasterized pixels, performing fragment shading, texture sampling, and other operations on each pixel.

        ④ Output Assembly Stage: finally combines the processed pixels into the rendered result, including writing pixels into the frame buffer and applying operations such as blending, depth testing, and stencil testing.
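
As a rough illustration of the vertex and geometry stages, here is how one point in camera space ends up at a pixel: perspective projection makes near things larger, and the viewport transform maps the result onto a 400x400 window. This is a hand-rolled sketch assuming a focal length of 1, not the real 4x4 matrix math a GPU uses:

#include <cstdio>

int main()
{
    // A point in camera space (x right, y up, z = distance into the screen).
    float x = 1.0f, y = 0.5f, z = 5.0f;

    // Perspective projection: divide by depth, so the same x,y at z = 10
    // would land closer to the center of the screen (far = small).
    float ndc_x = x / z;     // roughly "normalized device coordinates"
    float ndc_y = y / z;

    // Viewport transform: map the [-1, 1] range onto a 400x400 pixel window.
    const int width = 400, height = 400;
    float px = (ndc_x * 0.5f + 0.5f) * width;
    float py = (ndc_y * 0.5f + 0.5f) * height;

    std::printf("camera (%.1f, %.1f, %.1f) -> pixel (%.1f, %.1f)\n",
                x, y, z, px, py);
    return 0;
}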

        

6. Introducing OpenGL! Let's create simple graphics together!

To control the rendering pipeline described above, we of course need tools. Common graphics APIs include:

OpenGL (Open Graphics Library), DirectX, Vulkan (lower level), and Metal (for iOS and macOS).

I ultimately chose OpenGL for learning. If you are building graphics rendering pipelines for Macs or iPhones, you will want to learn Metal instead.

Well, since we chose OpenGL, let's start learning to use OpenGL!

       1: How do we create OpenGL applications in Visual Studio? We usually choose GLUT or GLFW. I will write out the installation steps for both libraries; they are fairly tedious, so I will be detailed. Which one to pick depends on your needs. If you are just a beginner, installing GLUT is enough!

                GLUT(OpenGL Utility Toolkit):

                ①: GLUT simplifies developing OpenGL applications, providing window management, event handling, basic drawing helpers, timers, menus, and more, which makes it suitable for beginners. However, GLUT is no longer maintained and its functionality is limited. Official download address: http://www.opengl.org/resources/libraries/glut/glutdlls37beta.zip

                ②: After unzipping, go to the VS installation directory, find VC\Tools\MSVC\<version number>\include, create a folder named openglfiles, and put the extracted glut.h header file into it.

                ③: Open the 64-bit (x64) folder under VC\Tools\MSVC\<version number>\lib and put glut.lib and glut32.lib into it.

                ④: Open C:\Windows\SysWOW64 (on 64-bit systems) and put glut.dll and glut32.dll into it.

                ⑤: Create a new VS project, open Project in the top menu, choose Manage NuGet Packages, search for nupengl, and install it. This avoids tediously adding header and library paths by hand; you only need to install this package each time you create a new project.

                GLFW(Graphics Library Framework):

                ①: A more powerful and flexible library, providing more features and finer control. It supports modern OpenGL features, multi-window management, input handling, time management, window controls, and more. GLFW has better cross-platform support and is suited to more complex and advanced OpenGL applications. Official download page: Download | GLFW

                ②: Download the build matching your setup, open the VS installation path, find VC\Tools\MSVC\<version number>\include, and create a GLFW folder there. Then open glfw-3.3.6.bin.WIN64\include\GLFW in the downloaded archive and put the glfw3.h and glfw3native.h files into the newly created GLFW folder.

                ③: In the downloaded folder, open the lib-vc folder that matches your Visual Studio version (e.g. glfw-3.3.6.bin.WIN64\lib-vc2022), and put glfw3.lib, glfw3_mt.lib, and glfw3dll.lib into the 64-bit (x64) folder under VC\Tools\MSVC\<version number>\lib.

                ④: Put glfw3.dll in C:\Windows\SysWOW64, and the installation is complete.

       2: In addition, you need to download GLAD (an OpenGL loader generator), an independent tool that automatically generates code for loading and managing OpenGL function pointers. GLAD simplifies loading and managing those function pointers, making OpenGL more convenient and uniform to use (a minimal GLFW + GLAD example follows the steps below).

                ①: Official website download: http://glad.dav1d.de/

                ②: Just select gl in the API and Core in the Profile, and then GENERATE

                ③: Download the generated zip file, open glad\include, and copy the glad and KHR folders to VC\Tools\MSVC\14.31.31103\include in the VS installation directory (use your own version number).

                ④: glad.c does not need to go anywhere special; if a project uses GLAD, just copy glad.c into the project's source directory!
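
If you went the GLFW + GLAD route instead of GLUT, a minimal "open a window and clear it" program looks roughly like this. This is a sketch assuming the include/lib setup above (add glad.c to the project and link glfw3.lib and opengl32.lib); the tutorial's own test program below still uses GLUT:

#include <glad/glad.h>     // GLAD must be included before GLFW's header
#include <GLFW/glfw3.h>

int main()
{
    if (!glfwInit())
        return -1;

    GLFWwindow* window = glfwCreateWindow(400, 400, "GLFW Test", NULL, NULL);
    if (!window) {
        glfwTerminate();
        return -1;
    }
    glfwMakeContextCurrent(window);

    // Let GLAD load the OpenGL function pointers for this context.
    if (!gladLoadGLLoader((GLADloadproc)glfwGetProcAddress)) {
        glfwTerminate();
        return -1;
    }

    while (!glfwWindowShouldClose(window)) {
        glClearColor(0.2f, 0.3f, 0.3f, 1.0f);   // clear the window to a solid color
        glClear(GL_COLOR_BUFFER_BIT);
        glfwSwapBuffers(window);
        glfwPollEvents();
    }

    glfwTerminate();
    return 0;
}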

3: Create a new .cpp file and enter the following test code (this program uses GLUT):

#include <math.h>
#define GLUT_DISABLE_ATEXIT_HACK
#include <GL/glut.h>

void myDisplay()
{
    glClear(GL_COLOR_BUFFER_BIT);
    glColor3f(1.0, 0.0, 0.0);
    // Draw dividing lines to split the window into four viewports
    glViewport(0, 0, 400, 400);
    glBegin(GL_LINES);
    glVertex2f(-1.0, 0);
    glVertex2f(1.0, 0);
    glVertex2f(0.0, -1.0);
    glVertex2f(0.0, 1.0);
    glEnd();

    // Bottom-left viewport
    glColor3f(0.0, 1.0, 0.0);
    glViewport(0, 0, 200, 200);
    glBegin(GL_POLYGON);
    glVertex2f(-0.5, -0.5);
    glVertex2f(-0.5, 0.5);
    glVertex2f(0.5, 0.5);
    glVertex2f(0.5, -0.5);
    glEnd();

    // Top-right viewport (glViewport takes x, y, width, height)
    glColor3f(0.0, 0.0, 1.0);
    glViewport(200, 200, 200, 200);
    glBegin(GL_POLYGON);
    glVertex2f(-0.5, -0.5);
    glVertex2f(-0.5, 0.5);
    glVertex2f(0.5, 0.5);
    glVertex2f(0.5, -0.5);
    glEnd();

    // Top-left viewport
    glColor3f(1.0, 0.0, 0.0);
    glViewport(0, 200, 200, 200);
    glBegin(GL_POLYGON);
    glVertex2f(-0.5, -0.5);
    glVertex2f(-0.5, 0.5);
    glVertex2f(0.5, 0.5);
    glVertex2f(0.5, -0.5);
    glEnd();

    // Bottom-right viewport
    glColor3f(1.0, 1.0, 1.0);
    glViewport(200, 0, 200, 200);
    glBegin(GL_POLYGON);
    glVertex2f(-0.5, -0.5);
    glVertex2f(-0.5, 0.5);
    glVertex2f(0.5, 0.5);
    glVertex2f(0.5, -0.5);
    glEnd();
    glFlush();
}

int main(int argc, char* argv[])
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_RGB | GLUT_SINGLE);
    glutInitWindowPosition(100, 100);
    glutInitWindowSize(400, 400);
    glutCreateWindow("OpenGL Program");
    glutDisplayFunc(&myDisplay);
    glutMainLoop();
    return 0;
}

Run it, and you should get a window divided into four viewports by the dividing lines, with a colored square drawn in each quadrant.

 At this point, our OpenGL PT1 ends here!

PT2 will come later and will introduce the function library in more detail.

Origin blog.csdn.net/leikang111/article/details/131595694