OpenGL Learning Series: Basic Drawing Process

OpenGL is the way to start exploring the wonderful world of 3D.

Introduction to OpenGL

OpenGL is an application programming interface, a software library that provides access to features of graphics hardware devices.

Key point: OpenGL is an interface. Since it is an interface, there must be an implementation.

In fact, its implementation is provided by the display device manufacturer and depends on the hardware that manufacturer provides.

OpenGL is commonly used in CAD, virtual reality, scientific visualization programs, and video game development.

OpenGL ES, used on Android, is a subset of OpenGL that trims away the parts unnecessary for embedded devices. It is designed mainly for devices such as mobile phones, tablets, and game consoles.

Developing OpenGL on Android can be done in either Java or C. Without further ado, let's roll up our sleeves and get started!

OpenGL drawing process

To learn OpenGL drawing, it is best to start with 2D drawing and gradually transition to 3D drawing.

Android provides a view dedicated to OpenGL drawing: GLSurfaceView. Like SurfaceView, it renders on a separate thread rather than the main thread; after all, GLSurfaceView inherits from SurfaceView.

When using GLSurfaceView, you call its setRenderer method to attach a renderer; the main rendering work is done by that Renderer.

We implement our own renderer by implementing the GLSurfaceView.Renderer interface, which has three main methods (a minimal setup sketch follows the list):

  • onSurfaceCreated
    • Called when the GLSurfaceView is created; mainly used for preparatory work.
  • onSurfaceChanged
    • Called when the GLSurfaceView's size changes, and also once after it is first created.
  • onDrawFrame
    • Called every time a frame is drawn.
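
Below is a minimal setup sketch, assuming an ES 2.0 renderer implementation named MyRenderer (a hypothetical name standing in for something like the PointRenderer shown later):

    import android.app.Activity;
    import android.opengl.GLSurfaceView;
    import android.os.Bundle;

    public class GLDemoActivity extends Activity {

        private GLSurfaceView glSurfaceView;

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            glSurfaceView = new GLSurfaceView(this);
            // Request an OpenGL ES 2.0 context before setting the renderer
            glSurfaceView.setEGLContextClientVersion(2);
            // The renderer's callbacks run on a dedicated GL thread
            glSurfaceView.setRenderer(new MyRenderer());
            setContentView(glSurfaceView);
        }

        @Override
        protected void onResume() {
            super.onResume();
            glSurfaceView.onResume();
        }

        @Override
        protected void onPause() {
            super.onPause();
            glSurfaceView.onPause();
        }
    }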

When implementing a renderer, there are three questions to consider first:

  • Where to draw?
  • What shape to draw?
  • What color to draw with?

Our program mainly solves these three problems. What follows walks through drawing a single point with OpenGL.

OpenGL coordinates

The phone screen's coordinate system takes the upper-left corner as the origin (0, 0), with the positive X axis pointing right and the positive Y axis pointing down. OpenGL, however, has its own coordinate definition.

Suppose we define a point at coordinates (4.3, 2.1), i.e., its X and Y values. OpenGL will ultimately map the coordinates we define onto the actual physical coordinates of the phone screen.

For both the X and the Y coordinate, OpenGL maps the phone screen to the range [-1, 1]. That is: the left edge of the screen corresponds to -1 on the X axis and the right edge to +1; the bottom edge corresponds to -1 on the Y axis and the top edge to +1.

This coordinate range is the same regardless of the shape and size of the screen, as shown in the following image:

https://user-gold-cdn.xitu.io/2018/5/8/1633d19c44e08a4f?w=258&h=240&f=png&s=16011

Therefore, the point (4.3, 2.1) defined above is eventually mapped outside the phone screen and is invisible.

Here, if we draw a point at the origin (0, 0), its mapped position is the center of the phone screen, as the sketch below illustrates.
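
To make the mapping concrete, here is a small hedged sketch; screenToNdc is a hypothetical helper, not part of any OpenGL API:

    // Convert a screen pixel coordinate to OpenGL normalized device coordinates
    public static float[] screenToNdc(float px, float py, int screenWidth, int screenHeight) {
        // X: [0, width] maps to [-1, +1], left to right
        float ndcX = (px / screenWidth) * 2f - 1f;
        // Y: [0, height] maps to [+1, -1], because screen Y grows downward
        // while OpenGL's Y grows upward
        float ndcY = 1f - (py / screenHeight) * 2f;
        return new float[]{ndcX, ndcY};
    }

For a 1080 x 1920 screen, screenToNdc(540, 960, 1080, 1920) returns (0, 0), the screen center, matching the point above.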

basic primitives

After solving the problem of position, the next step is the problem of shape and color.

Just as Android's Canvas object provides methods for basic drawing (drawPoint, drawRect, drawLine, etc.), OpenGL provides exactly three basic primitives for drawing:

  • points
  • lines
  • triangles

All other shapes are built from these three primitives. For example, a rectangle can be seen as two triangles, as in the sketch below.
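
Here is a hedged sketch of a rectangle expressed as two triangles in OpenGL coordinates; the exact coordinate values are illustrative:

    // Six (x, y) vertices: two triangles sharing the diagonal
    // from (0.5, -0.5) to (-0.5, 0.5)
    float[] rectangleVertices = {
            // Triangle 1
            -0.5f, -0.5f,
             0.5f, -0.5f,
            -0.5f,  0.5f,
            // Triangle 2
             0.5f, -0.5f,
             0.5f,  0.5f,
            -0.5f,  0.5f,
    };
    // Later drawn with: glDrawArrays(GL_TRIANGLES, 0, 6);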

Since we want to draw a point, a single coordinate pair suffices to represent it. To draw a triangle, you would need three points in the coordinate system.

The next question is how OpenGL draws the point data we have defined.

render pipeline

The first concept to understand is the rendering pipeline.

According to Baidu Encyclopedia's definition, the rendering pipeline, also called the pixel pipeline, consists of parallel processing units inside the display chip (GPU) that process graphics signals independently of one another.

The graphics card's rendering pipeline is an important part of the display core: a set of dedicated channels responsible for shading graphics. The number of rendering pipelines is one of the most important parameters determining a display chip's performance and grade.

Graphics cards at this stage split the work into vertex rendering and pixel rendering. Internally, the card is divided into two major areas: one is the vertex rendering unit (also called the vertex shader), mainly responsible for constructing geometry, that is, building the model; the other is the pixel rendering pipeline, mainly responsible for coloring the geometry produced from the vertices.

https://user-gold-cdn.xitu.io/2018/5/8/1633d19c4a073089?w=1614&h=410&f=png&s=272841

The figure above shows the processing flow of the rendering pipeline in OpenGL.

As you can see, the flow starts by reading the vertex data and then executes two shaders:

  • vertex shader
    • Mainly responsible for constructing the geometry, i.e., building the graphics model from the vertex coordinates.
  • fragment shader
    • Mainly responsible for coloring the geometry produced from the vertices.

Because these two shaders are critical to the final image, and because they can be controlled through programming, the programmable rendering pipeline is considered superior to the fixed-function pipeline.

In fact, as display technology evolves, this split will cease to exist: vertex shaders and pixel pipelines are being uniformly replaced by stream processors (Stream Processors).

But at present, OpenGL on mobile phones still uses the rendering pipeline, and with it we can tackle the two remaining problems: what shape to draw and what color to use. The work that follows also revolves around this rendering pipeline.

memory copy

With the vertex coordinates defined and the next step clear (the coordinates will go through a series of processing steps in the rendering pipeline), the question becomes how to pass the vertex coordinates to the rendering pipeline.

The OpenGL implementation is provided by the display device manufacturer and runs directly on the hardware as a native system library, while the vertex data we define in Java runs on the virtual machine. This raises the question of how to copy memory from the Java layer to the Native layer.

One approach is to use JNI directly and call into the native system library, that is, develop OpenGL in C++. This approach is certainly worth learning.

Another way is to copy the memory block at the Java layer to the Native layer.

Using the ByteBuffer.allocateDirect() method, you can allocate a block of native memory that is not managed by Java's garbage collector.

Its usage follows much the same pattern everywhere; here is the common template:

   // Declare a FloatBuffer
   private FloatBuffer floatBuffer;
   // Define the vertex data
   float[] vertexData = new float[16];
   // Initialize the FloatBuffer and put the vertex data into it
   floatBuffer = ByteBuffer
       .allocateDirect(vertexData.length * Constant.BYTES_PRE_FLOAT)
       .order(ByteOrder.nativeOrder())
       .asFloatBuffer()
       .put(vertexData);

The allocateDirect method allocates memory of the specified size (Constant.BYTES_PRE_FLOAT is presumably 4, the number of bytes in a float). The next step is to tell the ByteBuffer to organize its contents in native byte order. Byte order matters whenever a value occupies multiple bytes, such as a 32-bit integer: the bytes may be arranged from most significant to least significant, or vice versa.

The asFloatBuffer method then returns a FloatBuffer instance that is a view over the underlying bytes, letting us work with floating-point numbers instead of manipulating individual bytes directly.

Finally, the put method copies the data from Java-layer memory to the Native layer; this memory is released only when the process ends. Note that put advances the buffer's position to the end of the data, so the position must be reset before the buffer is read.
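
A minimal sketch of this gotcha, assuming vertexData is the float array from the template above:

    FloatBuffer buffer = ByteBuffer
            .allocateDirect(vertexData.length * 4) // 4 bytes per float
            .order(ByteOrder.nativeOrder())
            .asFloatBuffer()
            .put(vertexData);
    // put() leaves the position at the end of the data, so OpenGL would
    // see an "empty" buffer; rewind before handing it to GL
    buffer.position(0);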

vertex shader

Next comes the programmable part: defining the shader (Shader) programs.

Different shaders perform computations on the incoming primitives, determining their positions, colors, and other rendering properties.

The first is the vertex shader.

For each vertex coordinate passed into the rendering pipeline, OpenGL calls the vertex shader once to process the vertex-related data. This processing can be complex or simple.

To define a shader program, you must write it in a special language: the OpenGL Shading Language, GLSL for short.

GLSL resembles the C or Java language, and its program entry point is also a function named main. GLSL really deserves a separate blog post, so I won't elaborate on it for now.

Here is a simple vertex shader program:

attribute vec4 a_Position;
void main()
{
    gl_Position = a_Position;
    gl_PointSize = 30.0;
}

A shader works much like a function call: data is passed in, processed, and passed back out.

Here, gl_Position and gl_PointSize are special built-in variables in the shader that receive our output.

a_Position is a variable we define, of type vec4. The attribute qualifier can only appear in vertex shaders; it is generally used to hold per-vertex data, and it can read data from the data buffer.

The vertex coordinates in the data buffer are assigned to a_Position, and a_Position is passed to gl_Position.

And gl_PointSize fixes the point size to 30.

With the vertex shader in place to generate the final position for each vertex, the next step is to define the fragment shader.

According to the rendering pipeline figure above, between the vertex shader and the fragment shader the primitives must also be assembled and rasterized.

Rasterization Technology

The display screen of a mobile device is made up of thousands of small, individual parts called pixels. Each pixel is usually composed of three separate subcomponents that emit red, green, and blue light; because each pixel is so small, the human eye blends the red, green, and blue light, producing a huge range of colors.

Through rasterization, OpenGL decomposes each point, line, and triangle into a large number of small fragments, which can be mapped onto the pixels of the mobile display to generate an image. These fragments are analogous to the pixels on a display: each holds a single solid color.

As shown below:

https://user-gold-cdn.xitu.io/2018/5/8/1633d19c4a364568?w=285&h=240&f=png&s=39244

OpenGL maps a straight line to a collection of fragments through rasterization, and the display system usually maps these fragments directly to pixels on the screen, so one fragment corresponds to one pixel.

Once this display principle is understood, you can step in and influence the process. That is exactly what the fragment shader is for.

fragment shader

The main purpose of the fragment shader is to tell the GPU what the final color of each fragment should be.

The fragment shader is called once for each fragment of the primitive, so if a triangle is mapped to 10,000 fragments, the fragment shader is called 10,000 times.

Here is a simple fragment shader program:

precision mediump float;
uniform vec4 u_Color;
void main()
{
    gl_FragColor = u_Color;
}

Here, gl_FragColor is the built-in variable holding the color that OpenGL finally renders, and u_Color is a variable we define. By binding to u_Color and assigning it a value, that value is passed through to gl_FragColor at the Native layer.

The first line, precision mediump float, declares the default precision of floating-point values in the fragment shader; of the three options (lowp, mediump, highp), medium precision is used here. The uniform qualifier means the value is the same for every fragment in a draw call, i.e., the color is fixed, which is fine for displaying a solid color for now.

Compile OpenGL programs

With the shaders' roles and rasterization understood, the flow of the rendering pipeline becomes clearer. The next step is to compile the OpenGL program.

The basic process of compiling an OpenGL program is as follows:

  • Compile the shaders
  • Create the OpenGL program and link the shaders
  • Validate the OpenGL program
  • Use the OpenGL program

Compile the shader

Create a separate file for the shader source, then read the file's contents into a string. This is cleaner than hard-coding the shader program as a string literal.

When the shader program content has been read, it can be compiled.

    // Compile a vertex shader
    public static int compileVertexShader(String shaderCode) {
        return compileShader(GL_VERTEX_SHADER, shaderCode);
    }

    // Compile a fragment shader
    public static int compileFragmentShader(String shaderCode) {
        return compileShader(GL_FRAGMENT_SHADER, shaderCode);
    }

    // Compile a shader of the given type
    private static int compileShader(int type, String shaderCode) {
        // Create a shader object ID for the given type
        final int shaderObjectId = glCreateShader(type);
        if (shaderObjectId == 0) {
            return 0;
        }
        // Attach the shader source to the shader object
        glShaderSource(shaderObjectId, shaderCode);
        // Compile the shader
        glCompileShader(shaderObjectId);
        // Check whether compilation failed
        final int[] compileStatus = new int[1];
        glGetShaderiv(shaderObjectId, GL_COMPILE_STATUS, compileStatus, 0);
        if (compileStatus[0] == 0) {
            // Delete the shader object on failure
            glDeleteShader(shaderObjectId);
            return 0;
        }
        return shaderObjectId;
    }

The code above creates a shader ID with glCreateShader, attaches the shader source with glShaderSource, compiles it with glCompileShader, and finally checks for failure with glGetShaderiv.

The glGetShaderiv function is quite general: together with its sibling glGetProgramiv, it is used to check results both at the shader stage and at the OpenGL program stage. When compilation fails, it also helps to log why, as in the sketch below.
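
A hedged sketch of logging the reason for a compile failure inside compileShader, assuming android.util.Log and the same static import of android.opengl.GLES20 as the surrounding code:

    final int[] compileStatus = new int[1];
    glGetShaderiv(shaderObjectId, GL_COMPILE_STATUS, compileStatus, 0);
    if (compileStatus[0] == 0) {
        // glGetShaderInfoLog returns a human-readable message,
        // e.g. a GLSL syntax error with its line number
        Log.e("ShaderHelper", "Shader compile failed: "
                + glGetShaderInfoLog(shaderObjectId));
        glDeleteShader(shaderObjectId);
    }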

Create the OpenGL program and link the shaders

The next step is to create an OpenGL program and attach the shaders to it.

    public static int linkProgram(int vertexShaderId, int fragmentShaderId) {
        // Create an OpenGL program ID
        final int programObjectId = glCreateProgram();
        if (programObjectId == 0) {
            return 0;
        }
        // Attach the vertex shader
        glAttachShader(programObjectId, vertexShaderId);
        // Attach the fragment shader
        glAttachShader(programObjectId, fragmentShaderId);
        // After attaching the shaders, link the OpenGL program
        glLinkProgram(programObjectId);
        final int[] linkStatus = new int[1];
        // Check whether linking failed
        glGetProgramiv(programObjectId, GL_LINK_STATUS, linkStatus, 0);
        if (linkStatus[0] == 0) {
            // Delete the OpenGL program on failure
            glDeleteProgram(programObjectId);
            return 0;
        }
        return programObjectId;
    }

First glCreateProgram creates an OpenGL program, then glAttachShader attaches the shader IDs to it, glLinkProgram links the program, and finally glGetProgramiv checks whether linking failed.

Validate the OpenGL program

After linking the OpenGL program, the next step is to verify that the program is usable.

    public static boolean validateProgram(int programObjectId) {
        glValidateProgram(programObjectId);
        final int[] validateStatus = new int[1];
        glGetProgramiv(programObjectId, GL_VALIDATE_STATUS, validateStatus, 0);
        return validateStatus[0] != 0;
    }

The glValidateProgram function performs the validation, and glGetProgramiv with GL_VALIDATE_STATUS reads back the result. A sketch of logging the failure reason follows.
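
A hedged sketch, under the same import assumptions as the compile-log example above:

    if (validateStatus[0] == 0) {
        Log.w("ShaderHelper", "Program validation failed: "
                + glGetProgramInfoLog(programObjectId));
    }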

Use the OpenGL program

When everything is done, it's time to use the OpenGL program.

    // Build the OpenGL program: read, compile, link, validate
    public static int buildProgram(Context context, int vertexShaderSource, int fragmentShaderSource) {
        int program;

        int vertexShader = compileVertexShader(
                TextResourceReader.readTextFileFromResource(context, vertexShaderSource));

        int fragmentShader = compileFragmentShader(
                TextResourceReader.readTextFileFromResource(context, fragmentShaderSource));

        program = linkProgram(vertexShader, fragmentShader);

        validateProgram(program);

        return program;
    }

    // Once built, declare that this program is the one to use
    mProgram = ShaderHelper.buildProgram(context, R.raw.point_vertex_shader
            , R.raw.point_fragment_shader);

    glUseProgram(mProgram);

Combining the steps above, the OpenGL program ID returned by the buildProgram function is all we need, and the glUseProgram call tells OpenGL that this program is the one to use.

draw

With the OpenGL program compiled, all that remains is the actual drawing, which brings us back to the Renderer.

public class PointRenderer extends BaseRenderer {

    private Point mPoint;

    public PointRenderer(Context mContext) {
        super(mContext);
    }

    @Override
    public void onSurfaceCreated(GL10 gl, EGLConfig config) {
        super.onSurfaceCreated(gl, config);
        glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
        // Initialize here rather than in the constructor,
        // otherwise a thread error is raised
        mPoint = new Point(mContext);
        // Bind the corresponding vertex data
        mPoint.bindData();
    }

    @Override
    public void onSurfaceChanged(GL10 gl, int width, int height) {
        // Set the viewport size
        glViewport(0, 0, width, height);
    }

    @Override
    public void onDrawFrame(GL10 gl) {
        // Clear the screen
        glClear(GL_COLOR_BUFFER_BIT);
        // Draw
        mPoint.draw();
    }
}

Initialization and data binding happen in the onSurfaceCreated function; the size of the viewport is set in onSurfaceChanged; and the drawing is performed in onDrawFrame.

To simplify the rendering flow, all the operations are placed inside the object being rendered: a Point object is declared to represent the point to draw.

public class Point extends BaseShape {

    // Variables defined in the shaders, bound and assigned at the Java layer
    private static final String U_COLOR = "u_Color";
    private static final String A_POSITION = "a_Position";
    private int aColorLocation;
    private int aPositionLocation;

    float[] pointVertex = {
            0f, 0f
    };

    public Point(Context context) {
        super(context);
        mProgram = ShaderHelper.buildProgram(context, R.raw.point_vertex_shader
                , R.raw.point_fragment_shader);
        glUseProgram(mProgram);
        vertexArray = new VertexArray(pointVertex);
        POSITION_COMPONENT_COUNT = 2;
    }

    @Override
    public void bindData() {
        // Look up the locations of the shader variables
        aColorLocation = glGetUniformLocation(mProgram, U_COLOR);
        aPositionLocation = glGetAttribLocation(mProgram, A_POSITION);
        // Wire the attribute to the vertex data: where to start reading
        // and the stride between reads
        vertexArray.setVertexAttribPointer(0, aPositionLocation, POSITION_COMPONENT_COUNT,
                0);
    }

    @Override
    public void draw() {
        // Assign a value to the bound uniform
        glUniform4f(aColorLocation, 0.0f, 0.0f, 1.0f, 1.0f);
        glDrawArrays(GL_POINTS, 0, 1);
    }
}

In the constructor of Point, the OpenGL program is built and put to use. In the bindData function, the variables we declared in the shaders are bound: glGetUniformLocation looks up u_Color and glGetAttribLocation looks up a_Position; note that attribute and uniform variables use different lookup methods. Finally, POSITION_COMPONENT_COUNT specifies that each vertex consists of 2 components.

After binding the variables, the next step is to assign values to them. A uniform variable holds a fixed value, so directly calling glUniform4f is enough, whereas an attribute variable must be wired to the values in the vertex data; the vertexArray.setVertexAttribPointer method completes this task.

    // Wire an attribute to the vertex data and enable it
    public void setVertexAttribPointer(int dataOffset, int attributeLocation, int componentCount, int stride) {
        floatBuffer.position(dataOffset);
        glVertexAttribPointer(attributeLocation, componentCount, GL_FLOAT, false, stride, floatBuffer);
        glEnableVertexAttribArray(attributeLocation);
        floatBuffer.position(0);
    }

In the setVertexAttribPointer method, the glVertexAttribPointer call binds the attribute to the data; its parameters are defined as follows:

https://user-gold-cdn.xitu.io/2018/5/8/1633d19c49dcd183?w=640&h=343&f=png&s=194254

The glEnableVertexAttribArray method must then be called to enable the attribute.

Finally, the glDrawArrays method performs the actual drawing. GL_POINTS specifies the primitive type, and the parameters 0, 1 mean: start at vertex 0 and draw 1 vertex. With more vertices in the buffer, the count changes accordingly, as in the sketch below.
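
As an illustration (assuming, hypothetically, that the vertex buffer held three (x, y) points instead of one):

    // Hypothetical data: three points on a horizontal line
    float[] threePoints = {
            -0.5f, 0f,
             0.0f, 0f,
             0.5f, 0f,
    };
    // Start at vertex 0 and draw 3 vertices
    glDrawArrays(GL_POINTS, 0, 3);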

The above steps complete the drawing of a point, as shown in the figure:

https://user-gold-cdn.xitu.io/2018/5/8/1633d19c46dbcb8a?w=151&h=240&f=png&s=8495

For the full code, you can refer to my GitHub project:

github.com/glumes/Andr…

summary

The principle of drawing with OpenGL follows the GPU rendering pipeline: after the vertex data is provided, the vertex shader executes, then the fragment shader executes, and finally the result is mapped onto the phone screen.

In the programmable stages, we do what we want in the vertex shader and fragment shader: write the shader code, compile and link it into an OpenGL program, then bind the variables defined in the shaders and point them at the vertex data. At this point, all preparation is done.

Finally, the drawing itself begins in the Renderer.

refer to

1. "OpenGL ES Application Development Practice Guide" 2. "OpenGL Programming Guide" (the eighth edition of the original book) 3. https://github.com/glumes/AndroidOpenGLTutorial

question

1. https://stackoverflow.com/questions/11286819/opengl-es-api-with-no-current-context

The following error is reported:

call to OpenGL ES API with no current context (logged once per thread)

This happens because GL objects are initialized in the Renderer's constructor, which runs on the main thread, rather than on the OpenGL thread. A sketch of the fix follows.
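
A minimal sketch of the fix, reusing the PointRenderer from above (mContext is assumed to be stored by BaseRenderer):

    public class PointRenderer extends BaseRenderer {

        private Point mPoint;

        public PointRenderer(Context context) {
            super(context);
            // Wrong: new Point(context) here would call GL functions on the
            // main thread and trigger "no current context"
        }

        @Override
        public void onSurfaceCreated(GL10 gl, EGLConfig config) {
            super.onSurfaceCreated(gl, config);
            // Right: the GL context is current on this thread now
            mPoint = new Point(mContext);
        }
    }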

Finally, if you found this article helpful, please follow the WeChat public account: On paper.

