Do-it-yourself OpenGL: OpenGL turns out to be so simple (3)

 

The previous article in this write-your-own-OpenGL series stopped for lack of time; let's pick up where we left off.

 

1. First we define the following variables

 

	public static class DisplayInfo{
		Canvas canvas;
		int height;
		int width;
	}
	private static DisplayInfo mInfo;
	
	public static void initDrawEnvirement(DisplayInfo info){
		mInfo = info;
	}
	
	public enum MatrixMode{
		MODE_MODEL_VIEW,
		MODE_PROJECTION
	}
	//for testing
	public static M4 mCurrentSurfaceViewProjectionMatrix = null;
	public static M4 mCurrentSurfaceViewModelViewMatrix = null;
	
	// stacks backing glPushMatrix / glPopMatrix
	public static Stack<M4> mModelViewMatrixStack = new Stack<M4>();
	public static Stack<M4> mProjectionMatrixStack = new Stack<M4>();
	// the matrices currently edited by glLoadIdentity / glMultMatrix
	public static M4 mCurrentModelViewMatrix = new M4();
	public static M4 mCurrentProjectionMatrix = new M4();
	// built by glViewport / glDepthRangef
	public static M4 mCurrentViewPortMatrix = new M4();
	public static MatrixMode mMatrixMode = MatrixMode.MODE_MODEL_VIEW;
	public static float mViewPortZNear = 0.0f;
	public static float mViewPortZFar = 1.0f;
	// current vertex color and the vertex list filled by glVertexPointer
	public static GLColor mVertexColor = new GLColor(0, 0, 0);
	public static ArrayList<GLVertex> mVertexList = new ArrayList<GLVertex>();

From the above you can see that our state basically consists of the following:

 

1. ModelView matrix

2. Projection matrix

3. Viewport matrix

Together, these matrices do all the work of converting a point (x, y, z) in 3D space into 2D screen coordinates. Let's look at how the matrix-related functions are implemented.
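All of these functions operate on the M4 class, whose implementation is not listed in this article. For reference, here is a minimal sketch of what it might look like, inferred only from how it is used in the snippets here (the public field m, setIdentity(), multiply() and the copy constructor); treat it as an assumption, not the author's actual code.

	// Minimal sketch of the assumed M4 (4x4 float matrix) class.
	public class M4 {
		public float[][] m = new float[4][4];

		public M4() { setIdentity(); }

		// copy constructor, used by glPushMatrix to snapshot the current matrix
		public M4(M4 other) {
			for (int i = 0; i < 4; i++)
				m[i] = other.m[i].clone();
		}

		public void setIdentity() {
			for (int i = 0; i < 4; i++)
				for (int j = 0; j < 4; j++)
					m[i][j] = (i == j) ? 1f : 0f;
		}

		// this = this * rhs, matching glMultMatrix's post-multiplication
		public void multiply(M4 rhs) {
			float[][] r = new float[4][4];
			for (int i = 0; i < 4; i++)
				for (int j = 0; j < 4; j++)
					for (int k = 0; k < 4; k++)
						r[i][j] += m[i][k] * rhs.m[k][j];
			m = r;
		}
	}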

 

Matrix mode

    public static void glMatrixMode(MatrixMode mode){
        mMatrixMode = mode;
    }

 

Matrix specification

glLoadIdentity

	public static void glLoadIdentity(){
		if(mMatrixMode == MatrixMode.MODE_MODEL_VIEW){
			mCurrentModelViewMatrix.setIdentity();
		}else{
			mCurrentProjectionMatrix.setIdentity();
		}
	}

 

Matrix save

glPushMatrix

	public static void  glPushMatrix(){
		if(mMatrixMode == MatrixMode.MODE_MODEL_VIEW){
			mModelViewMatrixStack.push(new M4(mCurrentModelViewMatrix));
		}else{
			mProjectionMatrixStack.push(new M4(mCurrentProjectionMatrix));
		}
	}

 

Matrix recovery

glPopMatrix

	public static void  glPopMatrix(){
		if(mMatrixMode == MatrixMode.MODE_MODEL_VIEW){
			mCurrentModelViewMatrix = mModelViewMatrixStack.pop();
		}else{
			mCurrentProjectionMatrix = mProjectionMatrixStack.pop();
		}
	}

 

Matrix modification

glMultMatrix

	public static void glMultMatrix(M4 m){
		if(mMatrixMode == MatrixMode.MODE_MODEL_VIEW){
			mCurrentModelViewMatrix.multiply(m);
		}else{
			mCurrentProjectionMatrix.multiply(m);
		}
	}
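Putting the matrix functions together, a short usage sketch (MiniGL is an assumed name for the class holding these static methods; the translation matrix is built by hand because this article defines no glTranslatef):

	// Save the current model-view matrix, apply a temporary translation,
	// draw, then restore the saved matrix.
	MiniGL.glMatrixMode(MiniGL.MatrixMode.MODE_MODEL_VIEW);
	MiniGL.glLoadIdentity();

	MiniGL.glPushMatrix();
	M4 translate = new M4();
	translate.setIdentity();
	translate.m[0][3] = 2.0f;        // +2 units along x
	MiniGL.glMultMatrix(translate);  // current = current * translate
	// ... specify vertices and draw here ...
	MiniGL.glPopMatrix();            // back to the matrix saved by glPushMatrix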

ViewPort matrix assignment

 

    public static void glDepthRangef(
        float zNear,
        float zFar
    ){
    	mViewPortZNear = zNear;
    	mViewPortZFar = zFar;
    }

 

	public static   void glViewport(
            int x,
            int y,
            int width,
            int height
     )
	{
		int surfaceHeight = mInfo.height;
		float far = mViewPortZFar;
		float near = mViewPortZNear;
		// scale/offset mapping NDC x in [-1, 1] to window x in [x, x + width]
		float sx = width/2.0f;
		float ox = sx + x;
		// the Canvas y axis points down, so the y scale is negated in the matrix
		// and the offset is measured from the top of the surface
		float sy = height/2.0f;
		float oy = sy + surfaceHeight - height - y;
		// depth range mapping: NDC z in [-1, 1] -> [near, far]
		float A = (far - near)/2.0f;
		float B = (far + near)/2.0f;
		// compute viewport matrix
		float[][] f = new float[4][4];
		f[0][0] = sx;  f[0][1] = 0;    f[0][2] = 0;  f[0][3] = ox;
		f[1][0] = 0;   f[1][1] = -sy;  f[1][2] = 0;  f[1][3] = oy;
		f[2][0] = 0;   f[2][1] = 0;    f[2][2] = A;  f[2][3] = B;
		f[3][0] = 0;   f[3][1] = 0;    f[3][2] = 0;  f[3][3] = 1;
		mCurrentViewPortMatrix = new M4();
		mCurrentViewPortMatrix.m = f;
	}
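As a quick sanity check of the matrix built above (the 480x800 surface and viewport values below are made-up numbers for illustration, not from the article):

	// Assume a 480x800 surface, glViewport(0, 0, 480, 800) and the default
	// depth range [0, 1]. Then:
	//   sx = 240, ox = 240, sy = 400, oy = 400 + 800 - 800 - 0 = 400
	//   A = 0.5, B = 0.5
	// and the matrix maps NDC to window coordinates as
	//   xw = sx * xn + ox,  yw = -sy * yn + oy,  zw = A * zn + B
	float xw  = 240f * -1 + 240f;   // NDC x = -1  ->   0 (left edge)
	float yw  = -400f *  1 + 400f;  // NDC y =  1  ->   0 (top edge, Canvas y points down)
	float xw2 = 240f *  1 + 240f;   // NDC x =  1  -> 480 (right edge)
	float yw2 = -400f * -1 + 400f;  // NDC y = -1  -> 800 (bottom edge)
	float zw  = 0.5f * -1 + 0.5f;   // NDC z = -1  ->   0 (zNear)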

Now all the matrices are in place; what remains is to specify the vertices and the drawing method.

2. Then we specify the vertices

 

	public static  void glVertexPointer(
            int size,
            int type,
            int stride,
            java.nio.Buffer pointer)
	{
		
		if((type != GL10.GL_FLOAT && type != GL10.GL_FIXED) || size != 3){
			throw new RuntimeException("this lib only supports GL_FLOAT/GL_FIXED vertex types and size must equal 3!");
		}
		mVertexList.clear();
		
		// walk the whole buffer, reading size (=3) components per vertex
		int capacity = pointer.capacity();
		pointer.position(0);
		while(true){
			if(capacity >= size){
				capacity-=size;
				GLVertex verTex = new GLVertex();
				if(type == GL10.GL_FLOAT){
					verTex.x= ((FloatBuffer)pointer).get();
					verTex.y= ((FloatBuffer)pointer).get();
					verTex.z= ((FloatBuffer)pointer).get();
				}else if(type == GL10.GL_FIXED){
					// GL_FIXED is 16.16 fixed point; shifting right by 16 keeps
					// only the integer part of each coordinate
					verTex.x = ((IntBuffer)pointer).get()>>16;
					verTex.y = ((IntBuffer)pointer).get()>>16;
					verTex.z = ((IntBuffer)pointer).get()>>16;
				}
				mVertexList.add(verTex);
				// note: unlike real OpenGL (where stride is in bytes), stride is
				// treated here as the number of extra elements to skip per vertex
				if(capacity >= stride){
					capacity -= stride;
					for(int i = 0; i < stride; ++i){
						if(type == GL10.GL_FLOAT){
							((FloatBuffer)pointer).get();
						}else if(type == GL10.GL_FIXED){
							((IntBuffer)pointer).get();
						}
					}
				}else{
					break;
				}
			}else{
				break;
			}
		}
		
	}
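A short usage sketch for glVertexPointer (the triangle data and the class name MiniGL holding these static methods are assumptions for illustration, not from the article):

	// Build one triangle as a direct FloatBuffer, the usual Android way, then
	// hand it to glVertexPointer. Imports: java.nio.ByteBuffer, ByteOrder,
	// FloatBuffer and javax.microedition.khronos.opengles.GL10.
	float[] coords = {
			-0.5f, -0.5f, 0f,
			 0.5f, -0.5f, 0f,
			 0.0f,  0.5f, 0f
	};
	ByteBuffer bb = ByteBuffer.allocateDirect(coords.length * 4);
	bb.order(ByteOrder.nativeOrder());
	FloatBuffer vertexBuffer = bb.asFloatBuffer();
	vertexBuffer.put(coords);
	MiniGL.glVertexPointer(3, GL10.GL_FLOAT, 0, vertexBuffer);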

Doesn't that look simple? The next step is drawing.

3. Finally we draw the image

	public static void glDrawElements(int mode, int mIndexCount,
			int type, Buffer mIndexBuffer) {
		// only GL_TRIANGLES with unsigned short/byte indices is supported
		if(mode != GL10.GL_TRIANGLES
				|| (type != GL10.GL_UNSIGNED_SHORT && type != GL10.GL_UNSIGNED_BYTE)){
			throw new RuntimeException("this lib glDrawElements only supports GL_TRIANGLES with GL_UNSIGNED_SHORT or GL_UNSIGNED_BYTE indices!");
		}
		
		mIndexBuffer.position(0);
		ArrayList<GLVertex> drawingList = preDealVertex();
		//clearColor();
		int capacity = mIndexCount;
		while(true){
			if(capacity >= 3){
				if(type == GL10.GL_UNSIGNED_SHORT){
					capacity -= 3;
					ShortBuffer buffer = (ShortBuffer)mIndexBuffer;
					// mask so the indices are read as unsigned values
					GLVertex v1 = drawingList.get(buffer.get() & 0xFFFF);
					GLVertex v2 = drawingList.get(buffer.get() & 0xFFFF);
					GLVertex v3 = drawingList.get(buffer.get() & 0xFFFF);
					drawTriangles(v1, v2, v3);
				}else if(type == GL10.GL_UNSIGNED_BYTE){
					capacity -= 3;
					ByteBuffer buffer = (ByteBuffer)mIndexBuffer;
					GLVertex v1 = drawingList.get(buffer.get() & 0xFF);
					GLVertex v2 = drawingList.get(buffer.get() & 0xFF);
					GLVertex v3 = drawingList.get(buffer.get() & 0xFF);
					drawTriangles(v1, v2, v3);
				}
				
			}else{
				break;
			}
		}
	}
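Continuing the same sketch, drawing that triangle with glDrawElements (MiniGL is still the assumed class name):

	// Index the three vertices supplied above and draw them as one triangle.
	short[] indices = { 0, 1, 2 };
	ByteBuffer ib = ByteBuffer.allocateDirect(indices.length * 2);
	ib.order(ByteOrder.nativeOrder());
	ShortBuffer indexBuffer = ib.asShortBuffer();
	indexBuffer.put(indices);
	MiniGL.glDrawElements(GL10.GL_TRIANGLES, indices.length,
			GL10.GL_UNSIGNED_SHORT, indexBuffer);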

 

The space transformation is now complete, but this is only the first step of a long march. There is a lot more work to do later:

 

1. Primitive assembly and clipping:

Primitive assembly happens before the viewport transformation. The assembly itself has already been covered above, but no clipping has been done yet.

So next we will look at how to clip away whatever lies outside the view volume.

 

This stage has two main tasks: primitive assembly and primitive processing.

  1. Primitive assembly means combining the vertex data into complete primitives according to the chosen drawing mode. For example, in point mode a single vertex forms a primitive; in line mode every two vertices form a primitive; in triangle mode every three vertices form a primitive.
  2. The most important job of primitive processing is clipping, which eliminates the parts of a primitive lying outside a half-space defined by a clipping plane. Point clipping simply accepts or rejects a vertex, while clipping a line or polygon may require adding extra vertices, depending on how the line or polygon is positioned relative to the clipping plane. In other words, the entirety of a 3D object's primitives is not always visible (displayed on the device screen).
  3. During clipping, if a primitive lies completely inside the view volume and any user-defined clipping planes, it is passed on to the next stage; if it lies completely outside, it is discarded; if part of it is inside and part outside, the primitive has to be clipped (a rough sketch of this accept/reject/clip decision follows this list).
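Here is that sketch (an assumption for illustration, not part of the article's library; GLVertex is assumed to expose the x, y, z fields used earlier):

	// Minimal sketch of the accept / reject / clip decision from item 3:
	// test a triangle's vertices against the NDC cube [-1, 1]^3. A real
	// clipper must also handle triangles whose vertices are all outside
	// but whose area still crosses the view volume.
	enum ClipResult { ACCEPT, REJECT, NEEDS_CLIPPING }

	static boolean inside(GLVertex v) {
		return v.x >= -1 && v.x <= 1
			&& v.y >= -1 && v.y <= 1
			&& v.z >= -1 && v.z <= 1;
	}

	static ClipResult classify(GLVertex a, GLVertex b, GLVertex c) {
		int in = (inside(a) ? 1 : 0) + (inside(b) ? 1 : 0) + (inside(c) ? 1 : 0);
		if (in == 3) return ClipResult.ACCEPT;    // completely inside: pass through
		if (in == 0) return ClipResult.REJECT;    // (naively) treat as completely outside
		return ClipResult.NEEDS_CLIPPING;         // straddles a clip plane
	}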

2. Rasterization: Texture Interpolation

  • 1. Although the geometry of a virtual 3D world is three-dimensional, the devices we display on are two-dimensional, so before the actual rasterization work the objects in the 3D world must first be projected onto the view plane. Depending on the camera position, the same 3D scene can project onto the view plane with very different results.
  • 2. In addition, the geometry of objects in the virtual 3D world is usually described by continuous mathematical quantities, so the projected result is continuous as well. The screens of today's display devices, however, are discrete, so the projection result must also be discretized: it is broken down into small discrete units, generally called fragments.
    • In fact, each fragment corresponds to a pixel in the frame buffer. The reason it is not simply called a pixel is that objects in 3D space can occlude one another. Although a 3D scene is ultimately displayed on screen as a whole, each primitive of each 3D object is processed independently. The system may first process a primitive far from the viewpoint, rasterize it into a set of fragments, and store them at the corresponding positions in the frame buffer. If a primitive closer to the viewpoint is processed later, it too is rasterized into fragments, and some of them will map to the same frame-buffer positions; the closer fragments then overwrite the farther ones (how this is detected is handled in the depth-testing stage). So a fragment does not necessarily become a pixel on the final screen, and calling it a pixel would be inaccurate; think of it as a candidate pixel.
    • Each fragment carries information such as its coordinates, color, texture coordinates, and depth, which are generated by interpolating the vertex attributes of the primitive it belongs to (a small interpolation sketch follows this list).
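Here is that sketch (again an assumption, not the article's code): every attribute of a fragment is blended from the triangle's three vertices with the same barycentric weights.

	// Sketch: interpolate one scalar vertex attribute (depth, a color channel,
	// a texture coordinate, ...) across a triangle using barycentric weights
	// w0 + w1 + w2 = 1 computed for the fragment's position.
	static float interpolateAttribute(float a0, float a1, float a2,
	                                  float w0, float w1, float w2) {
		return a0 * w0 + a1 * w1 + a2 * w2;
	}
	// e.g. a fragment at weights (0.2, 0.3, 0.5) would get
	// depth = interpolateAttribute(v1.z, v2.z, v3.z, 0.2f, 0.3f, 0.5f);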

3. Clipping against the near and far planes

 

 

 

 

If you need the source code, please download this software to your phone.

http://a.app.qq.com/o/simple.jsp?pkgname=com.wa505.kf.epassword

 

References

http://blog.csdn.net/u013746357/article/details/52799601

http://www.songho.ca/opengl/gl_transform.html

http://mobile.51cto.com/aengine-437172.htm

 

 

 
