OpenGL ES Primer: Coordinate Systems and Shaders -- The OpenGL Rendering Process

OpenGL coordinate system

Before learning about the coordinate systems, let's first understand the dimensions used in graphics:

The 2D Cartesian coordinate system

The X and Y axes are perpendicular to each other and together define the XY plane. In other words, in any coordinate system, two perpendicular axes define a plane; a system with only two axes can therefore only be used to draw planar graphics.

The 3D Cartesian coordinate system

The 3D Cartesian coordinate system adds a depth axis (Z) to the 2D coordinate system. (The Z axis is actually perpendicular to both the X and Y axes; it is drawn at an angle only for ease of illustration.) You have probably heard of left-handed and right-handed coordinate systems, illustrated below:

Left-handed coordinate system: stretch out your left hand with the thumb pointing in the positive X direction and the index finger pointing in the positive Y direction; the remaining three fingers then point in the positive Z direction. Right-handed coordinate system: stretch out your right hand with the thumb pointing in the positive X direction and the index finger pointing in the positive Y direction; the remaining three fingers then point in the positive Z direction. Note that the difference between the two is that their Z axes point in opposite directions. The coordinate system described above is in fact a right-handed Cartesian coordinate system.

Let's take a brief look at several OpenGL coordinate-system concepts:

OpenGL camera coordinate system

The camera coordinate system, also known as the viewer (eye) coordinate system, takes the position of the OpenGL viewer as its origin.

Local coordinate system (object coordinate system), world coordinate system, and inertial coordinate system

Local coordinate system: also called the object coordinate system (or model coordinate system), it is associated with a particular object, and every object has its own coordinate system. The coordinate systems of different objects are independent of one another. When an object translates or rotates, its coordinate system translates or rotates along with it in sync: the object and its coordinate system are bound to each other. (For example: when a crowd of people is in motion, each person has their own coordinate system and direction; everyone moves according to their own object coordinate system, unaffected by the others.)

World coordinate system: a special coordinate system that establishes the frame of reference needed to describe the other coordinate systems. In other words, the world coordinate system can describe the position of every other coordinate system or object, and the world coordinate system itself is fixed.

Inertial coordinate system: a coordinate system introduced to simplify the conversion between the world coordinate system and the object coordinate system. The origin of the inertial coordinate system coincides with the origin of the object coordinate system, while its axes are parallel to the axes of the world coordinate system. With the inertial coordinate system in place, converting from the object coordinate system to the inertial coordinate system requires only a rotation, and converting from the inertial coordinate system to the world coordinate system requires only a translation.

So, how many coordinate-space conversions does vertex coordinate data undergo before a graphic is displayed on the screen?

In the figure above, OpenGL itself defines only the clip coordinate system, normalized device coordinates, and screen coordinates; the object coordinate system, world coordinate system, and camera coordinate system are coordinate systems defined for the user's convenience. The model transform, view transform, and projection transform can be specified by the user as needed and are carried out in the vertex shader; perspective division and the viewport transform are performed automatically by OpenGL after the vertex shader stage completes. Let's use photographing an object with a camera (that is, the process of going from a three-dimensional object to a two-dimensional image) to illustrate the coordinate transformation process above:

1. Model transformation

Placing an object in a scene corresponds to the model transform. The model transform is performed in the world coordinate system: the center of the object model starts out at the origin of the coordinate system, and by translating (glTranslate), scaling (glScale), and rotating (glRotate) the object model we adjust its position in the world coordinate system.
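As a minimal sketch, here is a model transform using the legacy fixed-function calls named above (desktop OpenGL; in OpenGL ES 2.0 and later you would instead build the equivalent model matrix yourself and upload it as a uniform). The specific numbers are arbitrary placeholders:

#include <GL/gl.h>

void place_object(void)
{
    glMatrixMode(GL_MODELVIEW);           /* select the model-view matrix stack */
    glLoadIdentity();
    glTranslatef(1.0f, 0.0f, -5.0f);      /* translate the model in world space */
    glRotatef(45.0f, 0.0f, 1.0f, 0.0f);   /* rotate 45 degrees about the Y axis */
    glScalef(2.0f, 2.0f, 2.0f);           /* uniformly scale the model */
    /* ... issue the object's draw calls here ... */
}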

2. View transformation

Mounting the camera on a tripod and aiming it at the three-dimensional object corresponds to adjusting the viewpoint in OpenGL, i.e., the view transform. After the model transform, the object's coordinates are in the world coordinate system; the view transform then determines how the objects in the scene appear, according to the position and orientation of the viewpoint. When actually photographing an object, we can keep the object still and adjust the camera's distance and angle relative to it, which corresponds to the view transform; or we can keep the camera fixed and move the object away from the camera, which corresponds to the model transform. In fact, in OpenGL, rotating an object counter-clockwise is equivalent to rotating the camera clockwise.

The view transform can be understood as follows: set the position and angle of the camera, then observe the object in the world coordinate system. In the left part of the figure, the camera is placed in the world coordinate system; in the right part, the view transform pulls the object model from the world coordinate system into camera space (i.e., view space).
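One common way to express the view transform in the same legacy pipeline is gluLookAt from GLU; a sketch, with illustrative eye, center, and up values:

#include <GL/gl.h>
#include <GL/glu.h>

void aim_camera(void)
{
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(0.0, 2.0, 6.0,   /* eye: camera position in world space */
              0.0, 0.0, 0.0,   /* center: point the camera looks at   */
              0.0, 1.0, 0.0);  /* up: world-space up direction        */
}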

3. Projection transformation

After the model transform and view transform, the object occupies the desired position in the scene. At this point we choose a camera lens and adjust its focal length so that the three-dimensional object is projected onto two-dimensional film. This corresponds to OpenGL projecting the 3D model onto the two-dimensional screen, i.e., the OpenGL projection transform. Let's look at the two kinds of OpenGL projection:

Perspective projection: objects near the viewpoint appear large and objects far from the viewpoint appear small, vanishing at the extreme distance.

Orthographic projection: also called parallel projection. Its view volume is a rectangular parallelepiped (a box), and its defining characteristic is that no matter how far an object is from the camera, its projected size does not change. If, during development, you are rendering two-dimensional, planar graphics, you can use orthographic projection, which avoids the "near objects look large, far objects look small" effect. If you need to render three-dimensional graphics, use perspective projection to preserve their realism. (Orthographic projection can also be used for three-dimensional graphics, but the displayed result has no stereoscopic effect.)
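A short sketch contrasting the two projections with the fixed-function API (the numeric bounds are placeholders, not values from this article):

#include <GL/gl.h>
#include <GL/glu.h>

void use_perspective(void)
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(60.0,         /* vertical field of view, in degrees */
                   4.0 / 3.0,    /* aspect ratio (width / height)      */
                   0.1, 100.0);  /* near and far clipping planes       */
}

void use_orthographic(void)
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(-2.0, 2.0,    /* left, right */
            -2.0, 2.0,    /* bottom, top */
             0.1, 100.0); /* near, far   */
}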

4. Perspective division

This step is performed automatically by OpenGL after the vertex shader stage completes; we developers do not need to handle it.

Shader rendering pipeline

As the figure shows, after the vertex shader processes the vertices, the primitives enter the primitive assembly stage. This stage performs the clipping, perspective division, and viewport transform operations.

Let's first explain the primitive assembly process:

1. Coordinate systems

The vertices of an object are input to OpenGL in the local coordinate space, which is the coordinate space most likely used when modeling and storing an object. After the vertex shader executes, the vertex positions are considered to be in clip coordinate space. A vertex position is converted from the local coordinate system (i.e., object coordinates) to clip coordinates by loading and applying the corresponding transformation matrices, which are stored in uniform variables defined in the vertex shader.
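For example, a minimal OpenGL ES 2.0 vertex shader (shown here as a C string) that carries a vertex from local space to clip space via a uniform matrix; the names u_mvpMatrix and a_position are illustrative, not from the original article:

static const char *vertex_shader_src =
    "uniform mat4 u_mvpMatrix;                   \n"  /* combined model-view-projection matrix     */
    "attribute vec4 a_position;                  \n"  /* vertex position in local (object) space   */
    "void main()                                 \n"
    "{                                           \n"
    "    gl_Position = u_mvpMatrix * a_position; \n"  /* the output position is in clip coordinates */
    "}                                           \n";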

2. Clipping

To avoid processing primitives outside the view volume, primitives are clipped in clip space. After the vertex shader executes, vertex positions are in clip coordinate space. A clip coordinate is specified as a four-component coordinate (x, y, z, w). A vertex coordinate defined in clip space is clipped against the scene's clip volume. As the figure shows, the clip volume is defined by six clipping planes, called the near, far, left, right, top, and bottom clipping planes.

As the figure shows, the clipping stage clips each primitive against the clip volume. For each primitive type, the following is done:

Clipping triangles: if a triangle lies completely inside the view volume, no clipping is performed. If the triangle lies completely outside the view volume, it is discarded. If the triangle is partially inside the view volume, it is clipped against the corresponding planes; the clipping operation generates new vertices, and the clipped triangle is arranged as a fan of triangles along the clipping plane.

Clipping lines: if a line lies completely inside the view frustum, no clipping is performed. If the line lies completely outside the view frustum, it is discarded.

Clipping points: if a point lies beyond the near or far plane, it is discarded; otherwise it passes through this stage unchanged.

After clipping, the vertex coordinates undergo perspective division and become normalized device coordinates, which range from -1.0 to 1.0.
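As a small sketch of the test involved: a point in clip coordinates (x, y, z, w) lies inside the clip volume exactly when each of x, y, and z falls within [-w, w]:

#include <stdbool.h>

bool inside_clip_volume(float x, float y, float z, float w)
{
    return -w <= x && x <= w &&   /* between the left and right planes */
           -w <= y && y <= w &&   /* between the bottom and top planes */
           -w <= z && z <= w;     /* between the near and far planes   */
}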

3. Perspective division

Perspective division projects the clip coordinates (x, y, z, w) onto the screen or viewport. The projection divides (x, y, z) by w, computing (x/w, y/w, z/w), which gives the normalized device coordinates (x', y', z'). These normalized coordinates are then converted to actual screen (or window) coordinates according to the size of the viewport. The normalized z coordinate is converted into a screen-space depth value using the near and far depth values specified by glDepthRangef. These conversions take place in the viewport transformation stage.
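A minimal sketch of the arithmetic (OpenGL performs this automatically; the struct and function names are illustrative):

typedef struct { float x, y, z, w; } Vec4;   /* clip coordinates              */
typedef struct { float x, y, z; } Vec3;      /* normalized device coordinates */

Vec3 perspective_divide(Vec4 clip)
{
    Vec3 ndc;
    ndc.x = clip.x / clip.w;   /* each component falls in [-1.0, 1.0] */
    ndc.y = clip.y / clip.w;   /* for points inside the clip volume   */
    ndc.z = clip.z / clip.w;
    return ndc;
}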

4. Viewport transformation

A more formal description of the viewport transform: OpenGL performs perspective division on the clip coordinates, transforming them into normalized device coordinates. It then uses the parameters passed to glViewport to map the normalized device coordinates to screen coordinates, associating each coordinate with a point on the screen. This process is called the viewport transform; it displays the objects projected by the view volume onto the two-dimensional plane of the viewport. In the camera analogy, after the shoot is finished we develop the film and decide how much to enlarge or shrink the photo; that is comparable to the OpenGL viewport transform.

Of course, this step too is performed automatically by OpenGL after the vertex shader stage completes.

void glViewport(GLint x, GLint y, GLsizei w, GLsizei h)
x, y: specify the window coordinates of the lower-left corner of the viewport, in pixels
w, h: specify the width and height of the viewport; these values must be greater than 0
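To make the mapping concrete, here is a sketch of the viewport transform OpenGL applies after the perspective divide, using the glViewport parameters (x, y, w, h) and the glDepthRangef range (n, f); the function and struct names are illustrative:

typedef struct { float x, y, z; } WinCoord;

WinCoord viewport_transform(float ndc_x, float ndc_y, float ndc_z,
                            int x, int y, int w, int h,
                            float n, float f)
{
    WinCoord win;
    win.x = (ndc_x + 1.0f) * 0.5f * (float)w + (float)x;   /* NDC x -> window x        */
    win.y = (ndc_y + 1.0f) * 0.5f * (float)h + (float)y;   /* NDC y -> window y        */
    win.z = (ndc_z * (f - n) + (f + n)) * 0.5f;            /* NDC z -> depth in [n, f] */
    return win;
}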

From the primitive assembly flowchart above we can see that during rendering two kinds of shaders must be provided: a vertex shader for each vertex and a fragment shader for each fragment. The vertex shader executes first and the fragment shader executes last. The vertex shader processes vertices, while the fragment shader computes the colors of pixels. Here is a more intuitive view of the rendering flow:

1. Create the vertices.

2. Process the vertices with the vertex shader.

3. Primitive assembly: connect the vertices into geometry according to the primitive connectivity information.

4. Rasterization: determine which pixel positions on the screen are actually drawn, and generate the fragments that are passed as input to the fragment shader.

5. Execute the fragment shader once for each fragment generated by the rasterization stage (a minimal fragment shader sketch follows this list).

6. Finally, display the graphics.
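As a companion to the vertex shader sketch above, here is a minimal OpenGL ES 2.0 fragment shader (again as a C string; the uniform name u_color is illustrative) that assigns a color to each fragment:

static const char *fragment_shader_src =
    "precision mediump float;    \n"  /* ES fragment shaders must declare a default precision */
    "uniform vec4 u_color;       \n"  /* color supplied by the application                    */
    "void main()                 \n"
    "{                           \n"
    "    gl_FragColor = u_color; \n"  /* color written for this fragment                      */
    "}                           \n";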

5. Rasterization (continuing the pipeline stages above)

After vertex transformation and clipping, the rasterizer takes each individual primitive (such as a line) and generates the corresponding fragments for it. Each fragment is identified by its integer position (x, y) in screen space.

The process by which a primitive is rasterized into a set of two-dimensional fragments: when a primitive is rendered, the rasterization stage usually produces far more fragments than the originally specified vertices. The rasterizer determines the position of each fragment according to where it lies on the primitive, and based on these positions it interpolates all of the fragment shader's input variables. For example, suppose we have a line whose upper endpoint is green and whose lower endpoint is blue. If a fragment lies 70% of the way along the line segment, its color input attribute will be a linear combination of green and blue; more precisely, 30% green + 70% blue.
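A sketch of that per-fragment linear interpolation, where t is the fragment's fractional position along the primitive (the type and function names are illustrative):

typedef struct { float r, g, b; } Color;

Color lerp_color(Color a, Color b, float t)
{
    Color c;
    c.r = (1.0f - t) * a.r + t * b.r;
    c.g = (1.0f - t) * a.g + t * b.g;
    c.b = (1.0f - t) * a.b + t * b.b;
    return c;
}

/* e.g. lerp_color(green, blue, 0.7f) yields 30% green + 70% blue,
   matching the line example above. */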

Reproduced from: https://juejin.im/post/5d0ae0b56fb9a07eca6980f3
