Computer Graphics and OpenGL C++ Version Study Notes Chapter 5 Texture Mapping


Texture mapping is a technique for overlaying an image onto a rasterized model surface. It is one of the most basic and important ways to add realism to a rendered scene.

5.1 Load texture image files

In order to effectively complete texture mapping in OpenGL/GLSL, the following different data sets and mechanisms need to be coordinated:

  • A texture object to hold the texture image (in this chapter we consider only 2D images);
  • A special uniform sampler variable so that the shaders can access the texture;
  • A buffer to hold the texture coordinates;
  • Vertex attributes for passing the texture coordinates down the pipeline;
  • A texture unit on the graphics card.

The texture image can be any image. It can be a picture of something man-made or naturally occurring, such as cloth, grass, or a planet's surface; it can also be a geometric pattern, such as the checkerboard pattern in Figure 5.1.

Figure 5.1 Using two different images to add texture to the same dolphin model [TU16]

In order for a texture image to be used in shaders in the OpenGL pipeline, we need to extract the colors from the image and put them into an OpenGL texture object (a built-in OpenGL structure used to hold texture images).

Many C++ libraries are available for reading and processing image files; common choices include CImg, Boost GIL, and Magick++. We chose to use SOIL2, a library designed specifically for OpenGL.

Usually the steps we take to load textures into OpenGL applications are:
(a) Use SOIL2 to instantiate OpenGL texture objects and read data from image files;
(b) Call glBindTexture() to make the newly created texture object active;
(c) Use glTexParameteri() function calls to adjust the texture settings.
The final result obtained is the integer ID of the currently available OpenGL texture object.

To create a texture object, first declare a variable of type GLuint. Next, call SOIL_load_OGL_texture() to actually generate the texture object. SOIL_load_OGL_texture() accepts an image filename as one of its parameters (its other parameters will be described later). These steps are implemented in the following function:
GLuint Utils::loadTexture(const char *texImagePath) {
    GLuint textureID;
    textureID = SOIL_load_OGL_texture(texImagePath,
        SOIL_LOAD_AUTO, SOIL_CREATE_NEW_ID, SOIL_FLAG_INVERT_Y);
    if (textureID == 0) cout << "could not find texture file " << texImagePath << endl;
    return textureID;
}

We will use this function frequently, so we add it to the Utils.cpp utility class. In this way, our C++ application only needs to call the above loadTexture() function to create the OpenGL texture object, as shown below.

GLuint myTexture = Utils::loadTexture("image.jpg");

Where image.jpg is the texture image file, and myTexture is the integer ID of the generated OpenGL texture object. A variety of image file types are supported here, including all of the ones listed earlier.

5.2 Texture coordinates

Now that we have a method for loading texture images into OpenGL, we need to specify how we want the texture to be applied to the object's rendering surface. We do this by specifying texture coordinates for each vertex in the model.

Texture coordinates are references to pixels in a texture image (usually 2D). The pixels in a texture image are called texels to distinguish them from the pixels rendered on the screen.

Texture coordinates are used to map points on a 3D model to positions in the texture. In addition to the (x,y,z) coordinates that position it in 3D space, each point on the model's surface has texture coordinates (s,t) that specify which texel in the texture image provides its color. In this way, the surface of the object is "painted" by the texture image. The orientation of the texture on the object's surface is determined by the texture coordinates assigned to the object's vertices.

To use texture mapping, you must provide texture coordinates for each vertex in the object to be textured. OpenGL will use these texture coordinates to look up the color of the referenced texel stored in the texture image to determine the color of each rasterized pixel in the model.

To ensure that every pixel in the rendered model is drawn with the appropriate texel from the texture image, the texture coordinates also need to be put into vertex attributes so that they, too, are interpolated by the rasterizer. In this way, the texture image is interpolated, or filled in, along with the model's vertices.


For each set of vertex coordinates (x,y,z) that passes through the vertex shader, there is a corresponding set of texture coordinates (s,t). Therefore, we set up two buffers: one for the vertices (3 components x, y, and z in each entry) and one for the corresponding texture coordinates (2 components s and t in each entry). In this way, each call to the vertex shader receives the data for one vertex, which now includes both its spatial coordinates and its corresponding texture coordinates.

2D texture coordinates are the most common (OpenGL does support some other dimensions, but we won't cover them in this chapter). The 2D texture image is set as a rectangle, with the position coordinates of the lower left corner being (0,0) and the position coordinates of the upper right corner being (1,1). Ideally, texture coordinates should be in the range [0...1].

Consider the example in Figure 5.2. Recall that the cube model is made of triangles. Our diagram highlights the 4 corners of one side of the cube, but remember that each square side of the cube requires two triangles. The texture coordinates for the 6 vertices that specify this side of the cube are listed alongside the 4 corners; the upper left and lower right corners each contain a pair of vertices. The texture image is also shown. The texture coordinates (given as s and t) map texels in the image onto rasterized pixels on the front face of the model. Note that all of the intermediate pixels between the vertices have been painted with texels interpolated from the interior of the image. This happens because the texture coordinates are sent to the fragment shader in vertex attributes and are therefore interpolated just like the vertices themselves.


Figure 5.2 Texture coordinates

Note in Figure 5.2 that the texture appears slightly distorted: this is because the aspect ratio of the texture image does not match the aspect ratio implied by the texture coordinates assigned to the cube face.

For simple models such as cubes or pyramids, selecting texture coordinates is relatively easy. But for more complex curved models with large numbers of triangles, determining them by hand is impractical. For curved geometric shapes such as a sphere or torus, texture coordinates can be computed algorithmically or mathematically. For models built with a modeling tool such as Maya [MA16] or Blender [BL16], those tools offer "UV mapping" features (outside the scope of this book) that make the task easier.

Let's go back to rendering our pyramid, only this time adding a texture with an image of the bricks. We need to specify:
(a) The integer ID referencing the texture image;
(b) The texture coordinates of the model vertices;
(c) Buffer to hold texture coordinates;
(d) Vertex attributes so that the vertex shader can receive and forward texture coordinates through the pipeline;
(e) The texture unit on the graphics card used to save texture objects;
(f) A uniform sampler variable used to access the texture unit in GLSL, which we will see shortly.
These items are described in the following sections.

5.3 Create texture objects

Assume that the texture image shown here (shown in Figure 5.3) is stored in a file named "brick1.jpg" [LU16].


Figure 5.3 Texture image

As shown before, we can load this image by calling the loadTexture() function as follows:
brickTexture = Utils::loadTexture("brick1.jpg");

Recall that texture objects are identified by integer IDs, so brickTexture is of type GLuint.

5.4 Constructing texture coordinates

Our pyramid has 4 triangular sides and a square base at the bottom. Although geometrically it only requires 5 points, we have to use triangles to render it. This requires 4 triangles for the sides and 2 triangles for the square base, for a total of 6 triangles. Each triangle has 3 vertices, for a total of 6×3 = 18 vertices that must be specified in the model.

We have listed the geometric vertices of the pyramid in the floating point array pyramidPositions[ ] in Program 4.3. There are many ways we can position the texture coordinates in order to draw the brick texture onto the pyramid. A simple (albeit imperfect) method is to make the top center of the image correspond to the spire of the pyramid, as shown in Figure 5.4.


Figure 5.4 The top center of the texture image corresponds to the spire of the pyramid

We can do this for all 4 triangular sides. We also need to texture the square base of the pyramid, which consists of 2 triangles. A simple and logical approach is to texture it with the entire image (the pyramid shown in Figure 5.5 has been tipped backward, with one side facing down).


Figure 5.5 Adding texture to the base of the pyramid

Using this very simple strategy for the first nine pyramid vertices in Program 4.3, the corresponding vertex and texture coordinate data sets are shown in Figure 5.6.


Figure 5.6 Texture coordinates of the pyramid (partial list)

5.5 Load texture coordinates into the buffer

We can load the texture coordinates into the VBO in a similar way to loading the vertices earlier. In setupVertices() we add the following texture coordinate value declaration:

float textureCoordinates[36] = {
    0.0f, 0.0f, 1.0f, 0.0f, 0.5f, 1.0f,
    0.0f, 0.0f, 1.0f, 0.0f, 0.5f, 1.0f,
    0.0f, 0.0f, 1.0f, 0.0f, 0.5f, 1.0f,
    0.0f, 0.0f, 1.0f, 0.0f, 0.5f, 1.0f,
    0.0f, 0.0f, 1.0f, 1.0f, 0.0f, 1.0f,
    1.0f, 1.0f, 0.0f, 0.0f, 1.0f, 0.0f
};

Then, after creating at least two VBOs (one for vertices and one for texture coordinates), we add the following line of code to load the texture coordinates into VBO #1:
glBindBuffer(GL_ARRAY_BUFFER, vbo[1]);
glBufferData(GL_ARRAY_BUFFER, sizeof(textureCoordinates), textureCoordinates, GL_STATIC_DRAW);

5.6 Using textures in shaders: sampler variables and texture units

To maximize performance, we want to perform texturing in hardware. This means our fragment shader needs a way to access the texture object we created in the C++/OpenGL application. That access is provided by a special GLSL tool called a uniform sampler variable, which tells the texture unit on the graphics card which texels to extract, or "sample", from the loaded texture object.

Declaring a sampler variable in a shader is easy - just add it alongside the other uniform variables:
layout (binding=0) uniform sampler2D samp;
The name of the variable we declare is "samp". The "layout (binding=0)" part of the declaration specifies that this sampler is associated with texture unit 0.

Texture units (and associated samplers) can be used to sample any texture object you wish, and can be changed at runtime. Your display() function needs to specify the texture object that the texture unit is to sample for the current frame. So every time you draw an object, you need to activate the texture unit and bind it to a specific texture object, for example:

glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, brickTexture);

The number of available texture units depends on the graphics card. According to the OpenGL API documentation, OpenGL version 4.5 requires at least 16 units per shader stage and at least 80 units in total across all stages [OP16]. In this example, we make the 0th texture unit active by specifying GL_TEXTURE0 in the glActiveTexture() call.

To actually perform texturing, we need to change how the fragment shader outputs colors. Previously, our fragment shaders either output a constant color or obtained a color from a vertex attribute. This time, instead, we sample the texture object using the interpolated texture coordinates received from the vertex shader (via the rasterizer), by calling the texture() function:

color = texture(samp, tc);

5.7 Texture Mapping: Sample Program

Program 5.1 combines the previously described steps into a single program. The output shows the pyramid mapped with a brick image texture, as shown in Figure 5.7. Two rotations (not shown in the code listing) are added to the pyramid's model matrix to expose the base of the pyramid.

Figure 5.7 Pyramid mapped using brick image texture

It is now a simple matter to replace the brick texture image with another texture image as needed by changing the filename in the loadTexture() call. For example, if we replace "brick1.jpg" with the image file "ice.jpg" [LU16], we get the result shown in Figure 5.8.

Figure 5.8 Pyramid after using "ice" image texture mapping

vertShader.glsl

#version 430

layout (location=0) in vec3 pos;
layout (location=1) in vec2 texCoord; // texture coordinates
out vec2 tc;

uniform mat4 mv_matrix;
uniform mat4 proj_matrix;
layout (binding=0) uniform sampler2D samp; // the sampler

void main(void)
{
    gl_Position = proj_matrix * mv_matrix * vec4(pos,1.0);
    tc = texCoord;
}

fragShader.glsl

#version 430

in vec2 tc;
out vec4 color;

uniform mat4 mv_matrix;
uniform mat4 proj_matrix;
layout (binding=0) uniform sampler2D samp; // the sampler

void main(void)
{
    color = texture(samp, tc); // sample the texture at the interpolated coordinates
}

Program 5.1 Pyramid with brick texture

#include <GL/glew.h>
#include <GLFW/glfw3.h>
#include <SOIL2/soil2.h>
#include <string>
#include <iostream>
#include <fstream>
#include <cmath>
#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp> // glm::value_ptr
#include <glm/gtc/matrix_transform.hpp> // glm::translate, glm::rotate, glm::scale, glm::perspective
#include "Utils.h"
using namespace std;

#define numVAOs 1
#define numVBOs 2

float cameraX, cameraY, cameraZ;
float pyrLocX, pyrLocY, pyrLocZ;
GLuint renderingProgram;
GLuint vao[numVAOs];
GLuint vbo[numVBOs];

// variable allocation for display
GLuint mvLoc, projLoc;
int width, height;
float aspect;
glm::mat4 pMat, vMat, mMat, mvMat;

GLuint brickTexture;

void setupVertices(void) {
    float pyramidPositions[54] = {
        -1.0f, -1.0f, 1.0f, 1.0f, -1.0f, 1.0f, 0.0f, 1.0f, 0.0f,   // front
        1.0f, -1.0f, 1.0f, 1.0f, -1.0f, -1.0f, 0.0f, 1.0f, 0.0f,   // right
        1.0f, -1.0f, -1.0f, -1.0f, -1.0f, -1.0f, 0.0f, 1.0f, 0.0f, // back
        -1.0f, -1.0f, -1.0f, -1.0f, -1.0f, 1.0f, 0.0f, 1.0f, 0.0f, // left
        -1.0f, -1.0f, -1.0f, 1.0f, -1.0f, 1.0f, -1.0f, -1.0f, 1.0f, // LF
        1.0f, -1.0f, 1.0f, -1.0f, -1.0f, -1.0f, 1.0f, -1.0f, -1.0f  // RR
    };
    float textureCoordinates[36] = {
        0.0f, 0.0f, 1.0f, 0.0f, 0.5f, 1.0f,
        0.0f, 0.0f, 1.0f, 0.0f, 0.5f, 1.0f,
        0.0f, 0.0f, 1.0f, 0.0f, 0.5f, 1.0f,
        0.0f, 0.0f, 1.0f, 0.0f, 0.5f, 1.0f,
        0.0f, 0.0f, 1.0f, 1.0f, 0.0f, 1.0f,
        1.0f, 1.0f, 0.0f, 0.0f, 1.0f, 0.0f
    }; // texture coordinates for the vertices
    glGenVertexArrays(1, vao);
    glBindVertexArray(vao[0]);
    glGenBuffers(numVBOs, vbo);

    glBindBuffer(GL_ARRAY_BUFFER, vbo[0]);
    glBufferData(GL_ARRAY_BUFFER, sizeof(pyramidPositions), pyramidPositions, GL_STATIC_DRAW);
    // load the texture coordinates into a buffer
    glBindBuffer(GL_ARRAY_BUFFER, vbo[1]);
    glBufferData(GL_ARRAY_BUFFER, sizeof(textureCoordinates), textureCoordinates, GL_STATIC_DRAW);
}

void init(GLFWwindow* window) {
	renderingProgram = Utils::createShaderProgram("vertShader.glsl", "fragShader.glsl");
	cameraX = 0.0f; cameraY = 0.0f; cameraZ = 4.0f;
	pyrLocX = 0.0f; pyrLocY = 0.0f; pyrLocZ = 0.0f;
	setupVertices();

	glfwGetFramebufferSize(window, &width, &height);
	aspect = (float)width / (float)height;
	pMat = glm::perspective(1.0472f, aspect, 0.1f, 1000.0f);

	brickTexture = Utils::loadTexture("brick1.jpg"); // load the texture object
	// SEE Utils.cpp, the "loadTexture()" function, the code before the mipmapping section
}

void display(GLFWwindow* window, double currentTime) {
	glClear(GL_DEPTH_BUFFER_BIT);
	glClearColor(0.0, 0.0, 0.0, 1.0);
	glClear(GL_COLOR_BUFFER_BIT);

	glUseProgram(renderingProgram);

	mvLoc = glGetUniformLocation(renderingProgram, "mv_matrix");
	projLoc = glGetUniformLocation(renderingProgram, "proj_matrix");

	vMat = glm::translate(glm::mat4(1.0f), glm::vec3(-cameraX, -cameraY, -cameraZ));

	mMat = glm::translate(glm::mat4(1.0f), glm::vec3(pyrLocX, pyrLocY, pyrLocZ));

	mMat = glm::rotate(mMat, -0.45f, glm::vec3(1.0f, 0.0f, 0.0f));
	mMat = glm::rotate(mMat,  0.61f, glm::vec3(0.0f, 1.0f, 0.0f));
	mMat = glm::rotate(mMat,  0.00f, glm::vec3(0.0f, 0.0f, 1.0f));

	mvMat = vMat * mMat;

	glUniformMatrix4fv(mvLoc, 1, GL_FALSE, glm::value_ptr(mvMat));
	glUniformMatrix4fv(projLoc, 1, GL_FALSE, glm::value_ptr(pMat));

	glBindBuffer(GL_ARRAY_BUFFER, vbo[0]);
	glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
	glEnableVertexAttribArray(0);

	glBindBuffer(GL_ARRAY_BUFFER, vbo[1]);
	glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 0, 0);
	glEnableVertexAttribArray(1);
	// activate texture unit 0 and bind it to a specific texture object
	glActiveTexture(GL_TEXTURE0);
	glBindTexture(GL_TEXTURE_2D, brickTexture);

	glEnable(GL_DEPTH_TEST);
	glDepthFunc(GL_LEQUAL);

	glDrawArrays(GL_TRIANGLES, 0, 18);
}

void window_size_callback(GLFWwindow* win, int newWidth, int newHeight) {
	aspect = (float)newWidth / (float)newHeight;
	glViewport(0, 0, newWidth, newHeight);
	pMat = glm::perspective(1.0472f, aspect, 0.1f, 1000.0f);
}

int main(void) {
	if (!glfwInit()) { exit(EXIT_FAILURE); }
	glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
	glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
	GLFWwindow* window = glfwCreateWindow(600, 600, "Chapter5 - program1", NULL, NULL);
	glfwMakeContextCurrent(window);
	if (glewInit() != GLEW_OK) { exit(EXIT_FAILURE); }
	glfwSwapInterval(1);

	glfwSetWindowSizeCallback(window, window_size_callback);

	init(window);

	while (!glfwWindowShouldClose(window)) {
		display(window, glfwGetTime());
		glfwSwapBuffers(window);
		glfwPollEvents();
	}

	glfwDestroyWindow(window);
	glfwTerminate();
	exit(EXIT_SUCCESS);
}

5.8 Multi-level progressive texture mapping

Texture mapping often produces various undesirable artifacts in rendered images. This is because the resolution or aspect ratio of the texture image rarely matches the resolution or aspect ratio of the area in the scene being texture mapped.

A very common artifact occurs when the image resolution is lower than the resolution of the area being drawn. In this case the image must be stretched to cover the area and becomes blurry (and possibly distorted). Depending on the nature of the texture, it is sometimes possible to combat this by changing the way texture coordinates are assigned so that the texture requires less stretching. Another solution is to use a higher-resolution texture image.

The opposite situation arises when the resolution of the texture image is greater than that of the area being drawn. It may not be obvious why this is a problem, but it is! In this case, noticeable aliasing artifacts can appear, producing strange false patterns, or a "flickering" effect on moving objects.

Aliasing is caused by sampling error. The term comes from signal processing, where an undersampled signal, when reconstructed, appears to have different properties (such as wavelength) than it actually has. An example is shown in Figure 5.9 (see the color insert). The original waveform is shown in red, and the yellow dots along it represent the sample points. If the sample points are used to reconstruct the waveform and the sampling frequency is insufficient, a different waveform is defined (shown in blue).

Figure 5.9 Aliasing caused by insufficient sampling

Similarly, in texture mapping, when a high-resolution (and highly detailed) image is sampled sparsely, the extracted colors do not adequately reflect the actual detail in the image and may instead appear random. If the texture image contains a repeating pattern, aliasing can produce a pattern different from the original. If the textured object is moving, rounding errors in the texel lookup can cause the sampled texel at a given texture coordinate to change from frame to frame, producing an unwanted flickering effect on the surface of the drawn object.

Figure 5.10 shows a close-up of an oblique rendering of the top of a cube that was textured using a large, high-resolution checkerboard image.


Figure 5.10 Aliasing in a texture map

Aliasing is clearly visible near the top of the image, where undersampling of the checkerboard creates a "streaking" effect. Although we can't show it in a still image, in an animated scene the pattern would likely fluctuate among various incorrect patterns (including the one pictured).

Another example is shown in Figure 5.11, where a cube has been textured using an image of the lunar surface [HT16]. At first glance, the image appears sharp and detailed. However, some details in the upper right part of the image are wrong and cause "flickering" when the cube object (or camera) moves. (Unfortunately, we couldn't clearly show the flicker effect in the still image.)

Figure 5.11 "Flicker" in texture mapping

Sampling-error artifacts of this kind can be corrected to a large extent using multi-level progressive texture mapping (mipmapping), a technique that creates different versions of the texture image at various resolutions. OpenGL then textures using the version best suited to the resolution of the region being processed, and can even use an average of the colors at the resolutions closest to that region. The results of applying mipmapping to the images of Figure 5.10 and Figure 5.11 are shown in Figure 5.12.

Figure 5.12 Multi-level progressive texture mapping results

Mipmapping works through a clever mechanism that stores a succession of lower-resolution copies of the same image inside a texture image only 1/3 larger than the original. This is achieved by storing the R, G, and B components of the image in three quarters of the texture image space, then repeating the same image at 1/4 of the original resolution in the remaining quarter. This subdivision is repeated until the remaining quadrant is too small to contain any useful image data. A sample image and a visualization of the resulting mipmap are shown in Figure 5.13 (see the color insert).

Figure 5.13 An image and its generated multi-level progressive texture (mipmap)

This method of packing several images into a small space (only slightly larger than the space needed to store the original image) is how mipmapping got its name. MIP stands for the Latin multum in parvo [WI83], meaning "much in a small space".


When actually texturing an object, the mipmap can be sampled in a variety of ways. In OpenGL, you choose how the mipmap is sampled by setting the GL_TEXTURE_MIN_FILTER parameter to the desired minification technique, one of the following.

  • GL_NEAREST_MIPMAP_NEAREST: chooses the mipmap level whose resolution is closest to that of the texel area, then fetches the texel nearest to the desired texture coordinate.
  • GL_LINEAR_MIPMAP_NEAREST: chooses the mipmap level whose resolution is closest to that of the texel area, then interpolates the 4 texels closest to the texture coordinate. This is called "linear filtering".
  • GL_NEAREST_MIPMAP_LINEAR: chooses the 2 mipmap levels whose resolutions are closest to that of the texel area, fetches the texel nearest to the texture coordinate from each, and interpolates between them. This is called "bilinear filtering".
  • GL_LINEAR_MIPMAP_LINEAR: chooses the 2 mipmap levels whose resolutions are closest to that of the texel area, takes the 4 texels closest to the texture coordinate from each, and interpolates. This is called "trilinear filtering" and is shown in Figure 5.15.

Trilinear filtering is usually the better choice, because the lower blending levels often produce artifacts, such as visible separations between mipmap levels. Figure 5.14 shows a close-up of a checkerboard rendered with mipmapping but only linear filtering enabled. Note how the vertical lines change abruptly from thick to thin at the mipmap-level boundaries (the circled locations in the image). In contrast, the example in Figure 5.15 uses trilinear filtering.


Figure 5.14 Linear filtering artifacts


Figure 5.15 Trilinear filtering

OpenGL offers rich support for mipmapping: there are mechanisms for building your own mipmap levels, or for letting OpenGL build them for you. In most cases, the mipmaps OpenGL builds automatically are sufficient. This is done by adding the following code to the Utils::loadTexture() function (described earlier in Section 5.1), so that it executes immediately after the texture object is created:

glBindTexture(GL_TEXTURE_2D, textureID);
glGenerateMipmap(GL_TEXTURE_2D);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);

This tells OpenGL to generate the mipmap. The glBindTexture() call makes the texture active, and the glTexParameteri() call then enables one of the minification techniques listed earlier, in this case GL_LINEAR_MIPMAP_LINEAR (trilinear filtering).

After the mipmap is built, the filtering option can be changed (although this is rarely necessary) by calling glTexParameteri() again, for example in the display() function. Mipmapping can even be disabled by selecting GL_NEAREST or GL_LINEAR.

For critical applications, you can build the mipmap levels yourself with any image-editing software you like, then add each one to the texture object by repeatedly calling OpenGL's glTexImage2D() function, once per mipmap level. Further discussion of this approach is beyond the scope of this book.

5.9 Anisotropic filtering

Mipmapped textures can sometimes look blurrier than non-mipmapped textures, especially when the textured object is rendered at a severely oblique angle. We saw an example of this in Figure 5.12, where mipmapping reduced the artifacts but also reduced the image detail (compare with Figure 5.11).

This loss of detail occurs because when an object is tilted, its primitives appear smaller along one axis (width or height) than along the other. When OpenGL textures a primitive, it selects the mipmap level that fits the smaller of the two axes (to avoid "flickering" artifacts).

In Figure 5.12, the surface is tilted away from the viewer, so each rendered primitive uses the mipmap level that fits its smaller height, which may be too low a resolution for its width.

One way to recover some of the lost detail is anisotropic filtering (AF). Standard mipmapping samples the texture image at various square resolutions (e.g., 256 pixels × 256 pixels, 128 pixels × 128 pixels, and so on), whereas AF also samples at various rectangular resolutions, such as 256 pixels × 128 pixels or 64 pixels × 128 pixels. This retains as much texture detail as possible when surfaces are viewed at oblique angles.

Anisotropic filtering is more computationally expensive than standard mipmapping and is not a required part of OpenGL. However, most graphics cards support AF (it is available as an OpenGL extension), and OpenGL provides ways both to query whether the card supports AF and to access it. The code is added immediately after the mipmap is generated:
if (glewIsSupported("GL_EXT_texture_filter_anisotropic")) {
    GLfloat anisoSetting = 0.0f;
    glGetFloatv(GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT, &anisoSetting);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, anisoSetting);
}

The glewIsSupported() call tests whether the graphics card supports AF. If it does, we obtain the maximum supported sampling level with glGetFloatv() and apply it to the active texture object with glTexParameterf(). The result is shown in Figure 5.16; note that most of the detail from Figure 5.11 has been recovered, while the flickering artifacts are still eliminated.

Figure 5.16 Anisotropic filtering

5.10 Wrap and Tile

So far, we have assumed texture coordinates in the range [0…1], but OpenGL actually supports arbitrary texture-coordinate values. There are several options for specifying what happens when texture coordinates fall outside [0…1]; the desired behavior is set with glTexParameteri(), using the following options.

  • GL_REPEAT: Ignore the integer part of texture coordinates, producing a repeating or "tiled" pattern. This is the default behavior.
  • GL_MIRRORED_REPEAT: The integer part is ignored, but the coordinates are reversed when the integer part is odd, so the repeated pattern alternates between normal and mirrored.
  • GL_CLAMP_TO_EDGE: Coordinates less than 0 or greater than 1 are set to 0 and 1 respectively.
  • GL_CLAMP_TO_BORDER: texels sampled outside [0…1] are given a specified border color.

For example, consider a pyramid whose texture coordinates have been defined in the range [0…5] instead of the usual range [0…1]. The default behavior (GL_REPEAT), using the texture image shown earlier in Figure 5.2, will cause the texture to be repeated five times (sometimes called "tiled") across the surface, as shown in Figure 5.17.

Figure 5.17 Texture coordinates wrapped using GL_REPEAT

To make the appearance of the tiles alternate between normal and mirrored, we can specify the following:

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_MIRRORED_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_MIRRORED_REPEAT);

By replacing GL_MIRRORED_REPEAT with GL_CLAMP_TO_EDGE, you can specify that values less than 0 or greater than 1 are set to 0 and 1, respectively.

You can specify the border color to output for values less than 0 or greater than 1 as follows:

float borderColor[4] = { 1.0f, 0.0f, 0.0f, 1.0f }; // border color (an example value)
glTexParameterfv(GL_TEXTURE_2D, GL_TEXTURE_BORDER_COLOR, borderColor);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_BORDER);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_BORDER);
Figure 5.18 (see the color insert) shows the effect of each option (mirrored repeat, clamp to edge, and clamp to border, from left to right), with texture coordinates ranging from −2 to +3.


Figure 5.18 Pyramid texture-mapped with different wrapping options

In the middle example (clamped to edge), pixels along the edge of the texture image are copied outward. Note that, as a side effect, the lower left and lower right areas of the pyramid face get their colors from the lower left and lower right pixels of the texture image, respectively.

5.11 Perspective distortion

We've seen that as texture coordinates pass from the vertex shader to the fragment shader, they are interpolated by the rasterizer. We have also seen that this is a consequence of the automatic linear interpolation that is always performed on vertex attributes.

However, in the case of texture coordinates, linear interpolation can lead to perceptible distortion in 3D scenes with perspective projection.

Consider a rectangle made of two triangles, texture-mapped with a checkerboard image and facing the camera. As the rectangle is rotated about the x axis, its top half tilts away from the camera while its bottom half swings closer. We therefore expect the squares in the top half to become smaller and those in the bottom half to become larger. However, linear interpolation of the texture coordinates makes all the squares equal in height, and the distortion is exacerbated along the diagonal between the two triangles that make up the rectangle. The resulting distortion is shown in Figure 5.19.

Figure 5.19 Texture perspective distortion

Fortunately, algorithms exist for correcting perspective distortion, and by default OpenGL applies perspective correction algorithms during rasterization [OP14]. Figure 5.20 shows the same rotating chessboard rendered correctly by OpenGL.


Figure 5.20 OpenGL perspective correction

Although uncommon, it is possible to disable OpenGL's perspective correction by adding the keyword "noperspective" to the declaration of a vertex attribute containing texture coordinates. This must be added in both the vertex shader and the fragment shader. For example, a vertex attribute in a vertex shader would be declared as follows:

noperspective out vec2 texCoord;

Corresponding property declaration in fragment shader:

noperspective in vec2 texCoord;

In fact, we used this syntax to generate the distorted checkerboard in Figure 5.19.

5.12 Loading texture image files - more OpenGL details

The SOIL2 texture-image loading library used in this book has the advantage of being relatively simple and intuitive to use. However, when learning OpenGL, using SOIL2 has an undesirable consequence: the user is not exposed to some useful and important OpenGL details. In this section, we describe some of the details a programmer needs to know to load and use textures without a texture-loading library such as SOIL2.

Texture image file data can be loaded directly into OpenGL using C++ and OpenGL functions. While it's a bit complicated, it's not uncommon. The general steps are as follows.

(1) Use C++ tools to read image files.

(2) Generate an OpenGL texture object.

(3) Copy the image file data to the texture object.

We won't describe the first step in detail - there are too many ways to do it. One method, well described at opengl-tutorials.org (the specific tutorial page is [OT18]), uses the C++ functions fopen() and fread() to read the data from a .bmp image file into an array of unsigned char.

Steps 2 and 3 are more general and mainly involve OpenGL calls. In step 2, we create one or more texture objects using OpenGL's glGenTextures() command. For example, generating a single OpenGL texture object (using an integer reference ID) can be done as follows:

GLuint textureID; // or an array of GLuint, if more than one texture object is needed
glGenTextures(1, &textureID);

In step 3, we associate the image data from step 1 to the texture object created in step 2. This is done using OpenGL's glTexImage2D() command. The following example loads image data from the unsigned char array described in step 1 (here represented as "data") into the texture object created in step 2:

glBindTexture(GL_TEXTURE_2D, textureID);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_BGR, GL_UNSIGNED_BYTE, data);

At this point, the various glTexParameteri() calls described earlier in this chapter (for setting up mipmapping and so on) can be applied to the texture object, and the integer reference (textureID) is used exactly as described throughout this chapter.


Origin blog.csdn.net/weixin_44848751/article/details/130877661