Computer Graphics and OpenGL (C++ Edition) Study Notes - Chapter 6: 3D Models

So far we have only dealt with very simple 3D objects such as cubes and pyramids. These objects are so simple that we are able to explicitly list all the vertex information in the source code and put it directly into the buffer.

However, most interesting 3D scenes include objects that are too complex for us to keep building by hand as we have so far. In this chapter, we'll explore more complex object models: how to build them and how to load them into a scene. (See also the introductory material on implicit versus explicit geometry.)

3D modeling itself is a vast field, and what we talk about here is necessarily very limited. We will focus on the following two topics:

  • Building models procedurally;
  • Loading externally created models.

While this only touches on a very shallow part of the rich world of 3D modeling, it will allow us to include a variety of complex and realistically detailed objects in our scenes.

6.1 Procedural model building - building a sphere

Certain types of objects, such as spheres and cones, have mathematical definitions that lend themselves to algorithmic generation. For example, for a circle of radius R, the coordinates of the points around its circumference are well defined (see Figure 6.1).


Figure 6.1 Points forming the circumference of a circle
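
As a reminder of the underlying geometry: the points around a circle of radius R centered at the origin are (R cos θ, R sin θ) for θ from 0 to 2π, so stepping θ in equal increments yields evenly spaced vertices along the circumference.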

We can systematically use knowledge of circle geometry to algorithmically model spheres. The strategy is as follows.

  1. Choose a precision, which determines a series of circular "horizontal slices" through the sphere. See the left side of Figure 6.2.
  2. Subdivide the circumference of each circular slice into points. See the right side of Figure 6.2. More points and more horizontal slices produce a more accurate, smoother sphere model. In our model, every slice has the same number of points.


Figure 6.2 Constructing circular vertices

  3. Group the vertices into triangles. One approach is to step through the vertices, building two triangles at each step. For example, as we move along the row of 5 colored vertices on the sphere in Figure 6.3, for each of those 5 vertices we construct the two triangles shown in the corresponding color (see the color inset; these steps are described in more detail below).


Figure 6.3 Combining vertices into triangles

  4. Select texture coordinates based on the nature of the texture image. In the case of a sphere, there are many available terrain texture images. Suppose we choose one; imagine the image "wrapped" around the sphere. We can then assign each vertex the texture coordinates of the texel it ends up corresponding to in the image.
  5. For each vertex, we usually also want to generate a normal vector - a vector perpendicular to the model's surface. We will use normals for lighting in Chapter 7.

Determining normal vectors can be tricky, but in the case of a sphere, the vector pointing from the center of the sphere to a vertex happens to equal that vertex's normal vector! Figure 6.4 illustrates this property (the center of the sphere is marked with a "star").

Figure 6.4 Sphere vertex normal vector
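
In code this property is a one-liner. A minimal sketch, assuming GLM and a sphere centered at the origin (vertexPos is just a placeholder name):

	// the outward normal at a vertex of an origin-centered sphere is the
	// normalized vertex position; for a unit sphere it equals the position itself
	glm::vec3 normal = glm::normalize(vertexPos);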

Some models use indices to define triangles. Note that in Figure 6.3, each vertex appears in multiple triangles, which would cause each vertex to be specified multiple times. Rather than doing that, we store each vertex once and then assign each triangle corner an index referencing the desired vertex. Since we need to store a position, texture coordinates, and a normal vector for each vertex, doing this can save memory for large models.
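
To see the savings, note that each vertex here carries 8 floats (3 position + 2 texture + 3 normal), or 32 bytes, while an index entry is a single 4-byte integer; for a vertex shared by, say, 6 triangles, indexing replaces five redundant 32-byte copies with six 4-byte references.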

Vertices are stored in a one-dimensional array, starting with the vertices in the bottom horizontal slice. When indexing is used, the associated index array includes one entry per triangle corner; its contents are integer references (specifically, subscripts) into the vertex array. Assuming each slice contains n vertices, a portion of the vertex array and the corresponding index array is shown in Figure 6.5.

Figure 6.5 Vertex array and corresponding index array

We can then start at the bottom of the sphere and traverse the vertices circularly around each horizontal slice. As we visit each vertex, we construct two triangles forming a square region above and to the right of it, as shown in Figure 6.3. We organize the whole process into nested loops, as sketched below.

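A sketch of those nested loops, matching the index computation in the Sphere class of Program 6.1 (each slice holds n = prec+1 vertices):

	int n = prec + 1;                      // vertices per slice, as in the Sphere class
	for (int i = 0; i < prec; i++) {       // for each horizontal slice
		for (int j = 0; j < prec; j++) {   // for each vertex in the slice
			// build two triangles from vertex i*n+j and its neighbors
			// i*n+j+1, (i+1)*n+j, and (i+1)*n+j+1 (see Figure 6.6)
		}
	}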

For example, consider the "red" vertex in Figure 6.3 (shown again in Figure 6.6). This vertex sits at the lower left of the yellow triangles shown in Figure 6.6. Following the loops just described, its index number is i*n+j, where i is the slice currently being processed (the outer loop), j is the vertex currently being processed within that slice (the inner loop), and n is the number of vertices per slice. Figure 6.6 shows this vertex (in red) along with its three relevant neighboring vertices (see the color inset). Each vertex is annotated with the formula for its index number.


Figure 6.6 The index number of the j-th vertex in the i-th slice (n = the number of vertices in each slice)

These 4 vertices are then used to build the two triangles (shown in yellow) generated for this (red) vertex. The six entries these two triangles contribute to the index table are labeled 1 through 6 in the figure. Note that entries 3 and 6 refer to the same vertex, as do entries 2 and 4. When we reach the vertex highlighted in red (i.e., vertex[i*n+j]), the two triangles are defined from these entries - one triangle from the entries labeled 1, 2, and 3, referencing vertex[i*n+j], vertex[i*n+j+1], and vertex[(i+1)*n+j]; the other from the entries labeled 4, 5, and 6, referencing vertex[i*n+j+1], vertex[(i+1)*n+j+1], and vertex[(i+1)*n+j].

Program 6.1 shows the implementation of our sphere model as a class named Sphere. The resulting sphere is centered at the origin. Code that uses Sphere is also shown. Note that the vertices are stored in C++ vectors of instances of the GLM classes vec2 and vec3 (unlike the previous examples, where vertices were stored in float arrays). vec2 and vec3 provide access to the required x, y, and z float components, which can then be placed into float buffers as described previously. We store these values in variable-length C++ vectors because the total length depends on the precision specified at runtime.

Note the computation of the triangle indices in the Sphere class, as described earlier for Figure 6.6. The variable prec ("precision") determines both the number of slices and the number of vertices per slice. Because the texture map wraps all the way around the sphere, an extra coincident vertex is needed at each point where the left and right edges of the texture map meet. The total number of vertices is therefore (prec+1) * (prec+1). Since each of the prec * prec grid squares generates 6 triangle indices, the total number of indices is prec * prec * 6.
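
For example, with the prec = 48 used below, the sphere has 49 * 49 = 2401 vertices and 48 * 48 * 6 = 13824 indices.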

Program 6.1 Procedurally generated sphere

vertShader.glsl

#version 430

layout (location = 0) in vec3 position;
layout (location = 1) in vec2 tex_coord;
out vec2 tc;

uniform mat4 mv_matrix;
uniform mat4 proj_matrix;
layout (binding=0) uniform sampler2D s;

void main(void)
{
	gl_Position = proj_matrix * mv_matrix * vec4(position,1.0);
	tc = tex_coord;
}

fragShader.glsl

#version 430

in vec2 tc;
out vec4 color;

uniform mat4 mv_matrix;
uniform mat4 proj_matrix;
layout (binding=0) uniform sampler2D s;

void main(void)
{
	color = texture(s,tc);
}

Sphere class (Sphere.cpp)

#include <cmath>
#include <vector>
#include <iostream>
#include <glm\glm.hpp>
#include "Sphere.h"
using namespace std;

Sphere::Sphere() {
	init(48);
}

Sphere::Sphere(int prec) {
	init(prec);
}

float Sphere::toRadians(float degrees) { return (degrees * 2.0f * 3.14159f) / 360.0f; }

void Sphere::init(int prec) {
	numVertices = (prec + 1) * (prec + 1);
	numIndices = prec * prec * 6;
	for (int i = 0; i < numVertices; i++) { vertices.push_back(glm::vec3()); }
	for (int i = 0; i < numVertices; i++) { texCoords.push_back(glm::vec2()); }
	for (int i = 0; i < numVertices; i++) { normals.push_back(glm::vec3()); }
	for (int i = 0; i < numVertices; i++) { tangents.push_back(glm::vec3()); }
	for (int i = 0; i < numIndices; i++) { indices.push_back(0); }

	// compute the triangle vertices
	for (int i = 0; i <= prec; i++) {
		for (int j = 0; j <= prec; j++) {
			float y = (float)cos(toRadians(180.0f - i * 180.0f / prec));
			float x = -(float)cos(toRadians(j*360.0f / prec))*(float)abs(cos(asin(y)));
			float z = (float)sin(toRadians(j*360.0f / (float)(prec)))*(float)abs(cos(asin(y)));
			vertices[i*(prec + 1) + j] = glm::vec3(x, y, z);
			texCoords[i*(prec + 1) + j] = glm::vec2(((float)j / prec), ((float)i / prec));
			normals[i*(prec + 1) + j] = glm::vec3(x, y, z);

			// compute the tangent vector
			if (((x == 0) && (y == 1) && (z == 0)) || ((x == 0) && (y == -1) && (z == 0))) {
				// at the poles the cross product below is undefined, so pick -Z
				tangents[i*(prec + 1) + j] = glm::vec3(0.0f, 0.0f, -1.0f);
			}
			else {
				tangents[i*(prec + 1) + j] = glm::cross(glm::vec3(0.0f, 1.0f, 0.0f), glm::vec3(x, y, z));
			}
		}
	}
	// compute the triangle indices
	for (int i = 0; i < prec; i++) {
		for (int j = 0; j < prec; j++) {
			indices[6 * (i*prec + j) + 0] = i*(prec + 1) + j;
			indices[6 * (i*prec + j) + 1] = i*(prec + 1) + j + 1;
			indices[6 * (i*prec + j) + 2] = (i + 1)*(prec + 1) + j;
			indices[6 * (i*prec + j) + 3] = i*(prec + 1) + j + 1;
			indices[6 * (i*prec + j) + 4] = (i + 1)*(prec + 1) + j + 1;
			indices[6 * (i*prec + j) + 5] = (i + 1)*(prec + 1) + j;
		}
	}
}

int Sphere::getNumVertices() { return numVertices; }
int Sphere::getNumIndices() { return numIndices; }
std::vector<int> Sphere::getIndices() { return indices; }
std::vector<glm::vec3> Sphere::getVertices() { return vertices; }
std::vector<glm::vec2> Sphere::getTexCoords() { return texCoords; }
std::vector<glm::vec3> Sphere::getNormals() { return normals; }
std::vector<glm::vec3> Sphere::getTangents() { return tangents; }

Sphere class (Sphere.h)

#include <cmath>
#include <vector>
#include <glm\glm.hpp>
class Sphere
{
private:
	int numVertices;
	int numIndices;
	std::vector<int> indices;
	std::vector<glm::vec3> vertices;
	std::vector<glm::vec2> texCoords;
	std::vector<glm::vec3> normals;
	std::vector<glm::vec3> tangents;
	void init(int);
	float toRadians(float degrees);

public:
	Sphere();
	Sphere(int prec);
	int getNumVertices();
	int getNumIndices();
	std::vector<int> getIndices();
	std::vector<glm::vec3> getVertices();
	std::vector<glm::vec2> getTexCoords();
	std::vector<glm::vec3> getNormals();
	std::vector<glm::vec3> getTangents();
};

main.cpp

#include <GL\glew.h>
#include <GLFW\glfw3.h>
#include <SOIL2\soil2.h>
#include <string>
#include <iostream>
#include <fstream>
#include <cmath>
#include <glm\glm.hpp>
#include <glm\gtc\type_ptr.hpp> // glm::value_ptr
#include <glm\gtc\matrix_transform.hpp> // glm::translate, glm::rotate, glm::scale, glm::perspective
#include "Sphere.h"
#include "Utils.h"
using namespace std;

#define numVAOs 1
#define numVBOs 3

float cameraX, cameraY, cameraZ;
float sphLocX, sphLocY, sphLocZ;
GLuint renderingProgram;
GLuint vao[numVAOs];
GLuint vbo[numVBOs];
GLuint earthTexture;
float rotAmt = 0.0f;

// variable allocation for display
GLuint mvLoc, projLoc;
int width, height;
float aspect;
glm::mat4 pMat, vMat, mMat, mvMat;

Sphere mySphere = Sphere(48); // prec = 48: the number of slices and of vertices per slice; total vertices = (48+1)*(48+1), total indices = 48*48*6

void setupVertices(void) {
	std::vector<int> ind = mySphere.getIndices();
	std::vector<glm::vec3> vert = mySphere.getVertices();
	std::vector<glm::vec2> tex = mySphere.getTexCoords();
	std::vector<glm::vec3> norm = mySphere.getNormals();

	std::vector<float> pvalues;
	std::vector<float> tvalues;
	std::vector<float> nvalues;

	int numIndices = mySphere.getNumIndices();
	for (int i = 0; i < numIndices; i++) {
		pvalues.push_back((vert[ind[i]]).x);
		pvalues.push_back((vert[ind[i]]).y);
		pvalues.push_back((vert[ind[i]]).z);
		tvalues.push_back((tex[ind[i]]).s);
		tvalues.push_back((tex[ind[i]]).t);
		nvalues.push_back((norm[ind[i]]).x);
		nvalues.push_back((norm[ind[i]]).y);
		nvalues.push_back((norm[ind[i]]).z);
	}

	glGenVertexArrays(1, vao);
	glBindVertexArray(vao[0]);
	glGenBuffers(numVBOs, vbo);

	glBindBuffer(GL_ARRAY_BUFFER, vbo[0]);
	glBufferData(GL_ARRAY_BUFFER, pvalues.size()*4, &pvalues[0], GL_STATIC_DRAW);

	glBindBuffer(GL_ARRAY_BUFFER, vbo[1]);
	glBufferData(GL_ARRAY_BUFFER, tvalues.size()*4, &tvalues[0], GL_STATIC_DRAW);

	glBindBuffer(GL_ARRAY_BUFFER, vbo[2]);
	glBufferData(GL_ARRAY_BUFFER, nvalues.size()*4, &nvalues[0], GL_STATIC_DRAW);
}

void init(GLFWwindow* window) {
	renderingProgram = Utils::createShaderProgram("vertShader.glsl", "fragShader.glsl");
	cameraX = 0.0f; cameraY = 0.0f; cameraZ = 2.0f;
	sphLocX = 0.0f; sphLocY = 0.0f; sphLocZ = -1.0f;

	glfwGetFramebufferSize(window, &width, &height);
	aspect = (float)width / (float)height;
	pMat = glm::perspective(1.0472f, aspect, 0.1f, 1000.0f);

	setupVertices();
	earthTexture = Utils::loadTexture("earth.jpg");
}

void display(GLFWwindow* window, double currentTime) {
	glClear(GL_DEPTH_BUFFER_BIT);
	glClearColor(0.0, 0.0, 0.0, 1.0);
	glClear(GL_COLOR_BUFFER_BIT);

	glUseProgram(renderingProgram);

	mvLoc = glGetUniformLocation(renderingProgram, "mv_matrix");
	projLoc = glGetUniformLocation(renderingProgram, "proj_matrix");

	vMat = glm::translate(glm::mat4(1.0f), glm::vec3(-cameraX, -cameraY, -cameraZ));
	mMat = glm::translate(glm::mat4(1.0f), glm::vec3(sphLocX, sphLocY, sphLocZ));
	mvMat = vMat * mMat;

	glUniformMatrix4fv(mvLoc, 1, GL_FALSE, glm::value_ptr(mvMat));
	glUniformMatrix4fv(projLoc, 1, GL_FALSE, glm::value_ptr(pMat));

	glBindBuffer(GL_ARRAY_BUFFER, vbo[0]);
	glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
	glEnableVertexAttribArray(0);

	glBindBuffer(GL_ARRAY_BUFFER, vbo[1]);
	glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 0, 0);
	glEnableVertexAttribArray(1);

	glActiveTexture(GL_TEXTURE0);
	glBindTexture(GL_TEXTURE_2D, earthTexture);

	glEnable(GL_CULL_FACE);
	glFrontFace(GL_CCW);

	glDrawArrays(GL_TRIANGLES, 0, mySphere.getNumIndices());
}

void window_size_callback(GLFWwindow* win, int newWidth, int newHeight) {
	aspect = (float)newWidth / (float)newHeight;
	glViewport(0, 0, newWidth, newHeight);
	pMat = glm::perspective(1.0472f, aspect, 0.1f, 1000.0f);
}

int main(void) {
	if (!glfwInit()) { exit(EXIT_FAILURE); }
	glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
	glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
	GLFWwindow* window = glfwCreateWindow(600, 600, "Chapter6 - program1", NULL, NULL);
	glfwMakeContextCurrent(window);
	if (glewInit() != GLEW_OK) { exit(EXIT_FAILURE); }
	glfwSwapInterval(1);

	glfwSetWindowSizeCallback(window, window_size_callback);

	init(window);

	while (!glfwWindowShouldClose(window)) {
		display(window, glfwGetTime());
		glfwSwapBuffers(window);
		glfwPollEvents();
	}

	glfwDestroyWindow(window);
	glfwTerminate();
	exit(EXIT_SUCCESS);
}

When using the Sphere class, three values are needed for each vertex position and normal vector, but only two for each texture coordinate. This is reflected in the declarations of the vectors (vertices, texCoords, and normals) in the Sphere.h file, from which the data is later loaded into buffers.

It is worth noting that although indices are used while constructing the sphere, the final sphere vertex data stored in the VBOs does not use indexing. Instead, as setupVertices() loops through the sphere's indices, it generates a separate (often redundant) vertex entry in the VBO for each index entry. OpenGL does have a mechanism for indexed vertex data; for simplicity we don't use it in this example, but we will use OpenGL's indexing in the next one.

Many other models, from geometric shapes to real-world objects, can be created procedurally. One of the most famous is the "Utah Teapot" [CH16], developed in 1975 by Martin Newell using a variety of Bezier curves and surfaces. The OpenGL Utility Toolkit ("GLUT") [GL16] even includes routines for drawing teapots (see Figure 6.7). We don't cover GLUT in this book, but Bezier surfaces are covered in Chapter 11.


Figure 6.7 OpenGL GLUT teapot

6.2 OpenGL indexing - building a torus

6.2.1 Torus

Algorithms for generating a torus can be found on various websites. Paul Baker gives a step-by-step description that defines a circular slice and then rotates the slice around a circle to form the torus [PP07]. Figure 6.8 shows side and top views.

Figure 6.8 Constructing a torus

The way the vertex positions of a torus are generated differs considerably from the approach used for the sphere. For a torus, the algorithm positions a vertex to the right of the origin and then rotates that vertex in a circle on the XY plane, around the Z axis, to form a "ring". The ring is then moved "outward" by the inner-radius distance. As these vertices are built, texture coordinates and normal vectors are computed for each vertex. A vector tangent to the torus surface (a tangent vector) is additionally generated for each vertex.

The vertices of the remaining rings that form the torus are produced by rotating the initial ring around the Y axis. The tangent and normal vectors of the original ring are rotated the same way to compute the tangent and normal of each resulting vertex. After the vertices are created, all of them are traversed ring by ring, and for each vertex two triangles are generated. The six index-table entries for the two triangles are produced in a manner similar to the sphere.

Our strategy for assigning texture coordinates to the remaining rings arranges the S axis of the texture image around half of the horizontal circumference of the torus, then repeats it for the other half. As we generate the rings by rotating around the Y axis, we keep a variable ring that starts at 1 and increases up to the specified precision (again called "prec"). We then set the S texture coordinate to ring*2.0/prec, so that S ranges from 0.0 to 2.0, and subtract 1.0 whenever the coordinate exceeds 1.0. The motivation is to avoid excessive horizontal "stretching" of the texture image. Conversely, if we do want the texture to stretch all the way around the torus, we simply remove the "*2.0" factor from the computation, as sketched below.
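
A minimal sketch of that wrap-around logic, mirroring the texture-coordinate code in the Torus class of Program 6.2:

	float s = (float)ring * 2.0f / (float)prec; // S spans 0.0 .. 2.0 across the rings
	if (s > 1.0f) s -= 1.0f;                    // repeat the image over the second half
	// to stretch the texture once around the whole torus instead, use:
	// float s = (float)ring / (float)prec;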

A Torus class in C++/OpenGL can be built in much the same way as the Sphere class. This time, however, we take advantage of OpenGL's support for vertex indexing, using the indices we create while building the torus (we could have done this for the sphere too, but didn't). For very large models with thousands of vertices, using OpenGL indexing can improve performance, so we describe how to do it here.

6.2.2 Indexing in OpenGL

For both our sphere and torus models we generate an array of integer indices referencing the vertex array. In the sphere's case, we used the index list to build a complete set of individual vertices and loaded them into a VBO, just as in the examples of the previous chapter. Instantiating the torus and loading its vertices, normals, and so on into buffers could be done the same way as in Program 6.1, but instead we will use OpenGL's indexing.

When using OpenGL indexing, we also need to load the indices themselves into a VBO, so we generate one additional VBO to hold them. Since each index value is just an integer reference, we first copy the index array into a C++ vector of integers and then use glBufferData() to load the vector into the added VBO, specifying the VBO's type as GL_ELEMENT_ARRAY_BUFFER (which tells OpenGL that this VBO contains indices). The code to do this can be added to setupVertices():
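
(The following lines are taken from setupVertices() in Program 6.2 below.)

	glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vbo[3]);
	glBufferData(GL_ELEMENT_ARRAY_BUFFER, ind.size() * 4, &ind[0], GL_STATIC_DRAW);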

In the display() method, we replace the glDrawArrays() call with a call to glDrawElements(), which tells OpenGL to use the index VBO to look up the vertices to draw. We also bind the VBO containing the indices using glBindBuffer(), specifying which VBO holds the indices and that it is of type GL_ELEMENT_ARRAY_BUFFER. The code is as follows:

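(Likewise taken from display() in Program 6.2 below.)

	glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vbo[3]);
	glDrawElements(GL_TRIANGLES, myTorus.getIndices().size(), GL_UNSIGNED_INT, 0);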

Interestingly, even though we changed the C++/OpenGL application to use indexing, the shaders used to draw the sphere continue to work for the torus without modification. OpenGL recognizes the presence of the GL_ELEMENT_ARRAY_BUFFER and uses it to access the vertex attributes.

Program 6.2 shows a class called Torus implemented following Baker. The inner and outer variables refer to the corresponding inner and outer radii in Figure 6.8. The prec variable plays a role similar to the sphere's, with similar computations for the number of vertices and indices. By contrast, determining the normal vectors is considerably more complicated than for the sphere. We use the strategy given in Baker's description, in which two tangent vectors (Baker calls them sTangent and tTangent, though they are more commonly called "tangent" and "bitangent") are computed, and their cross product forms the normal vector.
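
As a quick sanity check of this cross-product construction, consider the very first vertex the Torus class below generates (i = 0, so amt = 0): tTangent = (0, -1, 0) and sTangent = (0, 0, -1), and their cross product (0, -1, 0) × (0, 0, -1) = (1, 0, 0) points straight outward along +X, which is exactly where that vertex lies.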

We will use this torus class (along with the sphere class described earlier) in many examples throughout the rest of the book.

Program 6.2 Procedurally generated torus
vertShader.glsl

#version 430

layout (location = 0) in vec3 position;
layout (location = 1) in vec2 tex_coord;
out vec2 tc;

uniform mat4 mv_matrix;
uniform mat4 proj_matrix;
layout (binding=0) uniform sampler2D s;

void main(void)
{
	gl_Position = proj_matrix * mv_matrix * vec4(position,1.0);
	tc = tex_coord;
}

fragShader.glsl

#version 430

in vec2 tc;
out vec4 color;

uniform mat4 mv_matrix;
uniform mat4 proj_matrix;
layout (binding=0) uniform sampler2D s;

void main(void)
{
	color = texture(s,tc);
}

Torus.h

#include <cmath>
#include <vector>
#include <glm\glm.hpp>
class Torus
{
private:
	int numVertices;
	int numIndices;
	int prec;
	float inner;
	float outer;
	std::vector<int> indices;
	std::vector<glm::vec3> vertices;
	std::vector<glm::vec2> texCoords;
	std::vector<glm::vec3> normals;
	std::vector<glm::vec3> sTangents;
	std::vector<glm::vec3> tTangents;
	void init();
	float toRadians(float degrees);

public:
	Torus();
	Torus(float inner, float outer, int prec);
	int getNumVertices();
	int getNumIndices();
	std::vector<int> getIndices();
	std::vector<glm::vec3> getVertices();
	std::vector<glm::vec2> getTexCoords();
	std::vector<glm::vec3> getNormals();
	std::vector<glm::vec3> getStangents();
	std::vector<glm::vec3> getTtangents();
};

Torus.cpp

#include <cmath>
#include <vector>
#include <iostream>
#include <glm\gtc\matrix_transform.hpp>
#include "Torus.h"
using namespace std;

Torus::Torus() {
	prec = 48;     // precision
	inner = 0.5f;  // inner radius
	outer = 0.2f;  // outer radius
	init();
}

Torus::Torus(float in, float out, int precIn) {
	prec = precIn;
	inner = in;
	outer = out;
	init();
}

float Torus::toRadians(float degrees) { return (degrees * 2.0f * 3.14159f) / 360.0f; }

void Torus::init() {
	numVertices = (prec + 1) * (prec + 1);
	numIndices = prec * prec * 6;
	for (int i = 0; i < numVertices; i++) { vertices.push_back(glm::vec3()); }
	for (int i = 0; i < numVertices; i++) { texCoords.push_back(glm::vec2()); }
	for (int i = 0; i < numVertices; i++) { normals.push_back(glm::vec3()); }
	for (int i = 0; i < numVertices; i++) { sTangents.push_back(glm::vec3()); }
	for (int i = 0; i < numVertices; i++) { tTangents.push_back(glm::vec3()); }
	for (int i = 0; i < numIndices; i++) { indices.push_back(0); }

	// compute the vertices of the first ring
	for (int i = 0; i < prec + 1; i++) {
		float amt = toRadians(i*360.0f / prec);

		glm::mat4 rMat = glm::rotate(glm::mat4(1.0f), amt, glm::vec3(0.0f, 0.0f, 1.0f));
		glm::vec3 initPos(rMat * glm::vec4(outer, 0.0f, 0.0f, 1.0f));

		vertices[i] = glm::vec3(initPos + glm::vec3(inner, 0.0f, 0.0f));
		// compute texture coordinates for each vertex on the ring
		texCoords[i] = glm::vec2(0.0f, ((float)i / (float)prec));

		rMat = glm::rotate(glm::mat4(1.0f), amt, glm::vec3(0.0f, 0.0f, 1.0f));
		// compute the two tangent vectors (Baker's sTangent and tTangent, usually
		// called "tangent" and "bitangent"); their cross product forms the normal
		tTangents[i] = glm::vec3(rMat * glm::vec4(0.0f, -1.0f, 0.0f, 1.0f)); // first tangent: the Y axis rotated around Z
		sTangents[i] = glm::vec3(glm::vec3(0.0f, 0.0f, -1.0f));              // second tangent: the -Z axis
		normals[i] = glm::cross(tTangents[i], sTangents[i]);                 // their cross product is the normal
	}
	// rotate the first ring about the Y axis to form the remaining rings
	for (int ring = 1; ring < prec + 1; ring++) {
		for (int i = 0; i < prec + 1; i++) {
			// rotate the vertex positions of the original ring around the Y axis
			float amt = (float)toRadians((float)ring * 360.0f / (prec));

			glm::mat4 rMat = glm::rotate(glm::mat4(1.0f), amt, glm::vec3(0.0f, 1.0f, 0.0f));
			vertices[ring*(prec + 1) + i] = glm::vec3(rMat * glm::vec4(vertices[i], 1.0f));
			// compute texture coordinates for the new ring's vertices
			texCoords[ring*(prec + 1) + i] = glm::vec2((float)ring*2.0f / (float)prec, texCoords[i].t);
			if (texCoords[ring*(prec + 1) + i].s > 1.0) texCoords[ring*(prec+1)+i].s -= 1.0f;
			// rotate the tangent and bitangent vectors around the Y axis
			rMat = glm::rotate(glm::mat4(1.0f), amt, glm::vec3(0.0f, 1.0f, 0.0f));
			sTangents[ring*(prec + 1) + i] = glm::vec3(rMat * glm::vec4(sTangents[i], 1.0f));

			rMat = glm::rotate(glm::mat4(1.0f), amt, glm::vec3(0.0f, 1.0f, 0.0f));
			tTangents[ring*(prec + 1) + i] = glm::vec3(rMat * glm::vec4(tTangents[i], 1.0f));
			// rotate the normal vector around the Y axis
			rMat = glm::rotate(glm::mat4(1.0f), amt, glm::vec3(0.0f, 1.0f, 0.0f));
			normals[ring*(prec + 1) + i] = glm::vec3(rMat * glm::vec4(normals[i], 1.0f));
		}
	}
	// compute the triangle indices, two triangles per vertex
	for (int ring = 0; ring < prec; ring++) {
		for (int i = 0; i < prec; i++) {
			indices[((ring*prec + i) * 2) * 3 + 0] = ring*(prec + 1) + i;
			indices[((ring*prec + i) * 2) * 3 + 1] = (ring + 1)*(prec + 1) + i;
			indices[((ring*prec + i) * 2) * 3 + 2] = ring*(prec + 1) + i + 1;
			indices[((ring*prec + i) * 2 + 1) * 3 + 0] = ring*(prec + 1) + i + 1;
			indices[((ring*prec + i) * 2 + 1) * 3 + 1] = (ring + 1)*(prec + 1) + i;
			indices[((ring*prec + i) * 2 + 1) * 3 + 2] = (ring + 1)*(prec + 1) + i + 1;
		}
	}
}
// accessors for the torus indices and vertices
int Torus::getNumVertices() { return numVertices; }
int Torus::getNumIndices() { return numIndices; }
std::vector<int> Torus::getIndices() { return indices; }
std::vector<glm::vec3> Torus::getVertices() { return vertices; }
std::vector<glm::vec2> Torus::getTexCoords() { return texCoords; }
std::vector<glm::vec3> Torus::getNormals() { return normals; }
std::vector<glm::vec3> Torus::getStangents() { return sTangents; }
std::vector<glm::vec3> Torus::getTtangents() { return tTangents; }

main.cpp

#include <GL\glew.h>
#include <GLFW\glfw3.h>
#include <SOIL2\soil2.h>
#include <string>
#include <iostream>
#include <fstream>
#include <glm\gtc\type_ptr.hpp> // glm::value_ptr
#include <glm\gtc\matrix_transform.hpp> // glm::translate, glm::rotate, glm::scale, glm::perspective
#include "Torus.h"
#include "Utils.h"
using namespace std;

float toRadians(float degrees) { return (degrees * 2.0f * 3.14159f) / 360.0f; }

#define numVAOs 1
#define numVBOs 4

float cameraX, cameraY, cameraZ;
float torLocX, torLocY, torLocZ;
GLuint renderingProgram;
GLuint vao[numVAOs];
GLuint vbo[numVBOs];
GLuint brickTexture;
float rotAmt = 0.0f;

// variable allocation for display
GLuint mvLoc, projLoc;
int width, height;
float aspect;
glm::mat4 pMat, vMat, mMat, mvMat;

Torus myTorus(0.5f, 0.2f, 48);

void setupVertices(void) {
	std::vector<int> ind = myTorus.getIndices(); // copy the index array into a C++ vector of ints
	std::vector<glm::vec3> vert = myTorus.getVertices();
	std::vector<glm::vec2> tex = myTorus.getTexCoords();
	std::vector<glm::vec3> norm = myTorus.getNormals();

	std::vector<float> pvalues;
	std::vector<float> tvalues;
	std::vector<float> nvalues;

	for (int i = 0; i < myTorus.getNumVertices(); i++) {
		pvalues.push_back(vert[i].x);
		pvalues.push_back(vert[i].y);
		pvalues.push_back(vert[i].z);
		tvalues.push_back(tex[i].s);
		tvalues.push_back(tex[i].t);
		nvalues.push_back(norm[i].x);
		nvalues.push_back(norm[i].y);
		nvalues.push_back(norm[i].z);
	}
	glGenVertexArrays(1, vao);
	glBindVertexArray(vao[0]);
	glGenBuffers(numVBOs, vbo);

	glBindBuffer(GL_ARRAY_BUFFER, vbo[0]);
	glBufferData(GL_ARRAY_BUFFER, pvalues.size() * 4, &pvalues[0], GL_STATIC_DRAW); // *4 because each float occupies 4 bytes

	glBindBuffer(GL_ARRAY_BUFFER, vbo[1]);
	glBufferData(GL_ARRAY_BUFFER, tvalues.size() * 4, &tvalues[0], GL_STATIC_DRAW);

	glBindBuffer(GL_ARRAY_BUFFER, vbo[2]);
	glBufferData(GL_ARRAY_BUFFER, nvalues.size() * 4, &nvalues[0], GL_STATIC_DRAW);

	glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vbo[3]); // an additional VBO holds the indices themselves
	glBufferData(GL_ELEMENT_ARRAY_BUFFER, ind.size() * 4, &ind[0], GL_STATIC_DRAW); // GL_ELEMENT_ARRAY_BUFFER tells OpenGL this VBO contains indices
}

void init(GLFWwindow* window) {
	renderingProgram = Utils::createShaderProgram("vertShader.glsl", "fragShader.glsl");
	cameraX = 0.0f; cameraY = 0.0f; cameraZ = 2.0f;
	torLocX = 0.0f; torLocY = 0.0f; torLocZ = -0.5f;

	glfwGetFramebufferSize(window, &width, &height);
	aspect = (float)width / (float)height;
	pMat = glm::perspective(1.0472f, aspect, 0.1f, 1000.0f);

	setupVertices();
	brickTexture = Utils::loadTexture("brick1.jpg");
}

void display(GLFWwindow* window, double currentTime) {
	glClear(GL_DEPTH_BUFFER_BIT);
	glClearColor(0.0, 0.0, 0.0, 1.0);
	glClear(GL_COLOR_BUFFER_BIT);

	glUseProgram(renderingProgram);

	mvLoc = glGetUniformLocation(renderingProgram, "mv_matrix");
	projLoc = glGetUniformLocation(renderingProgram, "proj_matrix");

	vMat = glm::translate(glm::mat4(1.0f), glm::vec3(-cameraX, -cameraY, -cameraZ));
	mMat = glm::translate(glm::mat4(1.0f), glm::vec3(torLocX, torLocY, torLocZ));
	//mMat *= glm::eulerAngleXYZ(toRadians(30.0f), 0.0f, 0.0f);
	mMat = glm::rotate(mMat, toRadians(30.0f), glm::vec3(1.0f, 0.0f, 0.0f));

	mvMat = vMat * mMat;

	glUniformMatrix4fv(mvLoc, 1, GL_FALSE, glm::value_ptr(mvMat));
	glUniformMatrix4fv(projLoc, 1, GL_FALSE, glm::value_ptr(pMat));

	glBindBuffer(GL_ARRAY_BUFFER, vbo[0]);
	glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
	glEnableVertexAttribArray(0);

	glBindBuffer(GL_ARRAY_BUFFER, vbo[1]);
	glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 0, 0);
	glEnableVertexAttribArray(1);

	glActiveTexture(GL_TEXTURE0);
	glBindTexture(GL_TEXTURE_2D, brickTexture);

	glEnable(GL_CULL_FACE);
	glFrontFace(GL_CCW);

	glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vbo[3]); // bind the VBO that contains the indices (type GL_ELEMENT_ARRAY_BUFFER)
	glDrawElements(GL_TRIANGLES, myTorus.getIndices().size(), GL_UNSIGNED_INT, 0); // glDrawElements() replaces glDrawArrays(), using the index VBO to look up the vertices to draw
}

void window_size_callback(GLFWwindow* win, int newWidth, int newHeight) {
	aspect = (float)newWidth / (float)newHeight;
	glViewport(0, 0, newWidth, newHeight);
	pMat = glm::perspective(1.0472f, aspect, 0.1f, 1000.0f);
}

int main(void) {
	if (!glfwInit()) { exit(EXIT_FAILURE); }
	glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
	glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
	GLFWwindow* window = glfwCreateWindow(800, 800, "Chapter6 - program2", NULL, NULL);
	glfwMakeContextCurrent(window);
	if (glewInit() != GLEW_OK) { exit(EXIT_FAILURE); }
	glfwSwapInterval(1);

	glfwSetWindowSizeCallback(window, window_size_callback);

	init(window);

	while (!glfwWindowShouldClose(window)) {
		display(window, glfwGetTime());
		glfwSwapBuffers(window);
		glfwPollEvents();
	}

	glfwDestroyWindow(window);
	glfwTerminate();
	exit(EXIT_SUCCESS);
}

Note that in the code using the Torus class, the loop in setupVertices() now stores the data associated with each vertex only once, rather than once per index entry (as was done in the sphere example). This difference is also reflected in the sizes of the arrays loaded into the VBOs. Note too that in the torus example the index values are not used when retrieving the vertex data; they are simply loaded directly into vbo[3]. Since that VBO is bound as GL_ELEMENT_ARRAY_BUFFER, OpenGL knows it contains vertex indices.

Figure 6.9 shows the result of instantiating a torus and texturing it with a brick texture.


Figure 6.9 Procedurally generated torus

6.3 Loading externally built models

Complex 3D models, such as the characters in video games or computer-generated movies, are typically produced with modeling tools. Such "DCC" (digital content creation) tools let people (artists, for example) construct arbitrary shapes in 3D space and automatically generate the vertices, texture coordinates, vertex normals, and so on. There are far too many such tools to list; a few examples are Maya, Blender, Lightwave, and Cinema4D. Blender is free and open source. Figure 6.10 shows a Blender screen while a 3D model is being edited.

Figure 6.10 Blender model creation example [BL16]

For a model created in a DCC tool to be usable in an OpenGL scene, the model must be saved (exported) in a format we can read (import) into our program. Several standard 3D model file formats exist; again too many to list, but some examples are Wavefront (.obj), 3D Studio Max (.3ds), Stanford Scan Repository (.ply), and Ogre3D (.mesh). The simplest of these is Wavefront (usually called OBJ), so we cover it in detail.

OBJ files are simple enough that we can develop a basic importer relatively easily. In an OBJ file, vertex geometry data, texture coordinates, normal vectors, and other information are specified as lines of text. The format has some limitations - for example, OBJ files cannot specify model animation.

Each line in an OBJ file begins with a tag indicating what kind of data is on that line. Some common tags include:

  • v - geometry (vertex position) data;
  • vt - texture coordinates;
  • vn - vertex normal vector;
  • f - face (usually the vertices of a triangle).

Other tags can store the object name, the materials used, curves, shading, and many other details. We discuss only the 4 tags listed above, which are sufficient to import a wide variety of complex models, as the fragment below illustrates.
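
As a quick illustration of these four tags, here is a made-up fragment (not taken from any particular model) describing a single textured triangle:

v 1.0 0.0 0.0
v 0.0 1.0 0.0
v 0.0 0.0 1.0
vt 0.0 0.0
vt 1.0 0.0
vt 0.5 1.0
vn 0.577 0.577 0.577
f 1/1/1 2/2/1 3/3/1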

Let's say we use Blender to build a simple pyramid, such as the one we developed for Program 4.3. Figure 6.11 is a screenshot of a similar pyramid created in Blender.


Figure 6.11 Pyramid built in Blender

If we now export our pyramid model from Blender, specifying the .obj format and setting Blender to output texture coordinates and vertex normals, an OBJ file containing all of this information is created. The resulting OBJ file is shown in Figure 6.12. (The actual texture coordinate values may vary depending on how the model was built.)

Figure 6.12 OBJ file exported for the pyramid

Important parts of the OBJ file have been color-coded for reference. The lines at the top beginning with "#" are comments placed there by Blender, which our importer ignores. Next is a line beginning with "o" giving the object's name; our importer can ignore this line as well. Later there is a line beginning with "s" specifying that the faces should not be smoothed; our code ignores "s" lines too.

The first lines with actual content are those beginning with "v" (lines 4 to 8). They specify the X, Y, and Z local-space coordinates of the pyramid model's 5 vertices relative to the origin, here placed at the center of the pyramid.

The values shown in red (beginning with "vt") are the various texture coordinates. The texture-coordinate list is longer than the vertex list because some vertices participate in more than one triangle, and different texture coordinates may be used in those cases.

The values shown in green (beginning with "vn") are the various normal vectors. This list, too, is typically longer than the vertex list (although not in this example), again because some vertices participate in more than one triangle, with possibly different normal vectors in each.

The values marked purple near the bottom of the file (beginning with "f") specify the triangles (i.e., "faces"). In this example each face (triangle) has 3 elements, each holding 3 values separated by "/" (OBJ allows other formats as well). The three values of each element are indices into the vertex list, the texture-coordinate list, and the normal-vector list, respectively.

For example, the third face is: f 2/7/3 5/8/3 3/9/3. It indicates that the 2nd, 5th, and 3rd entries in the vertex list (blue) form a triangle (note that OBJ indices start at 1). The corresponding texture coordinates are the 7th, 8th, and 9th entries in the texture-coordinate list shown in red. All 3 vertices share the same normal vector, the 3rd entry in the normal-vector list shown in green.
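
When our loader (Program 6.3) processes such an entry, each value is converted into a zero-based offset into the corresponding float vector. For the corner 2/7/3, for example, vertRef = (2-1)*3 = 3, tcRef = (7-1)*2 = 12, and normRef = (3-1)*3 = 6, since positions and normals occupy 3 floats each while texture coordinates occupy 2.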

Models in OBJ format are not required to have normal vectors or even texture coordinates. If the model has neither, each face entry specifies only the vertex indices:

f 2 5 3

If the model has texture coordinates but no normal vectors, the format is:

f 2/7 5/8 3/9

And if the model has normal vectors but no texture coordinates, the format is:

f 2//3 5//3 3//3

It is not uncommon for models to have tens of thousands of vertices. Hundreds of these models are available for download on the Internet for virtually every application imaginable, including animals, buildings, cars, airplanes, mythical creatures, people, and more.

Import programs of varying sophistication are available on the Internet for OBJ models. It is not difficult to write a very simple OBJ loader function that handles the basic tags we have seen (v, vt, vn, and f). Program 6.3 shows one such loader, albeit with very limited functionality. It includes a class to hold an imported model, which in turn invokes the importer.

Before describing the code of this simple OBJ importer, we must warn the reader of its limitations.

  • It only supports models in which all 3 face attribute fields are present. That is, vertex positions, texture coordinates, and normal vectors must all appear, in the form f #/#/# #/#/# #/#/#.
  • Material tags are ignored - texturing must be done using the techniques described in Chapter 5.
  • Only OBJ models consisting of a single triangle mesh are supported (the OBJ format allows composite models, but our simple importer does not; for composite-model loaders, see the links below).
  • It assumes that the elements on each line are separated by a single space.

https://blog.csdn.net/hyy_sui_yuan/article/details/104885628?spm=1001.2014.3001.5506
https://blog.csdn.net/qinyuanpei/article/details/49991607?spm=1001.2014.3001.5506

If your OBJ model does not meet all of the criteria above and you wish to import it with the simple loader in Program 6.3, it may still be possible. Often such a model can be loaded into Blender and then exported to a new OBJ file that satisfies the loader's constraints. For example, if the model lacks normal vectors, Blender can generate them when exporting the modified OBJ file.

Another limitation of our OBJ loader concerns indexing. As described above, the "f" tag allows vertex positions, texture coordinates, and normals to be mixed and matched; for example, two different "face" rows may contain indices pointing to the same v entry but different vt entries. Unfortunately, OpenGL's indexing mechanism does not support this flexibility - an index entry in OpenGL can only refer to a particular vertex together with all of its attributes. This complicates writing an OBJ loader somewhat, since we cannot simply copy the references from the triangle face entries into an index array. Using OpenGL indexing would instead require ensuring that each distinct combination of a face's v, vt, and vn values gets its own entry in the index array. A simpler but less efficient alternative is to create a new vertex for every triangle face entry. Although OpenGL indexing has the advantage of saving space (especially for larger models), we choose the simpler approach for the sake of clarity.
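
For the curious, here is a minimal sketch of the de-duplication approach mentioned above. It is not part of Program 6.3, and the names (seen, remap, glIndices) are ours: each distinct "v/vt/vn" corner string is assigned a fresh OpenGL index the first time it appears.

	#include <map>
	#include <string>
	#include <vector>

	std::map<std::string, int> seen;  // "v/vt/vn" combination -> OpenGL index
	std::vector<int> glIndices;       // index array destined for the element VBO

	// called once per face corner, e.g. remap("2/7/3")
	int remap(const std::string& corner) {
		auto it = seen.find(corner);
		if (it != seen.end()) return it->second; // combination already has an index
		int newIndex = (int)seen.size();         // next unused index
		seen[corner] = newIndex;
		// ...append this corner's position/texcoord/normal to the vertex buffers here...
		return newIndex;
	}

Each face corner would then contribute remap(corner) to glIndices instead of a freshly duplicated vertex.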

The ModelImporter class contains a parseOBJ() function that reads the OBJ file line by line and handles the four cases v, vt, vn, and f. In each case, the numbers on the line are extracted: first erase() is used to skip past the initial v, vt, vn, or f characters, then the ">>" operator of the C++ stringstream class extracts each subsequent value, and the values are stored in C++ float vectors. When a face (f) entry is processed, the vertex is assembled from the corresponding entries in those float vectors: vertex position, texture coordinates, and normal vector.

The ModelImporter class and the ImportedModel class are placed in the same file. The ImportedModel class simplifies loading and accessing the OBJ file's vertices by placing the imported vertices into vectors of vec2 and vec3 objects. Recall that these are GLM classes; here we use them to hold vertex positions, texture coordinates, and normal vectors. The accessors in the ImportedModel class then make these available to the C++/OpenGL application in the same way as in the Sphere and Torus classes.

Following the ModelImporter and ImportedModel classes is a series of example calls that load an OBJ file and then transfer its vertex information into a set of VBOs for subsequent rendering.

Figure 6.13 shows a space shuttle model in OBJ format downloaded from a NASA website [NA16], imported with the code of Program 6.3, and rendered using the code of Program 5.1 with the corresponding NASA texture image file and anisotropic filtering. The texture image is an example of UV mapping, in which the model's texture coordinates are carefully mapped to specific regions of the texture image. (As discussed in Chapter 5, the details of UV mapping are beyond the scope of this book.)

Figure 6.13 NASA space shuttle model with texture

Program 6.3 Simplified (restricted) OBJ loader
vertShader.glsl

#version 430

layout (location = 0) in vec3 position;
layout (location = 1) in vec2 tex_coord;
out vec2 tc;

uniform mat4 mv_matrix;
uniform mat4 proj_matrix;
layout (binding=0) uniform sampler2D s;

void main(void)
{
	gl_Position = proj_matrix * mv_matrix * vec4(position,1.0);
	tc = tex_coord;
}

fragShader.glsl

#version 430

in vec2 tc;
out vec4 color;

uniform mat4 mv_matrix;
uniform mat4 proj_matrix;
layout (binding=0) uniform sampler2D s;

void main(void)
{
	color = texture(s,tc);
}

ImportedModel.h

#include <vector>
#include <glm\glm.hpp>
// simplifies loading and accessing the OBJ file's vertices by placing the imported vertices into vectors of vec2 and vec3 objects
class ImportedModel
{
private:
	// values read in from the OBJ file
	int numVertices;
	// values stored as vertex attributes for later use
	std::vector<glm::vec3> vertices;
	std::vector<glm::vec2> texCoords;
	std::vector<glm::vec3> normalVecs;
public:
	ImportedModel();
	ImportedModel(const char *filePath);
	int getNumVertices();
	std::vector<glm::vec3> getVertices();
	std::vector<glm::vec2> getTextureCoords();
	std::vector<glm::vec3> getNormals();
};

class ModelImporter
{
private:
	std::vector<float> vertVals;
	std::vector<float> triangleVerts;
	std::vector<float> textureCoords;
	std::vector<float> stVals;
	std::vector<float> normals;
	std::vector<float> normVals;
public:
	ModelImporter();
	void parseOBJ(const char *filePath); // reads the OBJ file line by line, handling the four cases v, vt, vn, and f
	int getNumVertices();
	std::vector<float> getVertices();
	std::vector<float> getTextureCoordinates();
	std::vector<float> getNormals();
};

ImportedModel.cpp

#include <fstream>
#include <sstream>
#include <glm\glm.hpp>
#include "ImportedModel.h"
using namespace std;

ImportedModel::ImportedModel() {}

ImportedModel::ImportedModel(const char *filePath) {
	ModelImporter modelImporter = ModelImporter();
	modelImporter.parseOBJ(filePath); // use the ModelImporter to read the vertex information
	numVertices = modelImporter.getNumVertices();
	std::vector<float> verts = modelImporter.getVertices();
	std::vector<float> tcs = modelImporter.getTextureCoordinates();
	std::vector<float> normals = modelImporter.getNormals();

	for (int i = 0; i < numVertices; i++) {
		vertices.push_back(glm::vec3(verts[i*3], verts[i*3+1], verts[i*3+2]));
		texCoords.push_back(glm::vec2(tcs[i*2], tcs[i*2+1]));
		normalVecs.push_back(glm::vec3(normals[i*3], normals[i*3+1], normals[i*3+2]));
	}
}

int ImportedModel::getNumVertices() { return numVertices; }
std::vector<glm::vec3> ImportedModel::getVertices() { return vertices; }
std::vector<glm::vec2> ImportedModel::getTextureCoords() { return texCoords; }
std::vector<glm::vec3> ImportedModel::getNormals() { return normalVecs; }

// ---------------------------------------------------------------

ModelImporter::ModelImporter() {}

// reads the OBJ file line by line, handling the four cases v, vt, vn, and f
void ModelImporter::parseOBJ(const char *filePath) {
	float x, y, z;
	string content;
	ifstream fileStream(filePath, ios::in);
	string line = "";
	while (!fileStream.eof()) {
		getline(fileStream, line);
		if (line.compare(0, 2, "v ") == 0) { // vertex position ("v" case)
			stringstream ss(line.erase(0, 1)); // use erase() to skip past the initial tag character
			ss >> x; ss >> y; ss >> z; // extract the vertex position values
			vertVals.push_back(x);
			vertVals.push_back(y);
			vertVals.push_back(z);
		}
		if (line.compare(0, 2, "vt") == 0) { // texture coordinates
			stringstream ss(line.erase(0, 2)); // extract the texture coordinate values
			ss >> x; ss >> y;
			stVals.push_back(x);
			stVals.push_back(y);
		}
		if (line.compare(0, 2, "vn") == 0) { // vertex normal vector
			stringstream ss(line.erase(0, 2)); // extract the normal vector values
			ss >> x; ss >> y; ss >> z;
			normVals.push_back(x);
			normVals.push_back(y);
			normVals.push_back(z);
		}
		if (line.compare(0, 2, "f ") == 0) { // triangle face ("f" case)
			string oneCorner, v, t, n;
			stringstream ss(line.erase(0, 2));
			for (int i = 0; i < 3; i++) {
				getline(ss, oneCorner, ' '); // extract one triangle face reference
				stringstream oneCornerSS(oneCorner);
				getline(oneCornerSS, v, '/');
				getline(oneCornerSS, t, '/');
				getline(oneCornerSS, n, '/');

				int vertRef = (stoi(v) - 1) * 3; // "stoi" converts a string to an int
				int tcRef = (stoi(t) - 1) * 2;
				int normRef = (stoi(n) - 1) * 3;
				// build the vector of vertex positions
				triangleVerts.push_back(vertVals[vertRef]);
				triangleVerts.push_back(vertVals[vertRef + 1]);
				triangleVerts.push_back(vertVals[vertRef + 2]);
				// build the vector of texture coordinates
				textureCoords.push_back(stVals[tcRef]);
				textureCoords.push_back(stVals[tcRef + 1]);
				// build the vector of normal vectors
				normals.push_back(normVals[normRef]);
				normals.push_back(normVals[normRef + 1]);
				normals.push_back(normVals[normRef + 2]);
			}
		}
	}
}
int ModelImporter::getNumVertices() { return (triangleVerts.size()/3); }
std::vector<float> ModelImporter::getVertices() { return triangleVerts; }
std::vector<float> ModelImporter::getTextureCoordinates() { return textureCoords; }
std::vector<float> ModelImporter::getNormals() { return normals; }

main.cpp

#include <GL\glew.h>
#include <GLFW\glfw3.h>
#include <SOIL2\soil2.h>
#include <string>
#include <iostream>
#include <fstream>
#include <cmath>
#include <glm\glm.hpp>
#include <glm\gtc\type_ptr.hpp> // glm::value_ptr
#include <glm\gtc\matrix_transform.hpp> // glm::translate, glm::rotate, glm::scale, glm::perspective
#include "ImportedModel.h"
#include "Utils.h"
using namespace std;

#define numVAOs 1
#define numVBOs 3

float cameraX, cameraY, cameraZ;
float objLocX, objLocY, objLocZ;
GLuint renderingProgram;
GLuint vao[numVAOs];
GLuint vbo[numVBOs];
GLuint shuttleTexture;

// variable allocation for display
GLuint mvLoc, projLoc;
int width, height;
float aspect;
glm::mat4 pMat, vMat, mMat, mvMat;
// instantiate the model importer in a top-level declaration
ImportedModel myModel("shuttle.obj");

float toRadians(float degrees) { return (degrees * 2.0f * 3.14159f) / 360.0f; }

void setupVertices(void) {
 	std::vector<glm::vec3> vert = myModel.getVertices();
	std::vector<glm::vec2> tex = myModel.getTextureCoords();
	std::vector<glm::vec3> norm = myModel.getNormals();

	std::vector<float> pvalues;
	std::vector<float> tvalues;
	std::vector<float> nvalues;

	for (int i = 0; i < myModel.getNumVertices(); i++) {
		pvalues.push_back((vert[i]).x);
		pvalues.push_back((vert[i]).y);
		pvalues.push_back((vert[i]).z);
		tvalues.push_back((tex[i]).s);
		tvalues.push_back((tex[i]).t);
		nvalues.push_back((norm[i]).x);
		nvalues.push_back((norm[i]).y);
		nvalues.push_back((norm[i]).z);
	}

	glGenVertexArrays(1, vao);
	glBindVertexArray(vao[0]);
	glGenBuffers(numVBOs, vbo);

	glBindBuffer(GL_ARRAY_BUFFER, vbo[0]);
	glBufferData(GL_ARRAY_BUFFER, pvalues.size() * 4, &pvalues[0], GL_STATIC_DRAW);

	glBindBuffer(GL_ARRAY_BUFFER, vbo[1]);
	glBufferData(GL_ARRAY_BUFFER, tvalues.size() * 4, &tvalues[0], GL_STATIC_DRAW);

	glBindBuffer(GL_ARRAY_BUFFER, vbo[2]);
	glBufferData(GL_ARRAY_BUFFER, nvalues.size() * 4, &nvalues[0], GL_STATIC_DRAW);
}

void init(GLFWwindow* window) {
	renderingProgram = Utils::createShaderProgram("vertShader.glsl", "fragShader.glsl");
	cameraX = 0.0f; cameraY = 0.0f; cameraZ = 1.5f;
	objLocX = 0.0f; objLocY = 0.0f; objLocZ = 0.0f;

	glfwGetFramebufferSize(window, &width, &height);
	aspect = (float)width / (float)height;
	pMat = glm::perspective(1.0472f, aspect, 0.1f, 1000.0f);

	setupVertices();
	shuttleTexture = Utils::loadTexture("spstob_1.jpg");
}

void display(GLFWwindow* window, double currentTime) {
	glClear(GL_DEPTH_BUFFER_BIT);
	glClearColor(0.0, 0.0, 0.0, 1.0);
	glClear(GL_COLOR_BUFFER_BIT);

	glUseProgram(renderingProgram);

	mvLoc = glGetUniformLocation(renderingProgram, "mv_matrix");
	projLoc = glGetUniformLocation(renderingProgram, "proj_matrix");

	vMat = glm::translate(glm::mat4(1.0f), glm::vec3(-cameraX, -cameraY, -cameraZ));
	mMat = glm::translate(glm::mat4(1.0f), glm::vec3(objLocX, objLocY, objLocZ));

	mMat = glm::rotate(mMat, 0.0f, glm::vec3(1.0f, 0.0f, 0.0f));
	mMat = glm::rotate(mMat, toRadians(135.0f), glm::vec3(0.0f, 1.0f, 0.0f));
	mMat = glm::rotate(mMat, toRadians(35.0f), glm::vec3(0.0f, 0.0f, 1.0f));

	mvMat = vMat * mMat;

	glUniformMatrix4fv(mvLoc, 1, GL_FALSE, glm::value_ptr(mvMat));
	glUniformMatrix4fv(projLoc, 1, GL_FALSE, glm::value_ptr(pMat));

	glBindBuffer(GL_ARRAY_BUFFER, vbo[0]);
	glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
	glEnableVertexAttribArray(0);

	glBindBuffer(GL_ARRAY_BUFFER, vbo[1]);
	glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 0, 0);
	glEnableVertexAttribArray(1);

	glActiveTexture(GL_TEXTURE0);
	glBindTexture(GL_TEXTURE_2D, shuttleTexture);

	glEnable(GL_DEPTH_TEST);
	glDepthFunc(GL_LEQUAL);
	glDrawArrays(GL_TRIANGLES, 0, myModel.getNumVertices());
}

void window_size_callback(GLFWwindow* win, int newWidth, int newHeight) {
	aspect = (float)newWidth / (float)newHeight;
	glViewport(0, 0, newWidth, newHeight);
	pMat = glm::perspective(1.0472f, aspect, 0.1f, 1000.0f);
}

int main(void) {
	if (!glfwInit()) { exit(EXIT_FAILURE); }
	glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
	glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
	GLFWwindow* window = glfwCreateWindow(600, 600, "Chapter6 - program3", NULL, NULL);
	glfwMakeContextCurrent(window);
	if (glewInit() != GLEW_OK) { exit(EXIT_FAILURE); }
	glfwSwapInterval(1);

	glfwSetWindowSizeCallback(window, window_size_callback);

	init(window);

	while (!glfwWindowShouldClose(window)) {
		display(window, glfwGetTime());
		glfwSwapBuffers(window);
		glfwPollEvents();
	}

	glfwDestroyWindow(window);
	glfwTerminate();
	exit(EXIT_SUCCESS);
}
