3D rendering: face normals and vertex normals

Now that we've reviewed the parameters that affect the appearance of objects (their brightness, color, etc.), we're ready to start looking at some simple shading techniques.


1. Normals

Normals play a central role in shading. We all know that an object becomes brighter if we orient it towards a light source. The orientation of an object's surface plays an important role in the amount of light it reflects (and thus how bright it appears). This orientation at any point P on the object's surface can be represented by the normal N, which is perpendicular to the surface, as shown in Figure 1.

Figure 1: Notice how the sphere gets darker as the angle between the normal direction and the light direction increases.

Notice in Figure 1 how the brightness of the sphere decreases as the angle between the light direction and the normal direction increases. This dip in brightness is a phenomenon we see every day, but probably few people know why it happens. We will explain the reason for this phenomenon later. For now, just remember:

  • What we call a normal (which we denote with a capital N) is a vector perpendicular to the tangent to the surface at point P. In other words, to find the normal at point P, we need to trace a line tangent to the surface, then take the vector perpendicular to that tangent (note that in 3D this would be the tangent plane).
  • The brightness of a point on an object's surface depends on the normal direction, which defines the direction of the object's surface at that point relative to the light. Another way of saying this is that the brightness of any given point on an object's surface depends on the angle between the normal to that point and the direction of the light.

Now the question is: how do we calculate this normal? The complexity of the solution can vary greatly depending on the type of geometry being rendered. The normal of a sphere is usually easy to find. If we know the position of a point on the sphere's surface and the sphere's center, we can calculate the normal at that point by subtracting the sphere's center from the point's position:

Vec3f N = P - sphereCenter;

If the object is a triangle mesh, each triangle defines a plane, and the vector perpendicular to that plane is the normal for any point lying on the surface of that triangle. The vector perpendicular to the triangle's plane can easily be obtained with the cross product of two of the triangle's edges. Remember that v1×v2 = -v2×v1, so the order of the operands affects the direction of the normal. If you declare the triangle vertices in counterclockwise order, you can use the following code:

Vec3f N = (v1-v0).crossProduct(v2-v0);


Figure 2: The face normal of a triangle can be computed by taking the cross product of the two sides of the triangle.
If the triangle lies in the xz-plane, the resulting normal should be (0,1,0) and not (0,-1,0), as shown in Figure 2.

Computing the normal in this way gives us what we call a face normal (since the normal is the same for the entire face, no matter what point you pick on that face or triangle). The normals of a triangle mesh can also be defined at the vertices of the triangles, in which case we refer to these normals as vertex normals. Vertex normals are used in a technique called smooth shading, which you'll find a description of at the end of this chapter. Currently, we only deal with face normals.

How and when the program computes the surface normal at the point being shaded does not matter. What matters is having this information at hand when the point is shaded. In this lesson, where we do some basic shading, we implement a special method called getSurfaceProperties() in each geometry class, in which we calculate the normal at the intersection point (when using ray tracing) as well as other variables, such as texture coordinates, that we will discuss later in this lesson. For the sphere and triangle-mesh geometry types, these methods are implemented as follows:

class Sphere : public Object 
{ 
    ... 
public: 
    ... 
    void getSurfaceProperties( 
        const Vec3f &hitPoint, 
        const Vec3f &viewDirection, 
        const uint32_t &triIndex, 
        const Vec2f &uv, 
        Vec3f &hitNormal, 
        Vec2f &hitTextureCoordinates) const 
    { 
        hitNormal = hitPoint - center; 
        hitNormal.normalize(); 
        ... 
    } 
    ... 
}; 
 
class TriangleMesh : public Object 
{ 
    ... 
public: 
    void getSurfaceProperties( 
        const Vec3f &hitPoint, 
        const Vec3f &viewDirection, 
        const uint32_t &triIndex, 
        const Vec2f &uv, 
        Vec3f &hitNormal, 
        Vec2f &hitTextureCoordinates) const 
    { 
        // face normal
        const Vec3f &v0 = P[trisIndex[triIndex * 3]]; 
        const Vec3f &v1 = P[trisIndex[triIndex * 3 + 1]]; 
        const Vec3f &v2 = P[trisIndex[triIndex * 3 + 2]]; 
        hitNormal = (v1 - v0).crossProduct(v2 - v0); 
        hitNormal.normalize(); 
        ... 
    } 
    ... 
}; 

2. A simple shading effect: facing ratio

Now that we know how to calculate the normal of a point on an object's surface, we have enough information to create a simple shading effect called a facing ratio. The technique consists of computing the dot product of the normal of the point we want to shade and the viewing direction. Calculating the viewing direction is also very simple. When using ray tracing, it's just the opposite direction of the ray at P where it intersects the surface. Without using ray tracing, the viewing direction can also be found simply by tracing a line from point P on the surface to the eye:

Vec3f V = (E - P).normalize(); // or -ray.dir if you use ray-tracing

Remember that the dot product of two unit vectors returns 1 if they are parallel and point in the same direction, and 0 if they are perpendicular to each other. If the vectors point in opposite directions, the dot product is negative, but if we are using the result of that dot product as a color, negative values are of no interest to us anyway. If you need an introduction to the dot product, check out the geometry lesson. To avoid negative results, we clamp the result to 0:

float facingRatio = std::max(0.f, N.dotProduct(V));


The dot product returns 1 when the normal and vector V point in the same direction. The result is 0 if the two vectors are perpendicular. If we use this simple technique to shade a sphere in the middle of the frame, the center of the sphere will be white, and as we move away from its center towards the edges, the sphere will become darker, as shown below.

Vec3f castRay( 
    const Vec3f &orig, const Vec3f &dir, 
    const std::vector<std::unique_ptr<Object>> &objects, 
    const Options &options) 
{ 
    Vec3f hitColor = options.backgroundColor; 
    float tnear = kInfinity; 
    Vec2f uv; 
    uint32_t index = 0; 
    Object *hitObject = nullptr; 
    if (trace(orig, dir, objects, tnear, index, uv, &hitObject)) { 
        Vec3f hitPoint = orig + dir * tnear;  //shaded point 
        Vec3f hitNormal; 
        Vec2f hitTexCoordinates; 
        // compute the normal of the point we want to shade
        hitObject->getSurfaceProperties(hitPoint, dir, index, uv, hitNormal, hitTexCoordinates); 
        hitColor = std::max(0.f, hitNormal.dotProduct(-dir));  //facing ratio 
    } 
 
    return hitColor; 
} 

Congratulations! You just learned about your first shading technique. Let's now look at a more realistic approach to shading that will simulate the effect of light on diffuse objects. But before understanding this method, we first need to introduce and understand the concept of light.

3. Flat shading, smooth shading and vertex normals

The problem with triangle meshes is that they cannot represent perfectly smooth surfaces (unless the triangles are very small). If we wish to apply the facing ratio technique just described to a polygon mesh, we need to compute the normal of the triangle intersected by the ray, and calculate the facing ratio as the dot product between that face normal and the view direction. The problem with this approach is that it gives the object a faceted appearance, as shown in the image below. This method of shading is therefore called flat shading.

As mentioned many times in previous lessons, the normal to a triangle can be found simply by computing the cross product of vectors v0v1 and v0v2, where v0, v1, and v2 represent the vertices of the triangle. To solve this problem, Henri Gouraud introduced a method in 1971, now called smooth shading or Gouraud shading.

The idea behind this technique is to produce continuous shading across the surface of a polygon mesh, even though the object the mesh represents is not continuous, since it is built from a collection of flat faces (polygons or triangles). To this end, Gouraud introduced the concept of vertex normals. The idea is simple: instead of computing or storing normals per face, we store a normal at each vertex of the mesh, where the direction of the normal is determined by the underlying smooth surface from which the triangle mesh was converted. When we want to compute the color of a point on a triangle's surface, rather than using the face normal, we can compute a "fake smooth" normal by linearly interpolating the vertex normals defined at the triangle's vertices, using the hit point's barycentric coordinates.

The technique is shown in the diagram above. Vertex normals are defined at the vertices of the triangle. You can see that they are oriented perpendicular to the smooth underlying surface from which the triangle mesh was built. Sometimes triangle meshes are not converted directly from a smooth surface, and vertex normals must be computed on the fly. There are different techniques for computing vertex normals when there is no smooth surface to derive them from, but we won't look at them in this lesson. Nowadays, software such as Maya or Blender can do this for you; in Maya, you can select the polygon mesh and choose the Soften Edges option in the Normals menu.

In practice, from a technical point of view, each triangle has its own set of 3 vertex normals, so the total number of vertex normals in a triangle mesh equals the number of triangles times 3. In some cases, the vertex normals defined on a vertex shared by 2, 3 or more triangles are the same (they point in the same direction), but you can achieve different effects by giving them different directions. For example, some hard edges can be faked on an otherwise smooth surface.

The source code to calculate the interpolated normal of any point on the surface of a triangle is very simple, as long as we know the triangle's vertex normals, the barycentric coordinates of the point on the triangle, and the triangle index. Both rasterization and ray tracing can give you this information. Vertex normals are generated on the model by the 3D program you used to create it. They are then exported to a geometry file, which contains the triangle connectivity information, the vertex positions, and the triangle texture coordinates. All you then need to do is combine the point's barycentric coordinates with the triangle's vertex normals to compute the interpolated smooth normal (the smooth-normal block in the code below):

void getSurfaceProperties( 
    const Vec3f &hitPoint, 
    const Vec3f &viewDirection, 
    const uint32_t &triIndex, 
    const Vec2f &uv, 
    Vec3f &hitNormal, 
    Vec2f &hitTextureCoordinates) const 
{ 
    // face normal
    const Vec3f &v0 = P[trisIndex[triIndex * 3]]; 
    const Vec3f &v1 = P[trisIndex[triIndex * 3 + 1]]; 
    const Vec3f &v2 = P[trisIndex[triIndex * 3 + 2]]; 
    hitNormal = (v1 - v0).crossProduct(v2 - v0); 
 
#if 1 
    // compute "smooth" normal using Gouraud's technique (interpolate vertex normals)
    const Vec3f &n0 = N[trisIndex[triIndex * 3]]; 
    const Vec3f &n1 = N[trisIndex[triIndex * 3 + 1]]; 
    const Vec3f &n2 = N[trisIndex[triIndex * 3 + 2]]; 
    hitNormal = (1 - uv.x - uv.y) * n0 + uv.x * n1 + uv.y * n2; 
#endif 
 
    // doesn't need to be normalized as the N's are normalized but just for safety
    hitNormal.normalize(); 
 
    // texture coordinates
    const Vec2f &st0 = texCoordinates[trisIndex[triIndex * 3]]; 
    const Vec2f &st1 = texCoordinates[trisIndex[triIndex * 3 + 1]]; 
    const Vec2f &st2 = texCoordinates[trisIndex[triIndex * 3 + 2]]; 
    hitTextureCoordinates = (1 - uv.x - uv.y) * st0 + uv.x * st1 + uv.y * st2; 
} 

Note that this will only give the impression of a smooth surface. If you look at the polygonal sphere in the image below, you can still see that the outline is faceted, even though the inner surfaces appear to be smooth. This technique improves the appearance of triangular meshes, but of course does not completely solve the problem of their faceted appearance. The only solution to this problem is to use subdivision surfaces (which we discuss in a different section), or of course increase the number of triangles used when converting a smooth surface to a triangle mesh.


We're now ready to learn how to reproduce the appearance of a diffuse surface. However, diffuse surfaces need to be lit to be visible. So, before looking at this technique, we first need to understand how to handle the concept of light sources in a 3D engine.


Original link: Surface normal and vertex normal - BimAnt


Origin blog.csdn.net/shebao3333/article/details/132690273