The Phong lighting model: principles and shader implementation

Lighting in the real world is extremely complicated and depends on far too many factors to calculate with our limited processing power. Lighting in OpenGL is therefore based on approximations of reality, using simplified models that are much easier to work with and that look relatively similar.

These lighting models are based on our understanding of the physics of light. One of these models is called the Phong lighting model. The main building blocks of the Phong lighting model consist of 3 components: ambient lighting, diffuse lighting, and specular lighting. Below you can see what these lighting components look like individually and combined:
[Figure: the ambient, diffuse, and specular components shown individually and combined]


  • Ambient lighting: Even in darkness, there is usually still some light somewhere in the world (moon, distant light), so objects are almost never completely dark. To simulate this we use the ambient lighting constant which always gives some color to the object.
  • Diffuse Lighting: Simulates the directional impact a light source has on an object. This is the most visually significant component of the lighting model. The more a part of an object faces the light source, the brighter it becomes.
  • Specular Lighting: Simulates the points of light that appear on shiny objects. Specular highlights are more towards the color of the light than the color of the object.

To create a visually interesting scene, we want to simulate at least these 3 lighting components. We'll start with the simplest one: ambient lighting.

1. Ambient lighting

Light usually does not come from a single source, but from many sources scattered around us, even if they are not immediately visible. One of the properties of light is that it can scatter and reflect in many directions, reaching points that cannot be seen directly; therefore, light can reflect off other surfaces and have an indirect effect on the illumination of objects. Algorithms that take this into account are called global illumination algorithms, but these are complex and computationally expensive.

Since we're not too fond of complicated and expensive algorithms, we'll start with a very simplistic model of global illumination called ambient lighting. As you saw in the previous section, we use a small constant (light) color that is added to the final color of the object's fragments, making it look like there is always some scattered light even when there is no direct light source.

Adding ambient lighting to a scene is very simple. We take the light's color, multiply it by a small constant ambient factor, multiply it with the object's color, and use that as the fragment's color in the cube's object shader:

#version 330 core
out vec4 FragColor;

uniform vec3 objectColor;
uniform vec3 lightColor;

void main()
{
    float ambientStrength = 0.1;
    vec3 ambient = ambientStrength * lightColor;

    vec3 result = ambient * objectColor;
    FragColor = vec4(result, 1.0);
}

If you run the program now, you'll notice that this first stage of lighting is successfully applied to the object. The object is quite dark, but not completely, because of the ambient lighting applied (note that the light cube is unaffected because we use a different shader for it). It should look like this:
[Figure: cube rendered with ambient lighting only]

2. Diffuse lighting

Ambient lighting by itself won't produce the most interesting results, but diffuse lighting will start to give a significant visual impact on objects. Diffuse lighting gives an object more brightness the closer its fragments are aligned to the light rays from a light source. To give you a better understanding of diffuse lighting, take a look at the image below:
[Figure: a light ray hitting a fragment at angle θ to its normal vector]

On the left we find a light source whose ray is aimed at a single fragment of our object. We need to measure at what angle the light ray touches the fragment; the light has the greatest impact if the ray is perpendicular to the object's surface. To measure the angle between the light ray and the fragment, we use something called a normal vector: a vector perpendicular to the fragment's surface (depicted here as a yellow arrow); we'll get to that later. The angle between the two vectors can then easily be calculated with the dot product.

You may recall from the Transformations chapter that the smaller the angle between two unit vectors, the more the dot product is inclined towards a value of 1. When the angle between both vectors is 90 degrees, the dot product becomes 0. The same applies to θ: the larger θ becomes, the less impact the light should have on the fragment's color.
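To make this concrete: for two unit vectors n̂ (the normal) and l̂ (the direction towards the light), the dot product reduces to exactly the cosine of the angle between them, which is the diffuse factor we're after:

\[ \hat{n} \cdot \hat{l} = \|\hat{n}\|\,\|\hat{l}\|\cos\theta = \cos\theta \]

So at θ = 0° the factor is 1 (full brightness), and at θ = 90° it is 0 (no contribution).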

Note that to get (only) the cosine of the angle between both vectors, we will work with unit vectors (vectors of length 1), so we need to make sure all the vectors are normalized; otherwise the dot product returns more than just the cosine (see the Transformations chapter).

The resulting dot product thus returns a scalar that we can use to calculate the light's impact on the fragment's color, resulting in differently lit fragments based on their orientation towards the light.

So, what do we need to calculate diffuse lighting:

  • Normal vector: a vector perpendicular to the vertex' surface.
  • Directed light ray: a direction vector that is the difference vector between the light's position and the fragment's position. To calculate this light ray, we need the light's position vector and the fragment's position vector.

3. Normal vector

A normal vector is a (unit) vector perpendicular to the surface of a vertex. Since a vertex by itself has no surface (it's just a single point in space), we retrieve a normal vector by using its surrounding vertices to figure out the surface of the vertex.

We can use a little trick to calculate the normal vectors for all the cube's vertices by using the cross product, but since a 3D cube is not a complicated shape, we can simply add them manually to the vertex data. The updated vertex data array can be found here. Try to visualize that the normals are indeed vectors perpendicular to each plane's surface (a cube consists of 6 planes).
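If you'd rather compute the normals instead of hard-coding them, here is a minimal sketch of that cross-product trick, assuming GLM and counter-clockwise winding (triangleNormal is a hypothetical helper, not part of the tutorial code):

#include <glm/glm.hpp>

// Hypothetical helper: the cross product of two edges of triangle (a, b, c)
// yields a vector perpendicular to its surface; normalizing it gives the
// unit normal. Assumes counter-clockwise winding when seen from the front.
glm::vec3 triangleNormal(const glm::vec3& a, const glm::vec3& b, const glm::vec3& c)
{
    return glm::normalize(glm::cross(b - a, c - a));
}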

Since we added extra data to the vertex array, we should update the cube's vertex shader:

#version 330 core
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec3 aNormal;
...

Now that we added normal vectors to each vertex and updated the vertex shader, we should also update the vertex attribute pointers. Note that the light's cube uses the same vertex array for its vertex data, but the light shader does not use the newly added normal vector. We don't have to update the light's shader or attribute configuration, but we must at least modify the vertex attribute pointers to reflect the new vertex array size:

glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(float), (void*)0);
glEnableVertexAttribArray(0);

We only want to use the first 3 floats of each vertex and ignore the last 3 floats, so we only need to update the stride parameter to 6 times the size of a float.
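For the container's VAO, on the other hand, we do use the new normals, so we also need a second attribute pointer; a sketch, assuming the interleaved position-then-normal layout of the updated vertex array:

// position attribute (location = 0): the first 3 floats of each vertex
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(float), (void*)0);
glEnableVertexAttribArray(0);
// normal attribute (location = 1): the 3 floats after the position
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(float), (void*)(3 * sizeof(float)));
glEnableVertexAttribArray(1);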

Using vertex data that is not completely used by the lamp shader may seem inefficient, but the vertex data is already stored in GPU memory from the container object, so we don't have to store new data into GPU memory. This actually makes it more efficient compared to allocating a new VBO specifically for the lamp.

All lighting calculations are done in the fragment shader, so we need to forward the normal vector from the vertex shader to the fragment shader. Let's do that:

out vec3 Normal;

void main()
{
    gl_Position = projection * view * model * vec4(aPos, 1.0);
    Normal = aNormal;
} 

All that's left to do is declare the corresponding input variables in the fragment shader:

in vec3 Normal;  

4. Calculate the diffuse color

Now we have the normal vector for each vertex, but we still need the light's position vector and the fragment's position vector. Since the light's position is a single static variable, we can declare it as a uniform variable in the fragment shader:

uniform vec3 lightPos;

Then update the uniform in the render loop (or outside, since it doesn't change every frame). We use the lightPos vector declared in the previous chapter as the position of the diffuse light:

lightingShader.setVec3("lightPos", lightPos);  

Then the last thing we need is the location of the actual fragment. We'll be doing all our lighting calculations in world space, so we need the vertex positions in world space first. We can do this by multiplying the vertex position attribute with only the model matrix (rather than the view and projection matrices) to convert it to world space coordinates. This can be easily done in the vertex shader, so let's declare an output variable and calculate its world space coordinates:

out vec3 FragPos;  
out vec3 Normal;
  
void main()
{
    gl_Position = projection * view * model * vec4(aPos, 1.0);
    FragPos = vec3(model * vec4(aPos, 1.0));
    Normal = aNormal;
}

Finally add the corresponding input variables to the fragment shader:

in vec3 FragPos;  

This input variable will be interpolated from the triangle's 3 world position vectors to form the FragPos vector, the world position of each fragment. Now that all the required variables are set, we can start the lighting calculations.

The first thing we need to calculate is the direction vector between the light source and the fragment's position. We know from the previous section that the light's direction vector is the difference vector between the light's position vector and the fragment's position vector. As you may remember from the Transformations chapter, we can easily calculate this difference by subtracting both vectors. We also want to make sure all the relevant vectors end up as unit vectors, so we normalize both the normal vector and the resulting direction vector:

vec3 norm = normalize(Normal);
vec3 lightDir = normalize(lightPos - FragPos);  

When calculating lighting, we usually do not care about the magnitude of a vector or its position; we only care about its direction. Because we only care about direction, almost all the calculations are done with unit vectors, since this simplifies most calculations (like the dot product). So when doing lighting calculations, make sure you always normalize the relevant vectors to ensure they're actual unit vectors. Forgetting to normalize a vector is a popular mistake.

Next, we need to calculate the diffuse impact of the light on the current fragment by taking the dot product between the norm and lightDir vectors. The resulting value is then multiplied with the light's color to get the diffuse component; the larger the angle between both vectors, the darker the diffuse component:

float diff = max(dot(norm, lightDir), 0.0);
vec3 diffuse = diff * lightColor;

If the angle between the two vectors is greater than 90 degrees, the result of the dot product will actually become negative, ending up with a negative diffuse component. Therefore, we use the max function to return the highest of the two parameters to ensure that the diffuse component (and thus the color) never goes negative. Negative lighting isn't really defined, so it's best to stay away from it unless you're one of those wacky artists.

Now that we have our ambient and diffuse components, we add the two colors to each other, then multiply the result with the object's color to get the resulting fragment's output color:

vec3 result = (ambient + diffuse) * objectColor;
FragColor = vec4(result, 1.0);
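Putting the snippets together, the object's fragment shader at this stage looks something like this (a sketch assembled from the pieces above):

#version 330 core
out vec4 FragColor;

in vec3 Normal;
in vec3 FragPos;

uniform vec3 lightPos;
uniform vec3 lightColor;
uniform vec3 objectColor;

void main()
{
    // ambient
    float ambientStrength = 0.1;
    vec3 ambient = ambientStrength * lightColor;

    // diffuse
    vec3 norm = normalize(Normal);
    vec3 lightDir = normalize(lightPos - FragPos);
    float diff = max(dot(norm, lightDir), 0.0);
    vec3 diffuse = diff * lightColor;

    vec3 result = (ambient + diffuse) * objectColor;
    FragColor = vec4(result, 1.0);
}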

If your application (and shaders) compiled successfully, you should see something like this:
[Figure: cube with ambient and diffuse lighting]

You can see that with diffuse lighting, the cube starts to look like a real cube again. Try to visualize the normal vectors in your head and move the camera around the cube; you'll see that the larger the angle between the normal vector and the light's direction vector, the darker the fragment becomes.

If you get stuck, compare your source code with the full source code here.

5. One last thing

In the previous section we passed the normal vector directly from the vertex shader to the fragment shader. However, the calculations in the fragment shader are all done in world space, so shouldn't we transform the normal vectors to world space coordinates as well? Basically yes, but it's not as simple as multiplying with a model matrix.

First of all, a normal vector is only a direction vector and does not represent a specific position in space. Also, normal vectors do not have a homogeneous coordinate (the w component of a vertex position), which means translations should not have any effect on them. So if we want to multiply the normal vector with the model matrix, we want to remove the translation part of the matrix by taking the upper-left 3x3 part of the model matrix (note that we could also set the normal vector's w component to 0 and multiply with the full 4x4 matrix).
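A quick sanity check of why the w = 0 trick works: a 4x4 model matrix with rotation/scale part A and translation column t, applied to a vector with w = 0, never touches the translation:

\[
\begin{pmatrix} A & t \\ 0^{T} & 1 \end{pmatrix}
\begin{pmatrix} n \\ 0 \end{pmatrix}
=
\begin{pmatrix} A\,n + 0\,t \\ 0 \end{pmatrix}
=
\begin{pmatrix} A\,n \\ 0 \end{pmatrix}
\]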

Second, if the model matrix performs a non-uniform scale, the vertices are changed in such a way that the normal vector is no longer perpendicular to the surface. The image below shows the effect such a model matrix (with non-uniform scaling) has on the normal vector:

[Figure: a non-uniformly scaled surface whose normal vector is no longer perpendicular to it]

Whenever we apply non-uniform scaling (note: uniform scaling only changes the magnitude of the normal, not its direction, which can be easily fixed with normalization), the normal vector is no longer perpendicular to the corresponding surface, which distorts the lighting.

The trick to fixing this behavior is to use a different model matrix specifically tailored for normal vectors. This matrix is called the normal matrix, and it uses a few linear algebraic operations to remove the effect of wrongly scaling the normal vectors. If you want to know how this matrix is calculated, I suggest reading this article.

The normal matrix is defined as "the transpose of the inverse of the upper-left 3x3 part of the model matrix". Phew, that's a mouthful; don't worry if you don't really understand what that means, as we haven't discussed inverse and transpose matrices yet. Note that most resources define the normal matrix as derived from the model-view matrix, but since we're working in world space (and not in view space), we derive it from the model matrix.
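In symbols, with M the model matrix, the normal matrix N is:

\[ N = \left( \left( M_{3\times 3} \right)^{-1} \right)^{T} \]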

In the vertex shader, we can generate the normal matrix ourselves by using the inverse and transpose functions, which work on any matrix type. Note that we cast the matrix to a 3x3 matrix to ensure it loses its translation properties and can be multiplied with the vec3 normal vector:

Normal = mat3(transpose(inverse(model))) * aNormal;  

Inversing matrices is a costly operation for shaders, so wherever possible try to avoid it, since it has to be done for every vertex of your scene. For learning purposes this is fine, but for an efficient application you'll likely want to calculate the normal matrix on the CPU and send it to the shaders via a uniform before drawing (just like the model matrix).
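For instance, a minimal sketch of the CPU-side version, assuming GLM and that your Shader class has a setMat3 helper alongside setVec3 (a hypothetical but common addition):

// computed once per object on the CPU instead of once per vertex
glm::mat3 normalMatrix = glm::transpose(glm::inverse(glm::mat3(model)));
lightingShader.setMat3("normalMatrix", normalMatrix);

The vertex shader would then simply do Normal = normalMatrix * aNormal; instead of computing the inverse itself.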

In the diffuse lighting section, the lighting was fine because we didn't do any scaling on the object, so there was no real need to use a normal matrix; we could've just multiplied the normals with the model matrix. If you are doing a non-uniform scale, however, it is essential that you multiply your normal vectors with the normal matrix.

6. Specular lighting

If you're not exhausted by all the lighting discussions, we can start to finish the Phong lighting model by adding specular highlights.

Similar to diffuse lighting, specular lighting is based on the light's direction vector and the object's normal vector, but this time it's also based on the view direction, i.e. which direction the player is looking at the fragment from. Specular lighting is based on the reflective properties of surfaces. If we think of the surface of an object as a mirror, the specular lighting is strongest wherever we see reflected light on the surface. You can see this effect in the image below:

[Figure: specular lighting, with the reflection vector and the view direction]

We calculate a reflection vector by reflecting the light direction around the normal vector. Then we calculate the angular distance between this reflection vector and the view direction. The smaller the angle between them, the greater the impact of the specular light. The resulting effect is that we see a bit of a highlight when we're looking at the light's direction reflected via the surface.

The view vector is an extra variable required for specular lighting, we can calculate it using the viewer's world space position and the fragment's position. We then calculate the intensity of the specular reflection, multiply it by the light color, and add it to the ambient and diffuse components.

We chose to do lighting calculations in world space, but most people prefer to do lighting in view space. One advantage of view space is that the observer's position is always at (0,0,0), so you already have the observer's position easily. However, I find calculating lighting in world space more intuitive for learning purposes. If you still want to calculate lighting in view space, you also need to use the view matrix to transform all relevant vectors (don't forget to change the normal matrix too).

In order to get the world space coordinates of the viewer, we just need to get the position vector of the camera object (of course the viewer). So let's add another uniform to the fragment shader, and pass the camera position vector to the shader:

// in the fragment shader
uniform vec3 viewPos;

// in the application code (render loop)
lightingShader.setVec3("viewPos", camera.Position);

Now that we have all the required variables, we can calculate the specular strength. First, we define a specular strength value that gives the specular highlight a mid-bright color so it doesn't have too much of an impact:

float specularStrength = 0.5;

If we set it to 1.0f, we get a very bright specular component, which is a bit too much for the coral cube. In the next chapter we'll discuss how to get all these lighting intensities right and how they affect objects. Next we compute the view direction vector and the corresponding reflection vector along the normal axis:

vec3 viewDir = normalize(viewPos - FragPos);
vec3 reflectDir = reflect(-lightDir, norm); 

Note that we negate the lightDir vector. The reflect function expects the first vector to point from the light source towards the fragment's position, but the lightDir vector currently points the other way: from the fragment towards the light source (this depends on the order of subtraction earlier, when we calculated the lightDir vector). To make sure we get the correct reflect vector, we reverse the lightDir vector's direction by negating it first. The second argument expects a normal vector, so we supply the normalized norm vector.
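For reference, GLSL's built-in reflect(I, N) computes the reflection of an incident vector I around a (normalized) normal N as:

\[ R = I - 2\,(N \cdot I)\,N \]

which is why I must point towards the surface, hence the negation of lightDir.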

Then all that's left to do is actually calculate the specular component. This is done with the following formula:

float spec = pow(max(dot(viewDir, reflectDir), 0.0), 32);
vec3 specular = specularStrength * spec * lightColor;  

We first calculate the dot product between the view direction and the reflect direction (and make sure it's not negative), and then raise it to the power of 32. This 32 value is the shininess value of the highlight. The higher the shininess value of an object, the more it properly reflects the light instead of scattering it all around, and thus the smaller the highlight becomes. Below you can see an image showing the visual impact of different shininess values:
[Figure: specular highlights at different shininess values]

We don't want the specular component to be too distracting, so we keep the exponent at 32. The only thing left to do is add it to the ambient and diffuse components and multiply the combined result with the object's color:

vec3 result = (ambient + diffuse + specular) * objectColor;
FragColor = vec4(result, 1.0);
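With that, the complete main function of the object's fragment shader reads something like this (a sketch assembled from all the snippets in this section):

void main()
{
    // ambient
    float ambientStrength = 0.1;
    vec3 ambient = ambientStrength * lightColor;

    // diffuse
    vec3 norm = normalize(Normal);
    vec3 lightDir = normalize(lightPos - FragPos);
    float diff = max(dot(norm, lightDir), 0.0);
    vec3 diffuse = diff * lightColor;

    // specular
    float specularStrength = 0.5;
    vec3 viewDir = normalize(viewPos - FragPos);
    vec3 reflectDir = reflect(-lightDir, norm);
    float spec = pow(max(dot(viewDir, reflectDir), 0.0), 32);
    vec3 specular = specularStrength * spec * lightColor;

    vec3 result = (ambient + diffuse + specular) * objectColor;
    FragColor = vec4(result, 1.0);
}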

We now calculate all the lighting components of the Phong lighting model. Depending on your viewing angle, you should see something like the following:

You can find the full source code of the application here.


Original Link: Phong Lighting Model—BimAnt

Origin: blog.csdn.net/shebao3333/article/details/131888259