Study Notes 27

Here's how our hack approximates true Lambertian reflection. When we pick a random point inside the unit sphere for the scatter direction, most of the probability mass sits in the "belly" of the sphere, near the normal, and very little sits near the bottom, which corresponds to grazing angles.

Very few rays scatter at grazing angles, so the contributions from directions that deviate far from the normal stay small.

As for the true Lambertian distribution mentioned here, the method is to pick random points on the surface of the unit sphere, which yields the cos(θ) distribution.

Getting a point on that sphere is simple: just add a random unit vector offset to the sphere's center. There is a function above for generating a random unit vector; once we have it, we add it to the center of the unit sphere sitting on the hit point, i.e. hit point + normal + random unit vector.

This calculation does not look essentially different from the hack, but the actual render shows fewer dark speckles. This is the Lambertian scattering used from here on.
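A minimal sketch of that calculation (assuming a bare-bones Vec3 and a rejection-sampled random_unit_vector; the names are mine, not necessarily the book's exact code):

```cpp
#include <cmath>
#include <cstdlib>

struct Vec3 { double x, y, z; };

// Uniform random double in [0, 1)
double random_double() { return std::rand() / (RAND_MAX + 1.0); }

// Random unit vector: rejection-sample a point inside the unit sphere,
// then normalize it onto the sphere's surface.
Vec3 random_unit_vector() {
    while (true) {
        double x = 2*random_double() - 1;
        double y = 2*random_double() - 1;
        double z = 2*random_double() - 1;
        double len = std::sqrt(x*x + y*y + z*z);
        if (len > 1e-8 && len <= 1.0)
            return {x/len, y/len, z/len};
    }
}

// True Lambertian scatter direction: a random point on the unit sphere
// centered at the tip of the normal, i.e. normal + random unit vector.
Vec3 lambertian_direction(const Vec3& normal) {
    Vec3 s = random_unit_vector();
    return {normal.x + s.x, normal.y + s.y, normal.z + s.z};
}
```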

And not only in my render: the author's picture also has fewer black speckles if you look carefully.

It also says that this Lambertian distribution is more uniform: both methods scatter with higher probability near the normal, but the Lambertian one is less concentrated there.

He seems to mean that our hack behaves like cos³(θ) while true Lambertian behaves like cos(θ), though he does not explain why...

In short, after switching to Lambertian scattering there are fewer dark shadows, the ball is brighter, and the surface looks smoother, all because the scattering is more uniformly spread out.

Note that everything above offsets by the normal first. What happens without the normal offset? (Meaning: when picking the target point, the random vector is added to p directly instead of p + normal + random vector.)

First look at the role of the normal offset: when the random unit vector we generate points into the surface, using that vector directly as the outgoing direction is unreasonable.

By offsetting along the normal first, every candidate point lies outside the surface, so no unreasonable direction can result.

Of course, it is also possible to reject the half of the generated unit vectors that point into the surface. That way, without the normal offset, the direction is random over the whole outward hemisphere:
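A sketch of that hemisphere version (again with my own minimal helpers, not the book's exact code):

```cpp
#include <cmath>
#include <cstdlib>

struct Vec3 { double x, y, z; };

double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

double random_double() { return std::rand() / (RAND_MAX + 1.0); }

Vec3 random_unit_vector() {
    while (true) {
        double x = 2*random_double() - 1;
        double y = 2*random_double() - 1;
        double z = 2*random_double() - 1;
        double len = std::sqrt(x*x + y*y + z*z);
        if (len > 1e-8 && len <= 1.0)
            return {x/len, y/len, z/len};
    }
}

// Hemisphere scattering: generate a uniform random unit vector and flip it
// if it points into the surface, so no normal offset is needed.
Vec3 random_in_hemisphere(const Vec3& normal) {
    Vec3 s = random_unit_vector();
    if (dot(s, normal) > 0.0) return s;   // already in the outward hemisphere
    return {-s.x, -s.y, -s.z};            // flip the inward half
}
```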

When it comes to materials, there are two ideas for class design. One is to write a single material class and expose various parameters to tune different effects.

The other is to write a variety of material classes, each effectively hard-coding some of those parameters (setting certain ones to zero according to the type).

The latter is used here. Either way the idea is similar: every material has to solve one thing, essentially the BRDF: given the incoming light, produce the outgoing light, including its direction and color information.

So again this is encapsulated as a pure virtual interface.
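A sketch of what such an interface can look like (the Material and FlatDiffuse names and the exact signature are my illustration, not necessarily the book's code):

```cpp
#include <memory>

struct Vec3 { double x, y, z; };
struct Ray { Vec3 origin, direction; };
struct HitRecord { Vec3 p, normal; };

// The pure virtual interface every material implements: given the incoming
// ray and the hit data, produce the scattered ray plus a color attenuation.
// Returning false means the ray was absorbed.
class Material {
public:
    virtual ~Material() = default;
    virtual bool scatter(const Ray& in, const HitRecord& rec,
                         Vec3& attenuation, Ray& scattered) const = 0;
};

// A trivial concrete material for illustration only: it always scatters
// straight along the normal and attenuates by a fixed albedo.
class FlatDiffuse : public Material {
public:
    explicit FlatDiffuse(Vec3 albedo) : albedo_(albedo) {}
    bool scatter(const Ray&, const HitRecord& rec,
                 Vec3& attenuation, Ray& scattered) const override {
        scattered = Ray{rec.p, rec.normal};
        attenuation = albedo_;
        return true;
    }
private:
    Vec3 albedo_;
};
```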

With the interface in place, think about when the scatter function is called: it should be called after the hit call returns.

All of our shading work so far has gone through the hit data recorded in rec. Now the calculation also needs the object's material member, which would mean passing yet another parameter, and rec exists precisely in the spirit of reducing parameter passing.

So the material pointer is also placed in the record. The overall flow is then: the material is a member of the object; we assign the appropriate material to each object; when the object participates in the hit calculation, it stores its material into rec; after hit finishes, rec holds data such as the surface normal and hit position, and adding a scatter call gives us the final color directly.

This section describes that entire flow above.

That is exactly what the code does.

With the parent material's interface specified, we can create different material subclasses and hand them to different objects at the right time, giving the objects different properties. When computing color, the calls then dispatch polymorphically to the different subclass objects.

Note that a forward declaration is added here because of a circular reference: the record needs material, and material needs the record. If the two headers simply include each other, it is definitely wrong.
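A single-file sketch of how the forward declaration breaks the cycle (in the real project these would be two headers; the names are illustrative):

```cpp
#include <memory>

// What hittable.h does: instead of #include "material.h",
// a forward declaration breaks the include cycle.
class material;  // forward declaration

struct hit_record {
    std::shared_ptr<material> mat_ptr;  // fine: pointer to an incomplete type
    double t = 0;
};

// What material.h does: it can safely use the full hit_record,
// because the cycle was broken on the other side.
class material {
public:
    virtual ~material() = default;
    virtual double attenuate(const hit_record& rec) const { return rec.t; }
};
```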

Two things are mentioned here: attenuation and absorption. Attenuation means, say, white light comes in and red light goes out; absorption means white light comes in and no light comes out.

You can consider attenuation only, that is, assume white light and reflect the albedo. You can also model both: to keep the expected total reflection the same, let the ray be absorbed with some probability, and divide the reflectance of the rays that do scatter by the survival probability. The surviving rays are brighter, but some rays are simply not reflected at all.

Then there is another issue with scatter_direction, which is randomly generated: when the random unit vector is exactly opposite the normal, their sum is the zero vector, and a zero reflection direction causes calculation errors later.

So a check is prepared here to catch and discard this case.

The fabs calls above take the absolute value of floating-point numbers.
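A sketch of that near-zero check (scalar components for brevity; the real code presumably wraps this as a vec3 method):

```cpp
#include <cmath>

// Guard against a degenerate scatter direction: when the random unit vector
// is almost exactly opposite the normal, their sum is near zero in every
// component. std::fabs takes the floating-point absolute value.
bool near_zero(double x, double y, double z) {
    const double s = 1e-8;
    return std::fabs(x) < s && std::fabs(y) < s && std::fabs(z) < s;
}
```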

Understand the ray_color function like this: the parameter passed in is a ray, and the function answers what color that ray is.

In the actual light path it plays the role of the reflected ray. Under the Lambertian model, its color is the color of the incident light multiplied by the attenuation coefficient; that reflected color is our desired result.

And how do we compute the color of the incident light? With ray_color again, so it is recursive.

In this way we can give our objects color.
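The recursive shape can be sketched with a scalar toy (this is not the real ray_color, just its control flow: a stand-in "scene" that misses after a fixed number of bounces, a depth limit, and per-bounce attenuation):

```cpp
// Toy scalar sketch of ray_color's recursive structure:
// each bounce multiplies the incident light by an attenuation factor and
// recurses; hitting the depth limit returns black; a miss returns background.
double ray_color(int bounces_until_miss, int depth) {
    if (depth <= 0)
        return 0.0;                    // exceeded recursion limit: black
    if (bounces_until_miss == 0)
        return 1.0;                    // ray escaped the scene: background light
    const double atten = 0.5;          // attenuation per bounce
    return atten * ray_color(bounces_until_miss - 1, depth - 1);
}
```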

Here we start to consider specular (mirror) reflection. The derivation is actually the same parallelogram construction as before.

Regarding the check on the scattered direction, this part is a bit unclear: the normal recorded in rec has already been chosen to point against the ray, so the dot product should seemingly always be greater than 0, with equality only in the tangent case.

So his rewrite here means: if the ray is tangent to the surface, the light did not really land on this point, and the ray should shade to black.

Of course, since scatter is a standardized interface and different materials compute differently, some specific material may genuinely need such a return value.
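The mirror reflection formula itself is small enough to sketch directly (minimal Vec3, my own helper names):

```cpp
struct Vec3 { double x, y, z; };

double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Mirror reflection: v - 2*dot(v,n)*n, with n a unit normal pointing
// against the incoming ray (the parallelogram construction).
Vec3 reflect(const Vec3& v, const Vec3& n) {
    double d = dot(v, n);
    return {v.x - 2*d*n.x, v.y - 2*d*n.y, v.z - 2*d*n.z};
}
```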

There is something wrong with the color here.

The middle ball's error is the most obvious, so it is the easiest to debug.

So I kept tracking its color, as follows:

The bottom lerp is the recursive case. Because I simplified the scene to a single mirror-reflective ball, there are no further collisions after one level of recursion, so the call returns and we can inspect its return value. That return value of ray_color is then multiplied by atten and returned again, this time back to the call in main.

The atten set here is 1, so main should have received the same value, but it changed, so the problem occurs in this return.

Single-step debugging shows that it goes through a multiplication overload of vec here; unexpectedly, that overload was written wrong.

So my current debugging process is basically: lock onto a pixel, and if the problem lies in the color, follow the color. The cause can usually still be found this way.

This is also wrapped up as a function, so it can be called directly for debugging in the future, though the debug output inside the color code may need to be removed manually, because printing takes a little time.

Now the result is correct, but note that our ball currently acts as a mirror ball: it is purely reflective, with no refraction yet.

When the surface of an object is not perfectly smooth, two incident rays with only a small difference in angle may still produce noticeably different reflections. That is what the figure above shows.

In ray tracing this is modeled as a small fluctuation of the reflected ray: an offset is added to the reflection direction, where the radius of a small sphere controls the amount of offset.

That sphere radius (the fuzz) is exposed as a parameter.
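A sketch of that fuzzy reflection (my helper names; fuzz is the perturbation sphere's radius, 0 meaning a perfect mirror):

```cpp
#include <cmath>
#include <cstdlib>

struct Vec3 { double x, y, z; };

double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

double random_double() { return std::rand() / (RAND_MAX + 1.0); }

// Rejection-sample a random point inside the unit sphere.
Vec3 random_in_unit_sphere() {
    while (true) {
        Vec3 p{2*random_double()-1, 2*random_double()-1, 2*random_double()-1};
        if (dot(p, p) < 1.0) return p;
    }
}

Vec3 reflect(const Vec3& v, const Vec3& n) {
    double d = dot(v, n);
    return {v.x - 2*d*n.x, v.y - 2*d*n.y, v.z - 2*d*n.z};
}

// Fuzzy reflection: perturb the mirror direction by a random point inside
// a small sphere of radius fuzz.
Vec3 fuzzy_reflect(const Vec3& v, const Vec3& n, double fuzz) {
    Vec3 r = reflect(v, n);
    Vec3 s = random_in_unit_sphere();
    return {r.x + fuzz*s.x, r.y + fuzz*s.y, r.z + fuzz*s.z};
}
```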

In addition, he mentions a problem here: if the ball is very large, the local surface is very flat, and at grazing angles the perturbed ray is likely to end up inside the ball. (His code does not handle this case. If we ignore it, the traced ray really does enter the ball after reflection and exits from another surface, which is wrong: an opaque ball should not transmit at all, and even a transparent one must have a refraction angle. To handle it we could modify the model so the normal always faces outward, so that such rays shade to black; but making the normal always face outward breaks transparent balls, where rays legitimately travel inside. So there is another problem either way.)

The derivation here is also fairly simple. The author does not give the full derivation of R', but consider the part in brackets.

If we take the incident ray R as a unit vector, then cosθ·n is the projection of R onto the n direction (both R and n are unit vectors). Adding R to it gives the third side of the triangle formed by these two sides: the component of R perpendicular to n.

Then the ratio of the two η values is, by Snell's law, the ratio of the sines. With unit-length hypotenuses each sine is just the length of the horizontal side, so multiplying the horizontal vector by that ratio gives the horizontal component of the refracted ray, R'⊥.

The rest is easy: from R'⊥ compute R'∥ (its length follows from R' being a unit vector), and R' is the sum of the two components.
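The whole derivation can be sketched as code (minimal Vec3; the formula follows the R'⊥ / R'∥ decomposition above):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Refraction of a unit-length ray uv at a surface with unit normal n,
// where etai_over_etat = η/η'. Implements:
//   R'⊥ = (η/η') * (R + cosθ·n)
//   R'∥ = -sqrt(1 - |R'⊥|²) * n
//   R'  = R'⊥ + R'∥
Vec3 refract(const Vec3& uv, const Vec3& n, double etai_over_etat) {
    double cos_theta = std::fmin(dot(Vec3{-uv.x, -uv.y, -uv.z}, n), 1.0);
    Vec3 r_perp{etai_over_etat * (uv.x + cos_theta*n.x),
                etai_over_etat * (uv.y + cos_theta*n.y),
                etai_over_etat * (uv.z + cos_theta*n.z)};
    double k = -std::sqrt(std::fabs(1.0 - dot(r_perp, r_perp)));
    return {r_perp.x + k*n.x, r_perp.y + k*n.y, r_perp.z + k*n.z};
}
```

With an index ratio of 1 the ray should pass straight through, which makes a handy sanity check.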

In reality, the algorithm above misses one case: when light passes from a denser medium such as glass or water into air, total internal reflection can occur.

In that case sinθ' computed from Snell's law comes out greater than 1, which has no solution.

So a restriction is needed: once the value reaches 1, the critical state has been reached and the ray is totally reflected; below that, the ray can refract (and partially reflect).

See the reference material for the details.

This is how sinθ' is solved for.
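That check can be sketched as a small predicate (names are mine):

```cpp
#include <cmath>

// Snell's law gives sinθ' = (η/η')·sinθ. If that value exceeds 1 there is
// no refracted ray, i.e. total internal reflection, so check before refracting.
bool can_refract(double cos_theta, double etai_over_etat) {
    double sin_theta = std::sqrt(1.0 - cos_theta * cos_theta);
    return etai_over_etat * sin_theta <= 1.0;
}
```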

In real life the reflectivity of glass changes depending on our viewing angle; this is in fact the Fresnel effect.
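The book handles this with Schlick's approximation to the Fresnel reflectance; a sketch:

```cpp
#include <cmath>

// Schlick's approximation: reflectivity rises sharply toward 1 at grazing
// angles (cosine -> 0); ref_idx is the refractive index.
double reflectance(double cosine, double ref_idx) {
    double r0 = (1 - ref_idx) / (1 + ref_idx);
    r0 = r0 * r0;
    return r0 + (1 - r0) * std::pow(1 - cosine, 5);
}
```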

Finally, here is an interesting trick: make a ball with a negative radius, placed at the same center as another ball, with the absolute value of the negative radius smaller than the other ball's radius.

What does this produce? Looking at the tracing code: the radius is squared in almost every place, so the intersection test is unaffected by the minus sign. Only in the code below does the sign matter, and there it affects exactly the normal recorded in rec.

In general, the normal simply gets reversed, but the code above is incomplete; the later version is:

After this correction, what gets recorded in rec is where it gets interesting.

First, when a ray enters the inner ball from the outside, front_face should be true, but because the normal is reversed here, it comes out false.

Of course, the normal itself still ends up pointing against the ray either way, so it has no effect on the subsequent calculation.

The key is that front_face is assigned an unreasonable value.

But we just used that value: the ratio of refractive indices is decided based on front_face.

In general, because front_face is inverted, the program considers our light to be going from the inside to the outside.

That is, the inside of the inner ball is treated as air and the outside as glass, and so on.

And since we actually put a glass ball around it, the program's calculation produces a glass ball that is hollow inside.
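The mechanism hinges on one line of the sphere code; a sketch of just that normal computation (minimal Vec3):

```cpp
struct Vec3 { double x, y, z; };

// The sphere's outward normal is (p - center) / radius, so a negative
// radius flips the normal. This is exactly what makes the hollow-glass
// trick work when an inner sphere of radius -0.45 sits inside a 0.5 one.
Vec3 outward_normal(const Vec3& p, const Vec3& center, double radius) {
    return {(p.x - center.x) / radius,
            (p.y - center.y) / radius,
            (p.z - center.z) / radius};
}
```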

Once the aspect ratio is fixed, it is enough to specify only one of the vertical or horizontal FOVs; the vertical one is used here.

Changing the FOV actually changes the virtual viewport: it changes the size of the viewport plane and therefore the angles of the emitted rays.

To determine the camera, three quantities are specified here: the camera position, the point the camera looks at, and the up direction.

With these three, an orthogonal set of axes can be constructed by cross products.
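That construction can be sketched as follows (minimal Vec3 helpers; lookfrom, lookat, and vup as described above):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

Vec3 sub(const Vec3& a, const Vec3& b) { return {a.x-b.x, a.y-b.y, a.z-b.z}; }
double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
Vec3 cross(const Vec3& a, const Vec3& b) {
    return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}
Vec3 unit(const Vec3& v) {
    double len = std::sqrt(dot(v, v));
    return {v.x/len, v.y/len, v.z/len};
}

// Build the camera's orthonormal basis from lookfrom, lookat and vup:
// w points backward along the view direction, u is the right axis,
// v is the camera's own up axis.
void camera_basis(const Vec3& lookfrom, const Vec3& lookat, const Vec3& vup,
                  Vec3& u, Vec3& v, Vec3& w) {
    w = unit(sub(lookfrom, lookat));  // opposite of the viewing direction
    u = unit(cross(vup, w));          // perpendicular to the vup-w plane
    v = cross(w, u);                  // lands back in the vup-w plane
}
```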

This part says that the camera's v and w axes always lie in the same plane as the chosen up vector. This is easy to see: the first cross product, u, is perpendicular to the plane determined by up and w, and the third axis v, being perpendicular to u, must lie back in that plane.

This also shows what the up vector is for: by adjusting it we control the plane that the camera's axes lie in. If it is tilted, the axes of the entire camera tilt with it.

Because the viewport scales proportionally, its distance from the camera can change accordingly; the code above simply places this plane at the focus distance.

Putting it at the focus distance is the same as placing it anywhere else along the view direction; it has no effect, because the only function of this plane is to determine the direction of each ray, and proportional scaling plus forward or backward translation leaves the rays unchanged.

The aperture here determines the radius of the lens disk, that is, the amount of defocus blur.

As for the simplified camera model here: since all the light passing through the lens focuses onto the sensor, we can treat the lens as the camera itself and skip the sensor-to-lens segment entirely.
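A sketch of the lens sampling this implies (rejection sampling in a unit disk, scaled to the lens radius; the names are mine):

```cpp
#include <cstdlib>

// Defocus blur: jitter each ray's origin within a lens disk of radius
// aperture/2. Points are rejection-sampled inside the unit disk and then
// scaled by the lens radius.
void random_on_lens(double lens_radius, double& x, double& y) {
    do {
        x = 2.0 * std::rand() / (RAND_MAX + 1.0) - 1.0;
        y = 2.0 * std::rand() / (RAND_MAX + 1.0) - 1.0;
    } while (x*x + y*y >= 1.0);
    x *= lens_radius;
    y *= lens_radius;
}
```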

 

Origin blog.csdn.net/yinianbaifaI/article/details/127702755