Study Notes 26

Note that the coordinate system here is based on the lower left corner as the origin.

The larger the coordinates, the larger the corresponding red and green channel components.

cerr - Zucc_zt - Blog Garden (cnblogs.com)

Two properties of cerr matter here: it is unbuffered, and it is bound to the display output (so progress messages still reach the terminal even while the image data on cout is redirected into a file).

PPM Viewer (rhodes.edu)

An online viewer for PPM files. There are many pitfalls here. First of all, the code must be written exactly right, because the file format has strict requirements; even a slight error and the image cannot be opened.
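For reference, a minimal valid plain-text PPM (P3) file, two pixels wide and one tall (red, then green), looks like this; deviating from this header layout is enough to make many viewers refuse it:

```
P3
2 1
255
255 0 0   0 255 0
```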

After adding the cerr output, running the program shows the progress display above, with a countdown-like effect.

This works because a \r is written in the cerr output. \r moves the cursor to the first position of the current line. So after the first output, the second output does not go to the next line; it returns to the start of the current line and overwrites the first output. Hence the countdown-like effect.
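A minimal sketch of such a progress line (the \r plus a flush does the trick; the exact wording is my own):

```cpp
#include <iostream>

int main() {
    const int image_height = 256;
    for (int j = image_height - 1; j >= 0; --j) {
        // \r returns to the start of the line, so each write overwrites the last.
        std::cerr << "\rScanlines remaining: " << j << ' ' << std::flush;
        // ... write one scanline of pixels to std::cout here ...
    }
    std::cerr << "\nDone.\n";
}
```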

A member initializer list here can use {} to write the initial values directly.
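A small example of what this likely refers to, a vec3-like class whose constructor's initializer list fills an array member with braces (my guess at the context):

```cpp
class vec3 {
  public:
    vec3() : e{0, 0, 0} {}                                   // braces initialize the array member
    vec3(double e0, double e1, double e2) : e{e0, e1, e2} {}
    double e[3];
};
```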

Note here that the ray's underlying line extends in two directions: negative t values give points behind the origin.

The thing to do here is to build something like the picture above.

Ray tracing is mainly divided into three steps: first, compute the ray from the eye through the pixel; second, find the intersections of that ray with the scene; third, perform shading calculations at the intersection points.

First, settle the screen. The earlier 256×256 image had sides of equal length, which makes it easy to mix up the two axes, so here a 16:9 aspect ratio is chosen.

Then there is the viewport. The viewport is a virtual rectangle placed in the scene for ray tracing; the colors computed for points on it are finally mapped to the pixels of our screen. Its height is 2 and its width follows the aspect ratio, and it sits at z = -1. The coordinate system is right-handed, with the camera facing down the -z axis.
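A sketch of that setup inside main, assuming the vec3/point3 helpers from this code base (the 400-pixel width is an arbitrary choice of mine):

```cpp
const auto aspect_ratio = 16.0 / 9.0;
const int image_width = 400;
const int image_height = static_cast<int>(image_width / aspect_ratio);

// Virtual viewport: height 2, width follows the aspect ratio,
// one unit in front of the camera along -z (right-handed coordinates).
auto viewport_height = 2.0;
auto viewport_width = aspect_ratio * viewport_height;
auto focal_length = 1.0;

auto origin = point3(0, 0, 0);
auto horizontal = vec3(viewport_width, 0, 0);
auto vertical = vec3(0, viewport_height, 0);
auto lower_left_corner = origin - horizontal/2 - vertical/2 - vec3(0, 0, focal_length);
```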

Adding a sphere: the intersection test for a sphere is relatively simple.

The modulus of the vector from the center of the sphere to a point is the distance between the two; comparing it with the radius tells whether the point lies on the sphere.

We parameterize a point on the ray by t and substitute it into the sphere equation. If we can solve for t, there is an intersection; otherwise there is none. (Of course, the physical meaning of t must also be considered.)

Solving for t is a quadratic equation, so we only need to compute the discriminant delta.

The derivation above relates the expansion to the quadratic's coefficients.
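A sketch of that test, assuming the vec3/ray helpers; the ray is P(t) = A + t·b and the sphere is (P − C)·(P − C) = r²:

```cpp
// Does ray r hit the sphere at 'center' with 'radius'? Expanding
// (A + t*b - C).(A + t*b - C) = r^2 gives a quadratic a*t^2 + b*t + c = 0.
bool hit_sphere(const point3& center, double radius, const ray& r) {
    vec3 oc = r.origin() - center;                // A - C
    auto a = dot(r.direction(), r.direction());
    auto b = 2.0 * dot(oc, r.direction());
    auto c = dot(oc, oc) - radius * radius;
    auto discriminant = b*b - 4*a*c;              // delta decides everything
    return discriminant > 0;
}
```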

After the intersection test works, some small problems remain. For example, moving the sphere's center from (0,0,-1) to (0,0,1) renders exactly the same image.

This comes from ignoring the actual physical meaning of t: negative solutions, which lie behind the camera, are still accepted as hits.

Considering actual shading, we need the normal. The normal should be normalized to unit length, which is needed for convenient calculation later.
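For a sphere specifically, dividing by the radius gives a unit normal for free (a one-line sketch; p and center as in the surrounding discussion):

```cpp
vec3 outward_normal = (p - center) / radius;   // |p - center| == radius, so already unit length
```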

About the simplification: the formula gets simpler, but in terms of computational efficiency there is no qualitative change compared with the original.

An easy way to remember it: keep the familiar quadratic formula, substitute h for b (where b = 2h), drop the factors of 2 and 4, and keep the minus sign.
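Written out, with $b = 2h$ (so $h = \mathbf{d}\cdot(\mathbf{A}-\mathbf{C})$ for ray direction $\mathbf{d}$, origin $\mathbf{A}$, center $\mathbf{C}$):

$$t = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a} = \frac{-2h \pm \sqrt{4h^2 - 4ac}}{2a} = \frac{-h \pm \sqrt{h^2 - ac}}{a}$$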

With the optimization done, consider the objects being hit. What if there are multiple spheres, or several different kinds of objects?

Once multiple types are involved, inheritance is basically unavoidable.

All these objects share one inevitable behavior: judging whether a ray intersects them.

So a base class can be abstracted out here, and it can directly be an abstract base class.

Then standardize an interface. Compared with our earlier intersection test, one extra thing is passed in: a range for t. This saves a lot of work, because intersections outside the range are simply not considered. For example, if the range requires t greater than 0, any t below 0 is ignored, which also excludes objects behind the camera.

Then there are the parameters: just the ray. The sphere's center and radius no longer need to be passed in; this is the benefit of encapsulating it into a class. The function becomes a member function, so all properties of the class are accessed directly and need not be passed as parameters.
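A sketch of that abstract base class and a sphere deriving from it, following the book's naming (hit_record is the output struct discussed just below):

```cpp
class hittable {
  public:
    // Pure virtual: every object must answer whether ray r hits it,
    // with the hit parameter t restricted to (t_min, t_max).
    virtual bool hit(const ray& r, double t_min, double t_max, hit_record& rec) const = 0;
};

class sphere : public hittable {
  public:
    sphere(point3 cen, double r) : center(cen), radius(r) {}
    // center and radius are data members now, so hit() needs no extra parameters.
    bool hit(const ray& r, double t_min, double t_max, hit_record& rec) const override;
  private:
    point3 center;
    double radius;
};
```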

What needs to be considered next: what happens after an intersection? Previously the function had a single return value, but here it is not known in advance which object the ray intersects, so the return type is not obvious. I chose to encapsulate a struct and pack into it the things that often need to be returned.
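A sketch of that struct, with the fields that typically need returning (the exact set follows the book; treat it as an assumption):

```cpp
struct hit_record {
    point3 p;      // where the ray hit the surface
    vec3 normal;   // surface normal at p (its direction is discussed next)
    double t;      // ray parameter of the hit
};
```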

This raises the question of the normal's direction. Since we construct the normal from two points, it is not automatically determined which point it points from and which it points toward.

Previously it pointed outward along the radius. But consider refraction through a transparent sphere: when the light has entered the sphere and is on its way out, the normal we want faces inward.

The correct normal direction should always be against the ray's direction.

Following the method above, looking from inside to outside: if the ray hits the surface from the inside going out, the normal should point inward; if from the outside going in, it should point outward.

To decide whether the ray is inside or outside, compare the ray's direction with the outward normal; the sign of their dot product determines it.

Note that under the earlier rasterization pipeline, this same dot product is often used for back-face culling.

One might wonder: if dot is used here to judge inside versus outside, when does the culling judgment happen? Don't the two uses conflict?

In fact, the key point is that there is no culling at all in a ray tracing system; no surface needs to be clipped away. If surfaces were culled, how would the bounced rays be traced off them?

So the dot product here is used only for this inside/outside judgment.

So the final approach is: the geometry itself stores the outward normal, and based on which side the ray comes from we decide whether the normal needs to be flipped, and record the result.

Because under ray tracing a shading point is not shaded just once; it may be reached by rays multiple times, so this only needs to be computed and stored once.
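A sketch of that bookkeeping, extending the struct above in the spirit of the book's set_face_normal:

```cpp
struct hit_record {
    point3 p;
    vec3 normal;
    double t;
    bool front_face;   // true if the ray hit the surface from outside

    void set_face_normal(const ray& r, const vec3& outward_normal) {
        // dot < 0: the ray opposes the stored outward normal, so it came
        // from outside; otherwise flip the normal so it opposes the ray.
        front_face = dot(r.direction(), outward_normal) < 0;
        normal = front_face ? outward_normal : -outward_normal;
    }
};
```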

error C2243: "type conversion": conversion exists, but is inaccessible

Of the three lines above: the first defines a subclass variable; the second passes that variable as the second argument; the third is the function prototype, whose second formal parameter is a reference to the parent class.

First of all, this pattern is legal: a parent-class reference binding to a subclass object. (The parent class here is an abstract base class.)

The real problem lies below. When the smart pointer is created, an object of the given type is constructed; the <> template argument there was the abstract base class type, and such an object can never be created.

So that was the bug, and it took nearly an hour of struggling to track it down.
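A minimal sketch of the mistake and the fix as I understand the description, using the hittable/sphere names from above:

```cpp
#include <memory>

// Wrong: the template argument is the abstract base class, and an
// abstract class cannot be constructed.
// auto obj = std::make_shared<hittable>(point3(0, 0, -1), 0.5);   // does not compile

// Right: construct the concrete subclass; a base-class pointer still holds it.
std::shared_ptr<hittable> obj = std::make_shared<sphere>(point3(0, 0, -1), 0.5);
```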

Next, the question above arises: the ray intersects one object, and then intersects another object as well.

A very important point was being overlooked: this is ray tracing now, and the whole traversal of the object list serves a single pixel; the pixel is colored immediately once the result comes out.

Generating random numbers: the main idea is to generate a random number in [0, 1), then use range mapping to transform it into the required range.

This is also a chance to use a newer C++ facility.
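A sketch of both routes, assuming the book-style helper name random_double; that the "newer facility" means the <random> header is my guess:

```cpp
#include <cstdlib>
#include <random>

// Classic route: rand() mapped into [0, 1).
inline double random_double() {
    return std::rand() / (RAND_MAX + 1.0);
}

// Range mapping: stretch [0, 1) into [min, max).
inline double random_double(double min, double max) {
    return min + (max - min) * random_double();
}

// The newer C++ way: a real distribution over a Mersenne Twister engine.
inline double random_double_alt() {
    static std::mt19937 generator;
    static std::uniform_real_distribution<double> distribution(0.0, 1.0);
    return distribution(generator);
}
```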

The anti-aliasing here follows the idea of MSAA, though it is not strictly MSAA: it shoots several rays at random offsets within the pixel and averages the final results.
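A sketch of that per-pixel sampling loop; names like samples_per_pixel, cam, ray_color, write_color and the loop indices i, j follow the book and are assumptions here:

```cpp
color pixel_color(0, 0, 0);
for (int s = 0; s < samples_per_pixel; ++s) {
    // Jitter the sample position randomly within the pixel.
    auto u = (i + random_double()) / (image_width - 1);
    auto v = (j + random_double()) / (image_height - 1);
    ray r = cam.get_ray(u, v);
    pixel_color += ray_color(r, world);
}
// write_color divides by samples_per_pixel, i.e. averages the samples.
write_color(std::cout, pixel_color, samples_per_pixel);
```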

Next, the camera is encapsulated as a class. Essentially all of the camera's information serves one purpose, generating rays, so all of its attributes are wrapped up and a single ray-generating interface is exposed to the outside.
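A sketch of that class, re-wrapping the viewport setup from earlier (again assuming the vec3/point3/ray helpers):

```cpp
class camera {
  public:
    camera() {
        auto aspect_ratio = 16.0 / 9.0;
        auto viewport_height = 2.0;
        auto viewport_width = aspect_ratio * viewport_height;
        auto focal_length = 1.0;

        origin = point3(0, 0, 0);
        horizontal = vec3(viewport_width, 0, 0);
        vertical = vec3(0, viewport_height, 0);
        lower_left_corner = origin - horizontal/2 - vertical/2 - vec3(0, 0, focal_length);
    }

    // The single public interface: the ray through point (u, v) on the viewport.
    ray get_ray(double u, double v) const {
        return ray(origin, lower_left_corner + u*horizontal + v*vertical - origin);
    }

  private:
    point3 origin;
    point3 lower_left_corner;
    vec3 horizontal;
    vec3 vertical;
};
```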

Here the main topic is the correspondence between material and geometry. Sometimes one material is used on multiple geometries, sometimes the reverse, and geometry and material can also be bound tightly together. Here we choose to keep them separate.

Now diffuse reflection: the bounce direction is randomized, and the color is the surrounding environment modulated by the surface's own intrinsic color to give the final result.

There are many algorithms for randomizing this direction; here a lazy hack is used.

The main idea: at the shading point there are two unit spheres tangent to the surface, one outside and one inside. We choose the one on the same side as the camera (the ray's origin), then randomly pick a point inside that sphere.

First the macroscopic idea: if we want to compute the color of point p, it must be the result of continuing to trace the light that reaches it from elsewhere.

That is, we need to find the incident direction of an actual light path; that direction is what we now trace along.

The way to find it is the method mentioned above, stated completely. At run time we have the position of point p, and adding the unit normal gives the center P of the tangent unit sphere. Then we take three random values in [-1, 1] and add them to P to get a candidate point inside the sphere. (A point with coordinates random in [-1, 1] may fall outside the unit sphere, so the candidate is tested, and if it fails a new one is drawn.)
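That rejection sampling, as a sketch; the vec3::random(min, max) helper filling each component uniformly is assumed:

```cpp
vec3 random_in_unit_sphere() {
    while (true) {
        auto p = vec3::random(-1, 1);           // a point in the cube [-1,1]^3
        if (p.length_squared() >= 1) continue;  // outside the sphere: try again
        return p;                                // inside: accept
    }
}
```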

Here comes the core part: the bounce forms a recursion.

The recursion here seems to lack a base case. Strictly speaking it has one: the trace eventually hits nothing, simply failing to intersect any object.

But waiting for that base case can, in some situations, blow the stack.

So an explicit base case is added: a cap on the recursion depth, at most 50 levels.

In principle, the end point of ray tracing should be some light source, yet there doesn't seem to be one here.

Actually there is one. What ultimately matters is not merely failing to intersect objects, but what the ray is taken to intersect instead; here it is assumed to hit the sky. The color returned when the recursion bottoms out is effectively a sample of the sky's color: when we generated our very first image, its color graded from white at the bottom to blue at the top, much like a sky.

So the final rendered result here looks very similar to an object illuminated by a sky light.

I made a mistake in the recursion. The recursive ray's origin should be the current hit position, while this r's origin is the previous emission point; the current position is stored in rec. The result looked terrible, because every bounce behaved as if it were emitted from the original origin rather than recursively from the hit point.
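Putting the pieces together, a sketch of the recursive shading function: the depth cap, the bounce originating from rec.p rather than the original origin, and the sky gradient as the implicit "light source" (infinity is assumed defined as a large double; the lower t bound is revisited below):

```cpp
color ray_color(const ray& r, const hittable& world, int depth) {
    // Explicit base case: after too many bounces, no more light is gathered.
    if (depth <= 0)
        return color(0, 0, 0);

    hit_record rec;
    if (world.hit(r, 0, infinity, rec)) {
        // Bounce from the hit point rec.p, not from the original ray origin.
        point3 target = rec.p + rec.normal + random_in_unit_sphere();
        return 0.5 * ray_color(ray(rec.p, target - rec.p), world, depth - 1);
    }

    // Miss: sample the "sky", a white-to-blue vertical gradient.
    vec3 unit_direction = unit_vector(r.direction());
    auto t = 0.5 * (unit_direction.y() + 1.0);
    return (1.0 - t) * color(1.0, 1.0, 1.0) + t * color(0.5, 0.7, 1.0);
}
```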

After the modification, the execution time is significantly longer.

This time the image went straight to black...

It turned out the two objects had been added to the world in a different order.

But I also found that the author's code doesn't quite match his picture.

If the small sphere is to be rendered in front, it has to come later in the array, because the code traverses the list from front to back. (There is actually a bug in this code, discussed shortly; set it aside for now.) If we put the small sphere in the second position of the array, then the recorded intersection is the small sphere, the record passed out describes the small sphere, and the small sphere is what gets rendered, so no part of it on screen is occluded.

So to make the small sphere appear in front, it has to be placed at the back of the array.

But by this logic, we should not write a program whose result depends on array position. What we expect is a result unaffected by the order of objects in the array.

And indeed that is achievable. There is actually a variable recorded here, the nearest t so far, that was never used; I don't know whether the author forgot to post the code that uses it.

That is to say, the rec we finally return should hold the information of the nearest intersection in the scene, not of the last object in the array.

So we use that variable to always maintain the nearest hit so far: compare each candidate, and update rec only when a smaller t is found.
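A sketch of that list traversal, assuming a book-style hittable_list holding shared_ptr<hittable> objects:

```cpp
bool hittable_list::hit(const ray& r, double t_min, double t_max, hit_record& rec) const {
    hit_record temp_rec;
    bool hit_anything = false;
    auto closest_so_far = t_max;   // the nearest t seen so far

    for (const auto& object : objects) {
        // Shrinking the upper bound to closest_so_far means a later object
        // in the array can only overwrite rec if it is actually nearer.
        if (object->hit(r, t_min, closest_so_far, temp_rec)) {
            hit_anything = true;
            closest_so_far = temp_rec.t;
            rec = temp_rec;
        }
    }
    return hit_anything;
}
```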

After solving this problem, the result is still not quite right.

To debug the issue above, we compare against the known-correct image, find the differences, and investigate guided by those differences.

Look at the big sphere, the one acting as the ground. Its surface should actually be very bright, because light hitting it will most likely bounce directly into the sky, terminating the recursion; the returned color is then the sky's color, attenuated only a few times, so it should be bright.

But here it came out very dark, almost pure black.

So I roughly estimated the viewport coordinates of the dark region, used a conditional loop breakpoint to reach that pixel's iteration, and stepped through the trace.

Tracing the first collision showed no problem: as expected, it hit the big sphere. The focus is the second bounce after that collision.

On the second bounce, the test against the small sphere correctly finds no intersection; the problem appears in the test against the big sphere.

Because the current ray starts on the big sphere's surface, there must be one intersection with t < 0 and one at the starting point with t == 0; those are the two intersection points.

And our initial condition counts anything not smaller than the minimum t as a collision.

This is actually wrong. Equal to the minimum t, here 0, means the intersection is the starting point itself. But that is not a new intersection: the starting point was the previous bounce's intersection, and this trace should handle the ray from that bounce onward. So when the solved t is less than or equal to the minimum t, it should count as no intersection.

So change < t_min to <= t_min above.

But this is still not rigorous enough. Why? Because of floating-point precision, the solved t at the starting point may come out as something like 0.00000001 rather than exactly 0. Such numbers should be treated as 0, so in practice we can set t_min to 0.001, which solves both this precision problem and the previous one.

Comparing the two results, there are visible differences: the former uses t_min = 0.001, the latter 0, and the latter has many more black spots.

(Because if the current point itself keeps being judged an intersection, the ray keeps intersecting in place, the recursion exhausts its depth, and the final color is 0.)
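The resulting one-line change in ray_color, as a sketch:

```cpp
// Before: t very close to 0 re-hits the surface the ray just left.
// if (world.hit(r, 0, infinity, rec)) { ... }

// After: ignore hits with t below a small epsilon, curing both the
// t == 0 self-hit and the floating-point near-zero roots.
if (world.hit(r, 0.001, infinity, rec)) {
    // ... bounce as before ...
}
```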

Ray tracing rendering practice: Monte Carlo path tracing and its C++ implementation - AkagiSenpai's Blog (CSDN)
