Computer graphics: ray tracing algorithm analysis

1. Ray-sphere intersection

Most of the computation in ray tracing goes into a large number of ray-object intersection tests. Let O be the ray origin, D the ray direction, P a point on the sphere, C the sphere center, and r the radius. The equation of the sphere is (P - C) · (P - C) = r², and the parametric equation of the ray is P(t) = O + tD.

Substituting the ray equation into the sphere equation gives (D · D)t² + 2(O - C) · D t + (O - C) · (O - C) - r² = 0; with D normalized, D · D = 1. The quadratic formula then decides whether a solution exists. When there are two solutions, pick the smaller t that is still greater than 0.

The basic idea of the intersection test is therefore to substitute the ray's parametric equation into the sphere equation and solve for t.

  1. Substituting P(t) = O + tD into the sphere equation yields a quadratic equation in t.
  2. First compute the vector op: the center of the sphere minus the ray origin, op = C - O.
  3. b = op.dot(r.d) corresponds to D · (C - O).
  4. Then compute det. Note that this b is not the b of the textbook quadratic formula: since D is normalized (a = 1), the code's b equals -b/2 of that formula, and a quarter of the discriminant simplifies to
    double det = b * b - op.dot(op) + rad * rad;
    If det < 0 there is no real solution; return 0 directly.
    Otherwise take the square root of det.
  5. There are one or two solutions: t = b - det or t = b + det. Choose the smaller one that is still greater than 0 (see the sketch after this list).
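
Putting these steps together, a minimal sketch of the per-sphere intersection routine in smallpt's style (the member names rad and position follow the code quoted later in this post; eps guards against self-intersection):

double Sphere::intersect(const Ray &r) const {  // returns distance, 0 if no hit
    Vec op = position - r.o;              // C - O, from ray origin to sphere center
    double t, eps = 1e-4;                 // eps avoids re-hitting the surface at t ~ 0
    double b = op.dot(r.d);               // D . (C - O); r.d is assumed normalized
    double det = b * b - op.dot(op) + rad * rad;
    if (det < 0) return 0;                // negative discriminant: ray misses the sphere
    det = sqrt(det);
    // Prefer the nearer root b - det; fall back to b + det (ray starts inside the sphere).
    return (t = b - det) > eps ? t : ((t = b + det) > eps ? t : 0);
}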

2. Drawing the scene

  1. Six very large spheres serve as the walls of the scene (DIFF material, diffuse reflection only): when the radius is huge and you look from up close, the sphere looks like a plane.
    The author presumably does this to avoid writing a plane class and plane-intersection functions.
  2. One sphere, Lite, represents the light source; one Mirr sphere is perfectly reflective; one Glass sphere both refracts and reflects. To find a hit, traverse all the spheres and keep the nearest intersection point:
inline bool intersect(const Ray &r, double &t, int &id) {
    double n = sizeof(spheres) / sizeof(Sphere), d, inf = t = 1e20;
    for (int i = int(n); i--;) {
        if ((d = spheres[i].intersect(r)) && d < t) {
            t = d;        // keep the nearest hit so far
            id = i;       // remember which sphere produced it
        }
    }
    return t < inf;       // true if any sphere was hit
}
  3. The ray is tested against every sphere in the scene.
  4. The intersection closest to the camera is kept; that hit point is what later gets drawn on the screen.

3. Main function description

  1. The camera is positioned at (50, 52, 295.6), looking along the negative z-axis and tilted slightly downward.
int w = 1024, h = 768, samps = argc == 2 ? atoi(argv[1]) / 4 : 10; // # samples 
Ray cam(Vec(50, 52, 295.6), Vec(0, -0.042612, -1).norm()); // cam pos, dir 
Vec cx = Vec(w*.5135 / h), cy = (cx.cross(cam.d)).norm()*.5135, r, *c = new Vec[w*h];
  2. Traverse each pixel, using random sampling to pick the direction d of the ray to be shot:
for (int y = 0; y < h; y++) {                        // Loop over image rows
    fprintf(stderr, "\rRendering (%d spp) %5.2f%%", samps * 4, 100.*y / (h - 1));
    for (unsigned short x = 0, Xi[3] = { 0, 0, y*y*y }; x < w; x++)  // Loop cols
        for (int sy = 0, i = (h - y - 1)*w + x; sy < 2; sy++)        // 2x2 subpixel rows
            for (int sx = 0; sx < 2; sx++, r = Vec()) {              // 2x2 subpixel cols
                for (int s = 0; s < samps; s++) {
                    double r1 = 2 * erand48(Xi), dx = r1 < 1 ? sqrt(r1) - 1 : 1 - sqrt(2 - r1);
                    double r2 = 2 * erand48(Xi), dy = r2 < 1 ? sqrt(r2) - 1 : 1 - sqrt(2 - r2);
                    Vec d = cx*(((sx + .5 + dx) / 2 + x) / w - .5) +
                            cy*(((sy + .5 + dy) / 2 + y) / h - .5) + cam.d;
                    r = r + radiance(Ray(cam.o + d * 140, d.norm()), 0, Xi)*(1. / samps);
                } // Camera rays are pushed ^^^^^ forward to start in interior
                c[i] = c[i] + Vec(clamp(r.x), clamp(r.y), clamp(r.z))*.25;
            }
}
FILE *f = fopen("image.ppm", "w");         // Write image to PPM file.
fprintf(f, "P3\n%d %d\n%d\n", w, h, 255);
for (int i = 0; i < w*h; i++)
    fprintf(f, "%d %d %d ", toInt(c[i].x), toInt(c[i].y), toInt(c[i].z));

4. The recursive ray-tracing function (radiance)

The function Vec radiance(...) implements the ray-tracing flow and calls itself recursively. The recursion terminates when the ray misses every object in the environment, or when the brightness a continued ray would return contributes too little to the pixel color and the recursion has reached a given depth. The function takes the ray by reference, the current recursion depth, and the random-number state Xi.

First, find the distance t to the nearest object the ray hits and the id of that object. If nothing is hit, return the black vector Vec(0, 0, 0). Otherwise, compute the hit point, the surface normal, and the oriented normal (nl, which the post calls normal_real), all normalized. Then check the recursion depth: past 100 the recursion simply ends. Once the depth exceeds 5, Russian roulette starts: a random floating-point number in [0, 1) is compared with p, the maximum of the RGB color components. If the random number is not less than p, the current emission value is returned; otherwise the color is scaled by 1/p and tracing continues. After that, reflection and refraction are computed according to the sphere's material. For diffuse reflection, random numbers and the three orthogonal vectors w, u, v produce a random diffuse ray, and the recursion continues. Specular reflection computes the mirrored direction directly. Reflection plus refraction first checks whether n and nl point the same way (is the ray entering or leaving the sphere?), then computes the relative refractive index and the cosine of the incident angle, evaluates the Fresnel reflection/refraction split, and finally returns the color value, using the roulette algorithm to choose which recursive call to make.

Set the recursion exit (based on the value of depth), intersect the ray with each sphere, and orient the normal so that it makes an obtuse angle with the ray direction (the geometric normal points out of the sphere; it is flipped when the ray hits from inside).

  1. Determine whether the ray hits anything; if so, find the hit point and the surface normal:
Vec radiance(const Ray &r, int depth, unsigned short *Xi) {
    double t;                               // distance to intersection
    int id = 0;                             // id of intersected object
    if (!intersect(r, t, id))
        return Vec();                       // if miss, return black
    const Sphere &obj = spheres[id];        // the hit object
    Vec x = r.o + r.d*t, n = (x - obj.position).norm(); // hit point and outward surface normal
    Vec nl = n.dot(r.d) < 0 ? n : n*-1, f = obj.color;  // normal oriented against the ray
    double p = f.x>f.y && f.x>f.z ? f.x : f.y>f.z ? f.y : f.z; // max reflectance component
    if (++depth > 5 || !p) {                // Russian roulette
        if (erand48(Xi) < p)
            f = f*(1 / p);                  // survived: scale up to stay unbiased
        else
            return obj.emission;            // terminated: return emitted light only
    }
    // ... material-specific handling (DIFF / SPEC / REFR) follows below ...
}
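
Why scale f by 1/p: with probability p the path survives and its throughput is multiplied by 1/p; with probability 1 - p it is cut off. The expected contribution is p · (f/p) + (1 - p) · 0 = f, so Russian roulette terminates paths early without biasing the estimate (a standard argument, not spelled out in the original post).
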
  2. Diffuse reflection (DIFF)
    If the material is diffuse, a random direction is generated for the diffusely reflected ray.
    Let w be the oriented normal. Cross (0, 1, 0) with w to get u, falling back to (1, 0, 0) when w is nearly parallel to the y-axis (that is the |w.x| > .1 test); then the cross product of w and u gives v, so that w, u, v form an orthonormal basis. Use erand48(Xi) to draw two random numbers r1 and r2, turn them into coordinates in this basis, and the result is a random diffuse direction d along which the recursion continues (see the note after the code).
if (obj.refl == DIFF) {                    // Ideal DIFFUSE reflection
    double r1 = 2 * M_PI*erand48(Xi), r2 = erand48(Xi), r2s = sqrt(r2);
    Vec w = nl, u = ((fabs(w.x)>.1 ? Vec(0, 1) : Vec(1)).cross(w)).norm(), v = w.cross(u); // w, u, v: orthonormal basis
    Vec d = (u*cos(r1)*r2s + v*sin(r1)*r2s + w*sqrt(1 - r2)).norm();  // random direction in the hemisphere
    return obj.emission + f.mult(radiance(Ray(x, d), depth, Xi));
}
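
A note on the square roots above (standard cosine-weighted sampling, not explained in the original post): in spherical coordinates around w, the code sets phi = 2·pi·r1, sin(theta) = sqrt(r2), and cos(theta) = sqrt(1 - r2). With r2 uniform in [0, 1) this draws directions with density proportional to cos(theta), which exactly matches the cosine factor a diffuse surface applies to incoming light, so no extra weight is needed in the returned value.
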
  3. Specular reflection (material SPEC)
    Compute the direction of the mirror reflection, then continue the recursion.
    Specular reflection obeys the law of reflection, so the reflected direction is computed directly from it (see the note after the code) and the recursion continues.
else if (obj.refl == SPEC)            // Ideal SPECULAR reflection 
    return obj.emission + f.mult(radiance(Ray(x, r.d - n * 2 * n.dot(r.d)), depth, Xi));
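
The expression r.d - n * 2 * n.dot(r.d) is the standard mirror-reflection formula: for an incident direction d and unit normal n, the reflected direction is d - 2(n · d)n.
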
  4. Reflection and refraction (material REFR)
    For a glass material, part of the light is reflected and part is refracted.
    The roulette method is used here as well.
    First compute the relative refractive index; Snell's law n1·sin(θ1) = n2·sin(θ2) gives the sine of the refraction angle, and the refracted direction follows from the incident direction, the normal direction, and that angle. The Fresnel equations, in approximate form, give the fractions of reflected and refracted light (Re + Tr = 1), and the recursion continues accordingly (see the note after the code).
Ray reflRay(x, r.d - n * 2 * n.dot(r.d));    // Ideal dielectric REFRACTION; reflected direction by the mirror formula
bool into = n.dot(nl) > 0;                   // Ray from outside going in?
double nc = 1, nt = 1.5, nnt = into ? nc / nt : nt / nc, ddn = r.d.dot(nl), cos2t;
if ((cos2t = 1 - nnt*nnt*(1 - ddn*ddn)) < 0) // Total internal reflection
    return obj.emission + f.mult(radiance(reflRay, depth, Xi));
Vec tdir = (r.d*nnt - n*((into ? 1 : -1)*(ddn*nnt + sqrt(cos2t)))).norm();
double a = nt - nc, b = nt + nc, R0 = a*a / (b*b), c = 1 - (into ? -ddn : tdir.dot(n));
double Re = R0 + (1 - R0)*c*c*c*c*c, Tr = 1 - Re, P = .25 + .5*Re, RP = Re / P, TP = Tr / (1 - P);
return obj.emission + f.mult(depth > 2 ? (erand48(Xi) < P ?    // Russian roulette
    radiance(reflRay, depth, Xi)*RP : radiance(Ray(x, tdir), depth, Xi)*TP) :
    radiance(reflRay, depth, Xi)*Re + radiance(Ray(x, tdir), depth, Xi)*Tr);
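
The Re above is Schlick's approximation to the Fresnel equations: Re ≈ R0 + (1 - R0)(1 - cos θ)^5 with R0 = ((nt - nc) / (nt + nc))², which is 0.04 for glass against air. Past depth 2 the code traces only one of the two branches, chosen with probability P = .25 + .5*Re, and reweights the result by RP or TP so the estimate stays unbiased; at shallow depths it traces both branches weighted by Re and Tr.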

5. Scene description

The factor 0.5135 sets the camera's field of view: the larger the value, the wider the view and the fatter the viewing frustum. sx and sy index the four subpixels of each pixel's 2x2 grid. The loop traverses every pixel and uses random sampling to pick the direction d of the ray to shoot. (sx + .5 + dx) / 2 ranges over [-0.25, 0.75]; it is the offset added to x during random sampling. Ignoring it, (((sx + .5 + dx) / 2 + x) / w - .5) ranges over [-0.5, 0.5], i.e. the pixel's horizontal position measured from the image center. (A small demo of the tent-filter offsets dx and dy follows.)
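
The offsets dx and dy in the main loop implement a tent (triangle) filter by inverse-CDF sampling. A minimal standalone sketch of just that mapping (hypothetical demo code, not part of the original program; drand48 stands in for erand48):

#include <cmath>
#include <cstdio>
#include <cstdlib>

// Maps r, uniform in [0, 2), to an offset in [-1, 1] whose density is a tent
// peaked at 0 -- the same expression used for dx and dy in main().
double tentSample(double r) {
    return r < 1 ? sqrt(r) - 1 : 1 - sqrt(2 - r);
}

int main() {
    int bins[8] = {0};
    for (int s = 0; s < 100000; s++) {
        double dx = tentSample(2 * drand48());   // dx in [-1, 1)
        bins[(int)((dx + 1) * 4)]++;             // histogram over 8 bins
    }
    for (int b = 0; b < 8; b++)                  // counts peak in the middle bins
        printf("[%5.2f, %5.2f): %d\n", b / 4.0 - 1, (b + 1) / 4.0 - 1, bins[b]);
    return 0;
}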

As for why d is multiplied by 140 and added to the camera origin: the camera origin lies behind the "front" wall of the box. Without pushing the ray starting points forward by 140 units, every ray would immediately hit that wall on its way out and simply return the wall's color.

Note: the parameters the model has already fixed:

Viewpoint in space: (x_e, y_e, z_e)
Viewing distance: D = 140·cos θ (θ = 0.5135)
Line-of-sight direction: eyedir = (x_d, y_d, z_d) = (0, -0.042612, -1)
Up direction: eyeup = (x_u, y_u, z_u), the cross product of (1, 0, 0) and eyedir
Screen center: opoint = (x_o, y_o, z_o) = eye + D·eyedir

6. Description of the coordinate systems

Going from the camera coordinate system to the image coordinate system is a perspective projection: a conversion from 3D to 2D.
Compute the scale factor u to get the coordinates of the projected point:
u = (0.0 - eyePos._z) / (A._z - eyePos._z);
Note: at this point the projected point p is still measured in mm, not pixels, and still has to be converted into the pixel coordinate system.

Both the pixel coordinate system and the image coordinate system lie on the imaging plane, but their origins and units differ. The origin of the image coordinate system is the intersection of the camera's optical axis with the imaging plane, usually the center of the plane, and its unit is mm. The origin of the pixel coordinate system is the upper-left corner of the image, and its unit is the pixel; we usually describe a pixel by its row and column. The standard conversion between the two is u = x/dx + u0, v = y/dy + v0, where dx and dy say how many mm one column and one row represent (1 pixel = dx mm) and (u0, v0) is the image center in pixels. For normalization:

Lw = (width) / (140 * sin(0.5135));
Lh = (height) / (140 * sin(0.5135));

Note: the image is actually being formed on a sphere of radius 140 around eyePos, not on a plane, so a proportional adjustment is needed:

Lw = Lw * 0.5135 / sin(0.5135); // ratio of arc length to chord length
Lh = Lh * 0.5135 / sin(0.5135); // ratio of arc length to chord length

x[i+1] = (int)(((b._x - eyePos._x) * u + eyePos._x) * Lw + 0.5) + width / 2;
y[i+1] = height / 2 - (int)(((b._y - eyePos._y) * u + eyePos._y) * Lh + 0.5);

Because the (0, 0) point of the pixel coordinate system is the upper-left corner while the origin of the image coordinate system is the image center, the conversion needs the width / 2 and height / 2 offsets.
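
Putting the fragments above together, a consolidated sketch of the projection (a hypothetical helper, not code from the original post; it assumes a vector type with _x/_y/_z members and the eyePos, width, and height values from the text):

#include <cmath>

struct Vec3 { double _x, _y, _z; };

// Projects world point A onto the z = 0 image plane and converts the result
// to pixel coordinates with the origin at the upper-left corner of the image.
void projectToPixel(const Vec3 &A, const Vec3 &eyePos,
                    int width, int height, int &px, int &py) {
    double u  = (0.0 - eyePos._z) / (A._z - eyePos._z);  // scale factor along the view ray
    double Lw = width  / (140 * sin(0.5135));            // mm -> pixel, horizontal
    double Lh = height / (140 * sin(0.5135));            // mm -> pixel, vertical
    Lw *= 0.5135 / sin(0.5135);                          // arc-length correction
    Lh *= 0.5135 / sin(0.5135);
    px = (int)(((A._x - eyePos._x) * u + eyePos._x) * Lw + 0.5) + width / 2;
    py = height / 2 - (int)(((A._y - eyePos._y) * u + eyePos._y) * Lh + 0.5);
}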

Source: blog.csdn.net/qq_43405938/article/details/104251127