GAMES101 Homework 3 - a detailed walkthrough of the code

Table of contents

Work requirements

Supplementary code

Code framework preview

rasterize_triangle()

Result

interpolate() -- doing the interpolation

About the shading coords among the interpolated attributes

phong_fragment_shader()

Result

texture_fragment_shader()

Result

bump_fragment_shader()

The code first

Detailed explanation of each step

What are kh and kn

Why u + 1/w rather than u + 1

.norm()

Result

Some closing words

displacement_fragment_shader()

The code

Result


Work requirements

Supplementary code

Code framework preview

If you run the framework without modifying anything, you will just see a prompt after running.

Now let's start adding the code.

rasterize_triangle()

Implement the interpolation of normals, colors, texture coordinates, and shading coordinates.
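The depth interpolation in the code below uses the perspective-correct form, where w is each vertex's clip-space w (stored in v[i].w()):

Z  = 1 / (alpha/w_A + beta/w_B + gamma/w_C)
zp = Z * (alpha*z_A/w_A + beta*z_B/w_B + gamma*z_C/w_C)

Z recovers the view-space depth at the pixel, and zp is the perspective-corrected interpolated z used for the depth test.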

//Screen space rasterization
void rst::rasterizer::rasterize_triangle(const Triangle& t, const std::array<Eigen::Vector3f, 3>& view_pos) 
{
    // Build the bounding box.
    // The depth interpolation is close to homework 2: v.w() is the vertex's depth value,
    // with Z and zp playing the roles of w_reciprocal and z_interpolated.
    auto v = t.toVector4();

    // floor the minima and ceil the maxima so edge pixels are not dropped by int truncation
    int min_x = std::floor(std::min(std::min(v[0].x(), v[1].x()), v[2].x()));
    int min_y = std::floor(std::min(std::min(v[0].y(), v[1].y()), v[2].y()));
    int max_x = std::ceil(std::max(std::max(v[0].x(), v[1].x()), v[2].x()));
    int max_y = std::ceil(std::max(std::max(v[0].y(), v[1].y()), v[2].y()));

    for (int x = min_x; x <= max_x; x++) {
        for (int y = min_y; y <= max_y; y++) {
            // Test whether the pixel center is inside the triangle
            if (insideTriangle(x + 0.5, y + 0.5, t.v)) {
                // Barycentric coordinates, used to obtain this point's z value
                auto [alpha, beta, gamma] = computeBarycentric2D(x + 0.5, y + 0.5, t.v);
                // Perspective-correct the barycentric coordinates before interpolating depth
                float Z = 1.0 / (alpha / v[0].w() + beta / v[1].w() + gamma / v[2].w());
                float zp = alpha * v[0].z() / v[0].w() + beta * v[1].z() / v[1].w() + gamma * v[2].z() / v[2].w();
                zp *= Z;

                int cur_index = get_index(x, y); // must be computed before the depth test uses it
                if (zp < depth_buf[cur_index]) {
                    depth_buf[cur_index] = zp;

                    // TODO: Interpolate the attributes:
                    // auto interpolated_color          color
                    // auto interpolated_normal         normal
                    // auto interpolated_texcoords      texture coordinates
                    // auto interpolated_shadingcoords  pixel position in camera space, needed for r and the vector l
                    // The weight argument is always 1 here, so these attributes get no perspective correction.
                    auto interpolated_color = interpolate(alpha, beta, gamma, t.color[0], t.color[1], t.color[2], 1);
                    auto interpolated_normal = interpolate(alpha, beta, gamma, t.normal[0], t.normal[1], t.normal[2], 1);
                    auto interpolated_texcoords = interpolate(alpha, beta, gamma, t.tex_coords[0], t.tex_coords[1], t.tex_coords[2], 1);
                    auto interpolated_shadingcoords = interpolate(alpha, beta, gamma, view_pos[0], view_pos[1], view_pos[2], 1);
                    // Use: fragment_shader_payload payload( interpolated_color, interpolated_normal.normalized(), interpolated_texcoords, texture ? &*texture : nullptr);
                    // Use: payload.view_pos = interpolated_shadingcoords;
                    // Use: Instead of passing the triangle's color directly to the frame buffer, pass the color to the shaders first to get the final color;
                    // Use: auto pixel_color = fragment_shader(payload);
                    fragment_shader_payload payload(interpolated_color, interpolated_normal.normalized(), interpolated_texcoords, texture ? &*texture : nullptr);
                    payload.view_pos = interpolated_shadingcoords;
                    auto pixel_color = fragment_shader(payload);
                    // In homework 2 set_pixel took a Vector3f point; here it takes a Vector2i, so just pass x and y
                    Vector2i vertex;
                    vertex << x, y;
                    set_pixel(vertex, pixel_color);
                }
            }
        }
    }
}

Result

interpolate() -- doing the interpolation

The GAMES101 Shading 3 lecture introduces interpolation of arbitrary vertex attributes across a triangle: V_A, V_B, V_C can be the vertices' colors, positions, texture colors, and so on.
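A minimal sketch of what such a helper looks like (the framework's interpolate() in rasterizer.cpp is essentially this barycentric weighted average; this assignment always passes weight = 1):

static Eigen::Vector3f interpolate(float alpha, float beta, float gamma,
                                   const Eigen::Vector3f& v1, const Eigen::Vector3f& v2,
                                   const Eigen::Vector3f& v3, float weight)
{
    // Barycentric weighted average of the three vertex attributes
    return (alpha * v1 + beta * v2 + gamma * v3) / weight;
}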

About the shading coords among the interpolated attributes

Reference: Assignment 3 interpolated_shadingcoords – Computer Graphics and Mixed Reality Online Platform (games-cn.org)

interpolated_shadingcoords: the coordinates obtained from this interpolation are those of the point we want to shade, hence both "interpolated" and "shading" in the name. So why do we need this coordinate at all? Can't we just use the one we already have? No!

(1) Why?

We project the 3D model onto the screen through the MVP transformation. The projection step squashes real-world space (the view frustum / camera space) into a cuboid (screen space), so at that point the pixel coordinates no longer live in the real world. Depth interpolation then gives each pixel (x, y) its depth value z, i.e. (x, y, z) -- this is the view_pos in homework 3 -- but the coordinate obtained this way is still in screen space! What we need is the pixel's position in the real world, i.e. in camera space: the shading coords.

(2) How to get it?

Like other attributes such as color, this coordinate can be obtained by interpolation.

(3) What can this point do?

Remember the Blinn-Phong reflection model? The r in "I/r²" is the distance from the shading point to the light source, measured in camera space, so this coordinate is needed to compute r; the light direction vector l can be obtained from it as well.
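As a small sketch (the variable names here are mine), both quantities fall straight out of the interpolated shading point:

// shading_point is the interpolated camera-space position (interpolated_shadingcoords)
Eigen::Vector3f to_light = light.position - shading_point;
float r2 = to_light.squaredNorm();          // r² for the I/r² falloff
Eigen::Vector3f l = to_light.normalized();  // light direction vector l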

phong_fragment_shader()

Compute the specular term Ls, the diffuse term Ld, and the ambient term La.
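For reference, the three terms as given in the lecture, where I is the light intensity, r the distance from the shading point to the light, n the normal, l the light direction, and h = (l + v) / |l + v| the half vector between l and the view direction v:

La = ka * Ia
Ld = kd * (I/r²) * max(0, n·l)
Ls = ks * (I/r²) * max(0, n·h)^p
L  = La + Ld + Ls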

Eigen::Vector3f phong_fragment_shader(const fragment_shader_payload& payload)
{
    Eigen::Vector3f ka = Eigen::Vector3f(0.005, 0.005, 0.005);
    Eigen::Vector3f kd = payload.color;
    Eigen::Vector3f ks = Eigen::Vector3f(0.7937, 0.7937, 0.7937);

    //light is a struct defined earlier in the framework, holding a position and an intensity
    auto l1 = light{ {20, 20, 20}, {500, 500, 500} };
    auto l2 = light{ {-20, 20, 0}, {500, 500, 500} };

    std::vector<light> lights = {l1, l2};
    Eigen::Vector3f amb_light_intensity{10, 10, 10};
    Eigen::Vector3f eye_pos{0, 0, 10};

    float p = 150;

    Eigen::Vector3f color = payload.color;
    Eigen::Vector3f point = payload.view_pos;
    Eigen::Vector3f normal = payload.normal;

    Eigen::Vector3f result_color = {0, 0, 0};
    for (auto& light : lights)
    {

        // TODO: For each light source in the code, calculate what the *ambient*, *diffuse*, and *specular* 
        // components are. Then, accumulate that result on the *result_color* object.

        // The vectors l, v and h
        Eigen::Vector3f light_vector = (light.position - point).normalized(); // must be normalized after computing
        Eigen::Vector3f view_vector = (eye_pos - point).normalized();
        Eigen::Vector3f half_vector = (light_vector + view_vector).normalized();
        Eigen::Vector3f n_vector = normal.normalized();

        // Squared distance from the light to the shading point: a . a = |a|^2
        float r2 = (light.position - point).dot(light.position - point);

        // ambient term
        Eigen::Vector3f la = ka.cwiseProduct(amb_light_intensity);
        // diffuse term
        Eigen::Vector3f ld = kd.cwiseProduct(light.intensity / r2) * std::max(0.0f, n_vector.dot(light_vector));
        // specular term
        Eigen::Vector3f ls = ks.cwiseProduct(light.intensity / r2) * std::pow(std::max(0.0f, n_vector.dot(half_vector)), p);

        result_color += la + ld + ls;
    
    }

    return result_color * 255.f;
}

Result

texture_fragment_shader()

Fetch the color at the fragment's texture coordinates, then perform the same lighting computation as in the Phong shader.
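The key difference from phong_fragment_shader, pulled out of the full listing below: the diffuse coefficient kd now comes from the texture instead of the vertex color. getColor returns RGB in [0, 255], hence the division by 255:

Eigen::Vector3f texture_color = payload.texture->getColor(payload.tex_coords.x(), payload.tex_coords.y());
Eigen::Vector3f kd = texture_color / 255.f; // normalize the RGB values to [0, 1]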

Eigen::Vector3f texture_fragment_shader(const fragment_shader_payload& payload)
{
    Eigen::Vector3f return_color = {0, 0, 0};
    if (payload.texture)
    {
        // TODO: Get the texture value at the texture coordinates of the current fragment
        //getColor returns a Vector3f whose components [0][1][2] are the RGB values
        //fragment_shader_payload is defined in the shader header, which declares the texture member
        return_color = payload.texture->getColor(payload.tex_coords.x(), payload.tex_coords.y());
    }
    Eigen::Vector3f texture_color;
    texture_color << return_color.x(), return_color.y(), return_color.z();

    Eigen::Vector3f ka = Eigen::Vector3f(0.005, 0.005, 0.005);
    Eigen::Vector3f kd = texture_color / 255.f;
    Eigen::Vector3f ks = Eigen::Vector3f(0.7937, 0.7937, 0.7937);

    auto l1 = light{ {20, 20, 20}, {500, 500, 500} };
    auto l2 = light{ {-20, 20, 0}, {500, 500, 500} };

    std::vector<light> lights = {l1, l2};
    Eigen::Vector3f amb_light_intensity{10, 10, 10};
    Eigen::Vector3f eye_pos{0, 0, 10};

    float p = 150;

    Eigen::Vector3f color = texture_color;
    Eigen::Vector3f point = payload.view_pos;
    Eigen::Vector3f normal = payload.normal;

    Eigen::Vector3f result_color = {0, 0, 0};

    for (auto& light : lights)
    {
        // TODO: For each light source in the code, calculate what the *ambient*, *diffuse*, and *specular* 
        // components are. Then, accumulate that result on the *result_color* object.

        // The vectors l, v and h
        Eigen::Vector3f light_vector = (light.position - point).normalized(); // must be normalized after computing
        Eigen::Vector3f view_vector = (eye_pos - point).normalized();
        Eigen::Vector3f half_vector = (light_vector + view_vector).normalized();
        Eigen::Vector3f n_vector = normal.normalized();

        // Squared distance from the light to the shading point: a . a = |a|^2
        float r2 = (light.position - point).dot(light.position - point);

        // ambient term
        Eigen::Vector3f la = ka.cwiseProduct(amb_light_intensity);
        // diffuse term
        Eigen::Vector3f ld = kd.cwiseProduct(light.intensity / r2) * std::max(0.0f, n_vector.dot(light_vector));
        // specular term
        Eigen::Vector3f ls = ks.cwiseProduct(light.intensity / r2) * std::pow(std::max(0.0f, n_vector.dot(half_vector)), p);

        result_color += la + ld + ls;

    }

    return result_color * 255.f;
}

We also need to modify the texture color interface getColor() in the Texture.hpp header file by adding coordinate clamping:

Referenced: Games101-Homework 3 - Zhihu

Eigen::Vector3f getColor(float u, float v)
    {
        // Clamp the coordinates to [0, 1] so out-of-range lookups don't read outside the image
        if (u < 0) u = 0;
        if (u > 1) u = 1;
        if (v < 0) v = 0;
        if (v > 1) v = 1;
        auto u_img = u * width;
        auto v_img = (1 - v) * height;
        auto color = image_data.at<cv::Vec3b>(v_img, u_img);
        return Eigen::Vector3f(color[0], color[1], color[2]);
    }

Result

bump_fragment_shader()

To be honest, this part took me a long time; I tried to figure out every single step. Below is my detailed thought process while writing the code:

The code first

Eigen::Vector3f bump_fragment_shader(const fragment_shader_payload& payload)
{
    Eigen::Vector3f ka = Eigen::Vector3f(0.005, 0.005, 0.005);
    Eigen::Vector3f kd = payload.color;
    Eigen::Vector3f ks = Eigen::Vector3f(0.7937, 0.7937, 0.7937);

    auto l1 = light{ {20, 20, 20}, {500, 500, 500} };
    auto l2 = light{ {-20, 20, 0}, {500, 500, 500} };

    std::vector<light> lights = { l1, l2 };
    Eigen::Vector3f amb_light_intensity{ 10, 10, 10 };
    Eigen::Vector3f eye_pos{ 0, 0, 10 };

    float p = 150;

    Eigen::Vector3f color = payload.color;
    Eigen::Vector3f point = payload.view_pos;
    Eigen::Vector3f normal = payload.normal;

    float kh = 0.2, kn = 0.1; // constant influence coefficients, analogous to c1 and c2 from the lecture

    // Let n = normal = (x, y, z)
    // Vector t = (x*y/sqrt(x*x+z*z),sqrt(x*x+z*z),z*y/sqrt(x*x+z*z))
    // Vector b = n cross product t
    // Matrix TBN = [t b n]
    float x = normal.x();
    float y = normal.y();
    float z = normal.z();
    Eigen::Vector3f t, b;
    t << x * y / std::sqrt(x * x + z * z), std::sqrt(x * x + z * z), z * y / std::sqrt(x * x + z * z);
    b = normal.cross(t);
    Eigen::Matrix3f TBN;
    TBN <<
        t.x(), b.x(), x,
        t.y(), b.y(), y,
        t.z(), b.z(), z;
    
    float u = payload.tex_coords.x();
    float v = payload.tex_coords.y();
    float w = payload.texture->width;
    float h = payload.texture->height;
    
    // dU = kh * kn * (h(u+1/w,v)-h(u,v))
    // dV = kh * kn * (h(u,v+1/h)-h(u,v))
    float dU = kh * kn * (payload.texture->getColor(u + 1.0f / w, v).norm() - payload.texture->getColor(u, v).norm());
    float dV = kh * kn * (payload.texture->getColor(u, v + 1.0f / h).norm() - payload.texture->getColor(u, v).norm());
   
    // Vector ln = (-dU, -dV, 1)
    // Normal n = normalize(TBN * ln)
    Eigen::Vector3f ln;
    ln << -dU, -dV, 1.0f;
    normal = (TBN * ln).normalized();

    Eigen::Vector3f result_color = {0, 0, 0};
    result_color = normal;

    return result_color * 255.f;
}

Detailed explanation of each step

Eigen::Vector3f bump_fragment_shader(const fragment_shader_payload& payload)
{
    //fragment_shader_payload is a struct defined in shader.hpp
    //normal/bump mapping
    Eigen::Vector3f ka = Eigen::Vector3f(0.005, 0.005, 0.005);
    Eigen::Vector3f kd = payload.color;
    Eigen::Vector3f ks = Eigen::Vector3f(0.7937, 0.7937, 0.7937);

    auto l1 = light{ {20, 20, 20}, {500, 500, 500} };
    auto l2 = light{ {-20, 20, 0}, {500, 500, 500} };

    std::vector<light> lights = { l1, l2 };
    Eigen::Vector3f amb_light_intensity{ 10, 10, 10 };
    Eigen::Vector3f eye_pos{ 0, 0, 10 };

    float p = 150;

    Eigen::Vector3f color = payload.color;
    Eigen::Vector3f point = payload.view_pos;
    Eigen::Vector3f normal = payload.normal;

    float kh = 0.2, kn = 0.1; // constant influence coefficients, analogous to c1 and c2 from the lecture

    // Let n = normal = (x, y, z)
    // Vector t = (x*y/sqrt(x*x+z*z),sqrt(x*x+z*z),z*y/sqrt(x*x+z*z))
    // Vector b = n cross product t
    // Matrix TBN = [t b n]
    float x = normal.x();
    float y = normal.y();
    float z = normal.z();
    Eigen::Vector3f t, b;
    t << x * y / std::sqrt(x * x + z * z), std::sqrt(x * x + z * z), z * y / std::sqrt(x * x + z * z);
    b = normal.cross(t);
    Eigen::Matrix3f TBN;
    TBN <<
        t.x(), b.x(), x,
        t.y(), b.y(), y,
        t.z(), b.z(), z;
    
    // What is the next step? Implementing the bump-mapping formulas derived in the Texture lecture.
    
    // Step 1: define the u, v, w, h that appear in the dU and dV formulas.
    // The payload struct defined for shaders carries the texture coordinates tex_coords;
    // u and v are simply the x and y components of that coordinate.
    float u = payload.tex_coords.x();
    float v = payload.tex_coords.y();
    // width and height are the texture's width (columns) and height (rows), defined in Texture
    float w = payload.texture->width;
    float h = payload.texture->height;
    
    // Now compute dU and dV from the formulas:
    // dU = kh * kn * (h(u+1/w,v)-h(u,v))
    // dV = kh * kn * (h(u,v+1/h)-h(u,v))
    // kh*kn is the influence coefficient (a constant, with values defined above); it controls how
    // strongly the texture normal affects the real surface -- the same thing as c1 and c2 in class.
    // h() is the height; in this map the height at (u,v) is taken from the color (RGB value) of the texel.
    // getColor returns a Vector3f whose components 0/1/2 are the RGB values.
    float dU = kh * kn * (payload.texture->getColor(u + 1.0f / w, v).norm() - payload.texture->getColor(u, v).norm());
    float dV = kh * kn * (payload.texture->getColor(u, v + 1.0f / h).norm() - payload.texture->getColor(u, v).norm());
    /* Breaking it down step by step:
      1. kh*kn needs no further explanation; see above.
      2. payload is the struct passed into this function; it carries the texture and other data,
        and is defined in shader.hpp.
      3. payload.texture -- the texture member of the payload struct.
        texture -- a class defined in Texture.hpp, holding the texture's width/height,
        the getColor() function, and so on.
        payload.texture->getColor() -- accesses that getColor() function.
      4. Why "u + 1.0f/w" rather than simply "u + 1"? Look closely at this part of getColor()
        in Texture.hpp:
        ...
        auto u_img = u * width;
        auto v_img = (1 - v) * height;
        auto color = image_data.at<cv::Vec3b>(v_img, u_img);
        return Eigen::Vector3f(color[0], color[1], color[2]);
        ...
        u and v are both multiplied by the texture's width/height, so moving by one texel
        corresponds to "u*width + 1" in image space. In our function, one texel therefore
        corresponds to 1/width in u; 1/h works the same way for v.
      5. getColor().norm() -- .norm() is Eigen's norm function: the square root of the sum of
        the squared components. A vector's norm measures the size of the underlying quantity;
        in essence a norm is a distance, and it exists so that things can be compared.
        Why take a norm here? My understanding: getColor returns a vector of color values
        (color[0], color[1], color[2]), i.e. the RGB values, while dU and dV are plain floats,
        not vectors. To get the real-valued height that h() stands for, we use .norm() to map
        the vector to a real number (my personal understanding; I am not sure it is correct).
      6. Also note that dU and dV here correspond to the dp/du and dp/dv from the lecture.
    */

    // Vector ln = (-dU, -dV, 1)
    // Normal n = normalize(TBN * ln)
    Eigen::Vector3f ln;
    ln << -dU, -dV, 1.0f;
    normal = (TBN * ln).normalized();

    Eigen::Vector3f result_color = {0, 0, 0};
    result_color = normal;

    return result_color * 255.f;
}

What are kh and kn

They are influence coefficients (constants whose values are given in the teacher's code framework) that describe how strongly the texture normal affects the real surface; they should mean the same thing as c1 and c2 in the lecture.

Why u + 1/w rather than u + 1

Let's take a closer look at the definition of getColor() in Texture.hpp:

Eigen::Vector3f getColor(float u, float v)
    {
        // Clamp the coordinates to [0, 1]
        if (u < 0) u = 0;
        if (u > 1) u = 1;
        if (v < 0) v = 0;
        if (v > 1) v = 1;
        auto u_img = u * width;
        auto v_img = (1 - v) * height;
        auto color = image_data.at<cv::Vec3b>(v_img, u_img);
        return Eigen::Vector3f(color[0], color[1], color[2]);
    }

The u and v values here are multiplied by the texture's width and height. Translated into image space, moving by one texel means "u*width + 1"; so in our function one texel corresponds to a step of 1/width in u, and likewise 1/height in v.
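A quick sanity check: inside getColor, a step of 1/w in u becomes exactly one pixel in image space:

u_img = (u + 1/w) * w = u*w + 1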

.norm()

.norm() is Eigen's norm function: it computes the square root of the sum of the squared components. A vector's norm measures the size of the underlying quantity; in essence a norm is a distance, and it exists so that things can be compared.

As for why a norm is taken here, my understanding is: getColor returns a vector storing color values (color[0], color[1], color[2]), i.e. the RGB values, while dU and dV are plain floats, not Vector3f. To obtain the real-valued height that h() stands for, we use .norm() to map the vector to a real number (my personal understanding; I am not sure it is correct).
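Written out as a small helper (the name height is my own, just to make the mapping explicit), the h(u, v) used in the dU/dV formulas amounts to:

// Map the texel's RGB vector to a single real-valued height via its Euclidean norm
float height(const fragment_shader_payload& payload, float u, float v)
{
    return payload.texture->getColor(u, v).norm(); // sqrt(r² + g² + b²)
}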

Result

Some closing words

Fully understanding textures is not easy. I put in a lot of effort and still only scratch the surface. One thing I am still unclear about: although this function is called a bump map, what it does looks just like a normal map. I don't know whether I misunderstood something or whether this is simply how a normal map is expressed.

I have also put together some notes on bump maps, normal maps, and displacement maps; if you are interested, have a look:

GAMES101 extension - detailed study of normal map_flashinggg's blog

displacement_fragment_shader() 

Compared with the bump shader, the displacement shader has one extra step that modifies the point:

point += kn * normal * payload.texture->getColor(u, v).norm();

That is, the height value directly offsets the vertex position along the normal.
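In formula form, matching the comment in the framework:

p' = p + kn * n * h(u, v)

Note that point is displaced before the lighting loop runs, so the l and v vectors and r² below are all computed from the displaced position.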

The code

Eigen::Vector3f displacement_fragment_shader(const fragment_shader_payload& payload)
{
    
    Eigen::Vector3f ka = Eigen::Vector3f(0.005, 0.005, 0.005);
    Eigen::Vector3f kd = payload.color;
    Eigen::Vector3f ks = Eigen::Vector3f(0.7937, 0.7937, 0.7937);

    auto l1 = light{ {20, 20, 20}, {500, 500, 500} };
    auto l2 = light{ {-20, 20, 0}, {500, 500, 500} };

    std::vector<light> lights = {l1, l2};
    Eigen::Vector3f amb_light_intensity{10, 10, 10};
    Eigen::Vector3f eye_pos{0, 0, 10};

    float p = 150;

    Eigen::Vector3f color = payload.color; 
    Eigen::Vector3f point = payload.view_pos;
    Eigen::Vector3f normal = payload.normal;

    float kh = 0.2, kn = 0.1;
    
    // TODO: Implement displacement mapping here
    // Let n = normal = (x, y, z)
    // Vector t = (x*y/sqrt(x*x+z*z),sqrt(x*x+z*z),z*y/sqrt(x*x+z*z))
    // Vector b = n cross product t
    // Matrix TBN = [t b n]
    // dU = kh * kn * (h(u+1/w,v)-h(u,v))
    // dV = kh * kn * (h(u,v+1/h)-h(u,v))
    // Vector ln = (-dU, -dV, 1)
    // Position p = p + kn * n * h(u,v)
    // Normal n = normalize(TBN * ln)
    float x = normal.x();
    float y = normal.y();
    float z = normal.z();
    Eigen::Vector3f t, b;
    t << x * y / std::sqrt(x * x + z * z), std::sqrt(x * x + z * z), z * y / std::sqrt(x * x + z * z);
    b = normal.cross(t);
    Eigen::Matrix3f TBN;
    TBN <<
        t.x(), b.x(), x,
        t.y(), b.y(), y,
        t.z(), b.z(), z;

    float u = payload.tex_coords.x();
    float v = payload.tex_coords.y();
    float w = payload.texture->width;
    float h = payload.texture->height;

    // dU = kh * kn * (h(u+1/w,v)-h(u,v)),
    // dV = kh * kn * (h(u,v+1/h)-h(u,v))
    float dU = kh * kn * (payload.texture->getColor(u + 1.0f / w, v).norm() - payload.texture->getColor(u, v).norm());
    float dV = kh * kn * (payload.texture->getColor(u, v + 1.0f / h).norm() - payload.texture->getColor(u, v).norm());

    // Vector ln = (-dU, -dV, 1)
    // Position p = p + kn * n * h(u,v)
    // Normal n = normalize(TBN * ln)
    Eigen::Vector3f ln;
    ln << -dU, -dV, 1.0f;

    normal = (TBN * ln).normalized();
    point += kn * normal * payload.texture->getColor(u, v).norm();
    Eigen::Vector3f result_color = {0, 0, 0};

    for (auto& light : lights)
    {
        // TODO: For each light source in the code, calculate what the *ambient*, *diffuse*, and *specular* 
        // components are. Then, accumulate that result on the *result_color* object.
        // The vectors l, v and h
        Eigen::Vector3f light_vector = (light.position - point).normalized(); // must be normalized after computing
        Eigen::Vector3f view_vector = (eye_pos - point).normalized();
        Eigen::Vector3f half_vector = (light_vector + view_vector).normalized();
        Eigen::Vector3f n_vector = normal.normalized();

        // Squared distance from the light to the shading point: a . a = |a|^2
        float r2 = (light.position - point).dot(light.position - point);

        // ambient term
        Eigen::Vector3f la = ka.cwiseProduct(amb_light_intensity);
        // diffuse term
        Eigen::Vector3f ld = kd.cwiseProduct(light.intensity / r2) * std::max(0.0f, n_vector.dot(light_vector));
        // specular term
        Eigen::Vector3f ls = ks.cwiseProduct(light.intensity / r2) * std::pow(std::max(0.0f, n_vector.dot(half_vector)), p);

        result_color += la + ld + ls;

    }

    return result_color * 255.f;
}

Result
