Guided filtering algorithm - OpenGL implementation


1. Introduction

  • Guided filtering uses a guide image, which carries information such as edges and objects, to filter the input image: the content of the output image is determined by the input image, while its texture is similar to that of the guide image.

  • The guided filter is built on a local linear model. It keeps the advantages of bilateral filtering (effective edge preservation, non-iterative computation) while being very fast to compute, thereby overcoming the main disadvantage of bilateral filtering: its slowness.

  • Guided filtering can not only achieve the edge-preserving smoothing of bilateral filtering, but also behaves well near edges, and it can be applied to scenarios such as image enhancement, HDR compression, image matting, and image dehazing.

  • When performing edge-preserving filtering, the original image itself, or a preprocessed version of it, can be used as the guide image.

2. Advantages over bilateral filtering

  • 1. Guided filtering behaves better near boundaries than bilateral filtering; in addition, it has the speed advantage of O(N) linear time. A brute-force bilateral filter is expensive because its per-pixel cost grows with the kernel size, whereas the guided filter only needs simple mean (box) filters, so its cost stays linear in the number of pixels regardless of the window size.
  • 2. Besides the speed advantage, another strength of the guided filter is that it preserves gradients, which bilateral filtering cannot guarantee, since bilateral filtering can produce gradient-reversal artifacts. Because the guided filter is mathematically a local linear combination of the guide image, the gradient direction of the output image (Output Image) is consistent with that of the guide image (Guidance Image), and no gradient reversal occurs.

3. The mathematical principle of guided filtering (somewhat involved; you can skip this part)

This part draws on another author's article on the Guided Filter.

  • The idea of guided filtering is to use a guide image to generate the weights used to process the input image. This can be written as the following local linear model:

    q_i = a_k * I_i + b_k,   for every pixel i in the window ω_k
  • Here p is the input image, I is the guide image, and q is the output image. The output image is regarded as a local linear transform of the guide image I, where k indexes the center of a local window ω_k, so every pixel inside ω_k is obtained from the corresponding guide pixel through the coefficients (a_k, b_k). At the same time, the input image p is assumed to be q plus the noise or texture we do not want, i.e. p = q + n.
    The next step is to solve for these coefficients so that the difference between p and q is as small as possible while the local linear model is preserved. This is a linear ridge regression with a regularization term:

    E(a_k, b_k) = Σ_{i ∈ ω_k} [ (a_k * I_i + b_k − p_i)² + ε * a_k² ]
  • Solving this gives the local coefficients a_k and b_k:

    a_k = ( (1/|ω|) Σ_{i ∈ ω_k} I_i * p_i − μ_k * p̄_k ) / (σ_k² + ε)
    b_k = p̄_k − a_k * μ_k

    where μ_k and σ_k² are the mean and variance of I in ω_k, p̄_k is the mean of p in ω_k, and |ω| is the number of pixels in the window. A given pixel i may be covered by several windows, so the coefficients are averaged:

    q_i = (1/|ω|) Σ_{k: i ∈ ω_k} (a_k * I_i + b_k) = ā_i * I_i + b̄_i
  • This finally yields the guided filter algorithm, where f_mean denotes the mean (box) filter; a minimal CPU sketch follows below:

    1. mean_I = f_mean(I);  mean_p = f_mean(p);  corr_I = f_mean(I·I);  corr_Ip = f_mean(I·p)
    2. var_I = corr_I − mean_I·mean_I;  cov_Ip = corr_Ip − mean_I·mean_p
    3. a = cov_Ip / (var_I + ε);  b = mean_p − a·mean_I
    4. mean_a = f_mean(a);  mean_b = f_mean(b)
    5. q = mean_a·I + mean_b
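
To make these five steps concrete, below is a minimal single-channel CPU sketch in C (my own illustrative code, not part of the OpenGL implementation that follows). It assumes images are w*h float arrays and box_blur plays the role of the mean filter f_mean.

#include <stdlib.h>

/* Naive box (mean) blur that skips out-of-bounds samples; stands in for f_mean. */
static void box_blur(const float *src, float *dst, int w, int h, int r)
{
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            float sum = 0.0f;
            int count = 0;
            for (int dy = -r; dy <= r; dy++) {
                for (int dx = -r; dx <= r; dx++) {
                    int sx = x + dx, sy = y + dy;
                    if (sx < 0 || sx >= w || sy < 0 || sy >= h) continue;
                    sum += src[sy * w + sx];
                    count++;
                }
            }
            dst[y * w + x] = sum / (float)count;
        }
    }
}

/* Guided filter following steps 1-5 above: q = mean_a * I + mean_b.
 * I: guide image, p: input image, q: output, all w*h floats; r: window radius. */
void guided_filter(const float *I, const float *p, float *q,
                   int w, int h, int r, float eps)
{
    int n = w * h;
    float *mean_I  = (float *)malloc(n * sizeof(float));
    float *mean_p  = (float *)malloc(n * sizeof(float));
    float *corr_I  = (float *)malloc(n * sizeof(float));
    float *corr_Ip = (float *)malloc(n * sizeof(float));
    float *tmp     = (float *)malloc(n * sizeof(float));
    float *a       = (float *)malloc(n * sizeof(float));
    float *b       = (float *)malloc(n * sizeof(float));

    /* Step 1: means of I, p, I*I and I*p */
    box_blur(I, mean_I, w, h, r);
    box_blur(p, mean_p, w, h, r);
    for (int i = 0; i < n; i++) tmp[i] = I[i] * I[i];
    box_blur(tmp, corr_I, w, h, r);
    for (int i = 0; i < n; i++) tmp[i] = I[i] * p[i];
    box_blur(tmp, corr_Ip, w, h, r);

    /* Steps 2-3: variance, covariance, and the per-window coefficients a, b */
    for (int i = 0; i < n; i++) {
        float var_I  = corr_I[i]  - mean_I[i] * mean_I[i];
        float cov_Ip = corr_Ip[i] - mean_I[i] * mean_p[i];
        a[i] = cov_Ip / (var_I + eps);
        b[i] = mean_p[i] - a[i] * mean_I[i];
    }

    /* Steps 4-5: average a and b over the windows, then combine with I */
    box_blur(a, mean_I, w, h, r);   /* reuse mean_I as mean_a */
    box_blur(b, mean_p, w, h, r);   /* reuse mean_p as mean_b */
    for (int i = 0; i < n; i++) q[i] = mean_I[i] * I[i] + mean_p[i];

    free(mean_I); free(mean_p); free(corr_I); free(corr_Ip);
    free(tmp); free(a); free(b);
}

With I = p, as in the skin-smoothing case below, cov_Ip equals var_I, so a reduces to var / (var + eps), which is exactly what the first shader computes.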

4. OpenGL implementation of guided filtering

  • Here the original image itself is used as the guide image, i.e. I = p in the algorithm above, so the variance and the covariance are the same quantity.

  • Following the algorithm above, guided filtering has to render the coefficient images a and b from the original image, and it also has to compute the mean-blurred versions of a and b. To save draw calls, the whole algorithm is split into 2 render passes: the first pass produces a and b, and the second pass produces the final result q.

  • The idea is as follows:
    (figure: two-pass rendering pipeline)

  • To complete the guided filtering within two draw calls, a convenient trick is to render the a and b results in the same pass, writing them to the left and right halves of the canvas respectively (a top/bottom split would work just as well).

    • Note that this canvas must be twice the width of the original image, so pay attention to the size when creating the texture.
  • Below are the OpenGL shaders for the two passes.

  • GuidedSubFilter1 fragment shader:

precision highp float;

uniform sampler2D u_origin; // original image
varying vec2 texcoordOut;

uniform vec2 offset;  // step of a single pixel (one texel)
uniform float alpha;  // blur strength
uniform float eps;    // regularization parameter e

// 5x5 mean blur
vec3 meanBlur(vec3 colors[25]) {
    highp vec3 sum = vec3(0.0);
    for (int i = 0; i < 25; i++) {
        sum += colors[i];
    }
    return sum * 0.04;
}

void main()
{
    // This shader is rendered to an FBO of size 2*w x h: the left half holds the
    // guided filter's a result and the right half holds the b result, so we first
    // compute the real sampling coordinate on the original texture.
    highp vec2 originTexcoord;
    if (texcoordOut.x < 0.5) {
        originTexcoord = vec2(texcoordOut.x * 2.0, texcoordOut.y);
    } else {
        originTexcoord = vec2((texcoordOut.x - 0.5) * 2.0, texcoordOut.y);
    }
    
    // Sample 5*5 points of the original image I
    highp vec3 origin[25];
    
    origin[0] = texture2D(u_origin, originTexcoord).rgb;
    
    origin[1] = texture2D(u_origin, originTexcoord + vec2(offset.x, 0.0)).rgb;
    origin[2] = texture2D(u_origin, originTexcoord + vec2(-offset.x, 0.0)).rgb;
    origin[3] = texture2D(u_origin, originTexcoord + vec2(0.0, offset.y)).rgb;
    origin[4] = texture2D(u_origin, originTexcoord + vec2(0.0, -offset.y)).rgb;
    
    origin[5] = texture2D(u_origin, originTexcoord + vec2(offset.x, offset.y)).rgb;
    origin[6] = texture2D(u_origin, originTexcoord + vec2(offset.x, -offset.y)).rgb;
    origin[7] = texture2D(u_origin, originTexcoord + vec2(-offset.x, offset.y)).rgb;
    origin[8] = texture2D(u_origin, originTexcoord + vec2(-offset.x, -offset.y)).rgb;
    
    origin[9] = texture2D(u_origin, originTexcoord + vec2(2.0 * offset.x, 0)).rgb;
    origin[10] = texture2D(u_origin, originTexcoord + vec2(-2.0 * offset.x, 0)).rgb;
    origin[11] = texture2D(u_origin, originTexcoord + vec2(0, 2.0 * offset.y)).rgb;
    origin[12] = texture2D(u_origin, originTexcoord + vec2(0, -2.0 * offset.y)).rgb;
    
    origin[13] = texture2D(u_origin, originTexcoord + vec2(2.0 * offset.x, 2.0 * offset.y)).rgb;
    origin[14] = texture2D(u_origin, originTexcoord + vec2(2.0 * offset.x, -2.0 * offset.y)).rgb;
    origin[15] = texture2D(u_origin, originTexcoord + vec2(-2.0 * offset.x, 2.0 * offset.y)).rgb;
    origin[16] = texture2D(u_origin, originTexcoord + vec2(-2.0 * offset.x, -2.0 * offset.y)).rgb;
    
    origin[17] = texture2D(u_origin, originTexcoord + vec2(2.0 * offset.x, offset.y)).rgb;
    origin[18] = texture2D(u_origin, originTexcoord + vec2(-2.0 * offset.x, offset.y)).rgb;
    origin[19] = texture2D(u_origin, originTexcoord + vec2(offset.x, 2.0 * offset.y)).rgb;
    origin[20] = texture2D(u_origin, originTexcoord + vec2(-offset.x, 2.0 * offset.y)).rgb;
    
    origin[21] = texture2D(u_origin, originTexcoord + vec2(2.0 * offset.x, -offset.y)).rgb;
    origin[22] = texture2D(u_origin, originTexcoord + vec2(-2.0 * offset.x, -offset.y)).rgb;
    origin[23] = texture2D(u_origin, originTexcoord + vec2(offset.x, -2.0 * offset.y)).rgb;
    origin[24] = texture2D(u_origin, originTexcoord + vec2(-offset.x, -2.0 * offset.y)).rgb;
    
    // Square of the original image, I*I
    highp vec3 origin2[25];
    for (int i = 0; i < 25; i++) {
        origin2[i] = origin[i] * origin[i];
    }
    
    // Mean blur of the original image I
    highp vec3 originMean = meanBlur(origin);
    // Mean blur of the squared image I*I
    highp vec3 origin2Mean = meanBlur(origin2);
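    
    // Blend the mean-blurred values with the un-blurred center pixel according to
    // alpha (alpha = 0.0 keeps the original pixel, alpha = 1.0 uses the full 5x5 mean)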
    
    originMean = mix(origin[0], originMean, alpha);
    origin2Mean = mix(origin2[0], origin2Mean, alpha);
    
    // Square of the blurred original image
    highp vec3 originMean2 = originMean * originMean;
    
    // Variance (for skin smoothing the guide image and the original image are the
    // same image, so the variance and the covariance are the same quantity)
    highp vec3 variance = origin2Mean - originMean2;
    
    // The guided filter's A and B results
    highp vec3 A = variance / (variance + eps);
    highp vec3 B = originMean - A * originMean;
    
    // Write A and B to the left and right halves of the image respectively
    if (texcoordOut.x < 0.5) {
        gl_FragColor = vec4((A + 1.0) * 0.5, 1.0);
    } else {
        gl_FragColor = vec4((B + 1.0) * 0.5, 1.0);
    }
    
}
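
On the host side, before this shader can run, a render target twice the width of the original image has to be created and the uniforms have to be set. The following is a minimal sketch in C with OpenGL ES 2.0, under my own assumptions: programPass1, texOrigin, drawFullScreenQuad and the img* variables are placeholder names, not identifiers from the linked repository, and offset is taken here as one texel of the original image.

#include <GLES2/gl2.h>

extern void drawFullScreenQuad(void); /* hypothetical helper: draws a viewport-covering quad */

/* Sketch: create the double-width a|b render target and run pass 1.
 * programPass1 must already be compiled from the shader above and
 * texOrigin must already contain the original image. */
GLuint runGuidedPass1(GLuint programPass1, GLuint texOrigin,
                      int imgWidth, int imgHeight, float alpha, float eps)
{
    GLuint texAB, fboAB;

    /* Color texture that is twice as wide as the original image */
    glGenTextures(1, &texAB);
    glBindTexture(GL_TEXTURE_2D, texAB);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, imgWidth * 2, imgHeight, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    glGenFramebuffers(1, &fboAB);
    glBindFramebuffer(GL_FRAMEBUFFER, fboAB);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, texAB, 0);

    /* Pass 1: render a (left) and b (right) into the double-width FBO */
    glViewport(0, 0, imgWidth * 2, imgHeight);
    glUseProgram(programPass1);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, texOrigin);
    glUniform1i(glGetUniformLocation(programPass1, "u_origin"), 0);
    glUniform2f(glGetUniformLocation(programPass1, "offset"),
                1.0f / imgWidth, 1.0f / imgHeight); /* one texel of the original image */
    glUniform1f(glGetUniformLocation(programPass1, "alpha"), alpha);
    glUniform1f(glGetUniformLocation(programPass1, "eps"), eps);
    drawFullScreenQuad();

    return texAB;
}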

  • GuidedSubFilter2 fragment shader:

precision highp float;

uniform sampler2D u_origin; // original image
uniform sampler2D u_AB;     // result of GuidedSubFilter1: left half is the guided filter's a, right half is b
varying vec2 texcoordOut;

uniform vec2 offset;  // step of a single pixel (one texel)
uniform float alpha;  // blur strength

// 5x5 mean blur
vec3 meanBlur(vec3 colors[25]) {
    highp vec3 sum = vec3(0.0);
    for (int i = 0; i < 25; i++) {
        sum += colors[i];
    }
    return sum * 0.04;
}

void main()
{
    // Sample 5*5 points of image A (left half of u_AB)
    highp vec3 colorA[25];
    highp vec2 texcoordA = vec2(texcoordOut.x * 0.5, texcoordOut.y);
    
    colorA[0] = texture2D(u_AB, texcoordA).rgb;
    
    colorA[1] = texture2D(u_AB, texcoordA + vec2(offset.x, 0.0)).rgb;
    colorA[2] = texture2D(u_AB, texcoordA + vec2(-offset.x, 0.0)).rgb;
    colorA[3] = texture2D(u_AB, texcoordA + vec2(0.0, offset.y)).rgb;
    colorA[4] = texture2D(u_AB, texcoordA + vec2(0.0, -offset.y)).rgb;
    
    colorA[5] = texture2D(u_AB, texcoordA + vec2(offset.x, offset.y)).rgb;
    colorA[6] = texture2D(u_AB, texcoordA + vec2(offset.x, -offset.y)).rgb;
    colorA[7] = texture2D(u_AB, texcoordA + vec2(-offset.x, offset.y)).rgb;
    colorA[8] = texture2D(u_AB, texcoordA + vec2(-offset.x, -offset.y)).rgb;
    
    colorA[9] = texture2D(u_AB, texcoordA + vec2(2.0 * offset.x, 0)).rgb;
    colorA[10] = texture2D(u_AB, texcoordA + vec2(-2.0 * offset.x, 0)).rgb;
    colorA[11] = texture2D(u_AB, texcoordA + vec2(0, 2.0 * offset.y)).rgb;
    colorA[12] = texture2D(u_AB, texcoordA + vec2(0, -2.0 * offset.y)).rgb;
    
    colorA[13] = texture2D(u_AB, texcoordA + vec2(2.0 * offset.x, 2.0 * offset.y)).rgb;
    colorA[14] = texture2D(u_AB, texcoordA + vec2(2.0 * offset.x, -2.0 * offset.y)).rgb;
    colorA[15] = texture2D(u_AB, texcoordA + vec2(-2.0 * offset.x, 2.0 * offset.y)).rgb;
    colorA[16] = texture2D(u_AB, texcoordA + vec2(-2.0 * offset.x, -2.0 * offset.y)).rgb;
    
    colorA[17] = texture2D(u_AB, texcoordA + vec2(2.0 * offset.x, offset.y)).rgb;
    colorA[18] = texture2D(u_AB, texcoordA + vec2(-2.0 * offset.x, offset.y)).rgb;
    colorA[19] = texture2D(u_AB, texcoordA + vec2(offset.x, 2.0 * offset.y)).rgb;
    colorA[20] = texture2D(u_AB, texcoordA + vec2(-offset.x, 2.0 * offset.y)).rgb;
    
    colorA[21] = texture2D(u_AB, texcoordA + vec2(2.0 * offset.x, -offset.y)).rgb;
    colorA[22] = texture2D(u_AB, texcoordA + vec2(-2.0 * offset.x, -offset.y)).rgb;
    colorA[23] = texture2D(u_AB, texcoordA + vec2(offset.x, -2.0 * offset.y)).rgb;
    colorA[24] = texture2D(u_AB, texcoordA + vec2(-offset.x, -2.0 * offset.y)).rgb;
    
    // Sample 5*5 points of image B (right half of u_AB)
    highp vec3 colorB[25];
    highp vec2 texcoordB = vec2(texcoordOut.x * 0.5 + 0.5, texcoordOut.y);
    
    colorB[0] = texture2D(u_AB, texcoordB).rgb;
    
    colorB[1] = texture2D(u_AB, texcoordB + vec2(offset.x, 0.0)).rgb;
    colorB[2] = texture2D(u_AB, texcoordB + vec2(-offset.x, 0.0)).rgb;
    colorB[3] = texture2D(u_AB, texcoordB + vec2(0.0, offset.y)).rgb;
    colorB[4] = texture2D(u_AB, texcoordB + vec2(0.0, -offset.y)).rgb;
    
    colorB[5] = texture2D(u_AB, texcoordB + vec2(offset.x, offset.y)).rgb;
    colorB[6] = texture2D(u_AB, texcoordB + vec2(offset.x, -offset.y)).rgb;
    colorB[7] = texture2D(u_AB, texcoordB + vec2(-offset.x, offset.y)).rgb;
    colorB[8] = texture2D(u_AB, texcoordB + vec2(-offset.x, -offset.y)).rgb;
    
    colorB[9] = texture2D(u_AB, texcoordB + vec2(2.0 * offset.x, 0)).rgb;
    colorB[10] = texture2D(u_AB, texcoordB + vec2(-2.0 * offset.x, 0)).rgb;
    colorB[11] = texture2D(u_AB, texcoordB + vec2(0, 2.0 * offset.y)).rgb;
    colorB[12] = texture2D(u_AB, texcoordB + vec2(0, -2.0 * offset.y)).rgb;
    
    colorB[13] = texture2D(u_AB, texcoordB + vec2(2.0 * offset.x, 2.0 * offset.y)).rgb;
    colorB[14] = texture2D(u_AB, texcoordB + vec2(2.0 * offset.x, -2.0 * offset.y)).rgb;
    colorB[15] = texture2D(u_AB, texcoordB + vec2(-2.0 * offset.x, 2.0 * offset.y)).rgb;
    colorB[16] = texture2D(u_AB, texcoordB + vec2(-2.0 * offset.x, -2.0 * offset.y)).rgb;
    
    colorB[17] = texture2D(u_AB, texcoordB + vec2(2.0 * offset.x, offset.y)).rgb;
    colorB[18] = texture2D(u_AB, texcoordB + vec2(-2.0 * offset.x, offset.y)).rgb;
    colorB[19] = texture2D(u_AB, texcoordB + vec2(offset.x, 2.0 * offset.y)).rgb;
    colorB[20] = texture2D(u_AB, texcoordB + vec2(-offset.x, 2.0 * offset.y)).rgb;
    
    colorB[21] = texture2D(u_AB, texcoordB + vec2(2.0 * offset.x, -offset.y)).rgb;
    colorB[22] = texture2D(u_AB, texcoordB + vec2(-2.0 * offset.x, -offset.y)).rgb;
    colorB[23] = texture2D(u_AB, texcoordB + vec2(offset.x, -2.0 * offset.y)).rgb;
    colorB[24] = texture2D(u_AB, texcoordB + vec2(-offset.x, -2.0 * offset.y)).rgb;
    
    // Mean-blur image A and image B separately
    highp vec3 meanA = meanBlur(colorA);
    highp vec3 meanB = meanBlur(colorB);
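    
    // Decode a and b from the [0, 1] texture range back to [-1, 1]
    // (pass 1 stored them as (x + 1.0) * 0.5), then blend the blurred
    // and un-blurred values according to alpha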
    
    meanA = meanA * 2.0 - 1.0;
    meanB = meanB * 2.0 - 1.0;
    
    meanA = mix(colorA[0] * 2.0 - 1.0, meanA, alpha);
    meanB = mix(colorB[0] * 2.0 - 1.0, meanB, alpha);
    
    // Final fusion step of the guided filter
    highp vec3 originColor = texture2D(u_origin, texcoordOut).rgb;
    highp vec3 resultColor = meanA * originColor + meanB;
    resultColor = mix(originColor, resultColor, alpha);
    
    gl_FragColor = vec4(resultColor, 1.0);
}
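
For completeness, here is a matching host-side sketch for the second pass, again with hypothetical names (runGuidedPass2; texAB is the texture returned by the first pass; outputFbo may be 0 for the default framebuffer). How the original repository chooses the offset step when blurring a and b is not shown here, so using one texel of the original image is an assumption.

#include <GLES2/gl2.h>

extern void drawFullScreenQuad(void); /* hypothetical helper, as before */

/* Sketch of pass 2: combine the original image with the blurred a and b. */
void runGuidedPass2(GLuint programPass2, GLuint texOrigin, GLuint texAB,
                    GLuint outputFbo, int imgWidth, int imgHeight, float alpha)
{
    glBindFramebuffer(GL_FRAMEBUFFER, outputFbo);
    glViewport(0, 0, imgWidth, imgHeight);      /* the output is the original size again */
    glUseProgram(programPass2);

    glActiveTexture(GL_TEXTURE0);               /* unit 0: original image */
    glBindTexture(GL_TEXTURE_2D, texOrigin);
    glUniform1i(glGetUniformLocation(programPass2, "u_origin"), 0);

    glActiveTexture(GL_TEXTURE1);               /* unit 1: a|b texture from pass 1 */
    glBindTexture(GL_TEXTURE_2D, texAB);
    glUniform1i(glGetUniformLocation(programPass2, "u_AB"), 1);

    glUniform2f(glGetUniformLocation(programPass2, "offset"),
                1.0f / imgWidth, 1.0f / imgHeight); /* assumed step, see note above */
    glUniform1f(glGetUniformLocation(programPass2, "alpha"), alpha);

    drawFullScreenQuad();
}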

Result images:

(result images: regularization parameter eps = 0.02)

Dynamic effect demos

  • 1. Fix the blur strength at 0.5 and vary the regularization parameter eps over the range 0~0.1
  • 2. Fix the regularization parameter eps at 0.05 and vary the blur strength over the range 0~1

Source code Git: https://github.com/sysu-huangwei/GuidedFilter


Origin: blog.csdn.net/q345911572/article/details/128799773