OpenGL.Shader: Zhige teaches you to write a live filter client (10) Visual filter: Gaussian filter / Gaussian blur, principle and implementation

1. Filtering that is not so "average"

The previous chapter implemented mean filtering. As mentioned in the introduction to image filtering, the mean filter is the simplest image low-pass filter: it can suppress uniform noise and Gaussian noise, but it blurs the image to some degree. It simply averages the pixels within a specified neighborhood of the picture. The weakness of the mean filter is precisely that averaging: every point is processed with the same weight, even though in most cases noise makes up only a small proportion of the signal, so the image is inevitably smeared. And the wider the filter kernel, the blurrier the filtered picture, which means more image detail is lost and the image becomes ever more "average".

So how can we filter an image without making it so "average"? Simply change the filter's weights: if the coefficients are chosen according to a Gaussian distribution, the result is called a Gaussian filter, and the effect is also known as Gaussian blur.
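To make the contrast concrete, here is the 3x3 mean kernel next to the 3x3 Gaussian kernel that will be derived in the next section (σ = 1.5): the mean kernel weights every sample equally, while the Gaussian kernel concentrates weight at the center.

$$K_{\text{mean}} = \frac{1}{9}\begin{pmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{pmatrix} \qquad K_{\text{gauss}} \approx \begin{pmatrix} 0.0947 & 0.1183 & 0.0947 \\ 0.1183 & 0.1478 & 0.1183 \\ 0.0947 & 0.1183 & 0.0947 \end{pmatrix}$$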

2. The normal distribution and the Gaussian function

Here is a brief introduction to the derivation of the Gaussian (normal) distribution and how it evolves into code.

A normal distribution is one in which the closer a point is to the center, the larger its value, and the farther it is from the center, the smaller its value.
When computing the average, we only need to take the "center point" as the origin, assign each of the other points a weight according to its position on the normal curve, and then take the weighted average. The normal distribution is clearly a desirable weight-distribution model.

Now that we understand what the normal distribution is, the next step is to use the Gaussian function to carry out the mathematical transformation.
The normal distribution above is one-dimensional, but an image is two-dimensional, so we need a two-dimensional normal distribution.

The density function of the normal distribution is called the "Gaussian function". Its two-dimensional form is:

$$G(x, y) = \frac{1}{2\pi\sigma^2} \, e^{-\frac{x^2 + y^2}{2\sigma^2}}$$

where σ is the standard deviation and (x, y) are the coordinates relative to the center point.

Obtaining the weight matrix

Assuming the center point has coordinates (0,0), the 8 points closest to it have the following coordinates:

(-1,  1)   (0,  1)   (1,  1)
(-1,  0)   (0,  0)   (1,  0)
(-1, -1)   (0, -1)   (1, -1)

For a larger radius, the farther points are indexed in the same way.
To compute the weight matrix, a value for σ must be chosen. Assuming σ = 1.5, substituting these coordinates into the Gaussian formula gives the following weights for a blur radius of 1:

0.0453542   0.0566406   0.0453542
0.0566406   0.0707355   0.0566406
0.0453542   0.0566406   0.0453542

The sum of these 9 weights is 0.4787147. Since a weighted average of these 9 points requires the weights to sum to 1, each of the 9 values above is divided by 0.4787147, which yields the final weight matrix:

0.0947416   0.118318   0.0947416
0.118318    0.147761   0.118318
0.0947416   0.118318   0.0947416

Dividing each value by the total is called "normalization". The goal is to make the filter's weights sum to 1: a filter whose weights sum to more than 1 brightens the image, and one whose weights sum to less than 1 darkens it. With this we have obtained the third-order (3x3) Gaussian convolution kernel.
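As a minimal standalone sketch of this derivation (my own helper, not part of the project source), the following C++ snippet computes the 3x3 kernel for a given σ, normalizes it, and reproduces the values above:

#include <cmath>
#include <cstdio>

int main() {
    const float kPi = 3.14159265f;
    const float sigma = 1.5f;
    float kernel[9];
    float sum = 0.0f;
    // Evaluate G(x, y) at the nine coordinates around the center (0,0).
    for (int y = -1; y <= 1; ++y) {
        for (int x = -1; x <= 1; ++x) {
            float g = std::exp(-(x * x + y * y) / (2.0f * sigma * sigma))
                      / (2.0f * kPi * sigma * sigma);
            kernel[(y + 1) * 3 + (x + 1)] = g;
            sum += g; // total weight, 0.4787147 for sigma = 1.5
        }
    }
    // Normalize so the weights sum to 1.
    for (int i = 0; i < 9; ++i) {
        kernel[i] /= sum;
        std::printf("%.7f%c", kernel[i], (i % 3 == 2) ? '\n' : ' ');
    }
    // Prints approximately:
    // 0.0947416 0.1183180 0.0947416
    // 0.1183180 0.1477610 0.1183180
    // 0.0947416 0.1183180 0.0947416
    return 0;
}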

3. Passing indexed arrays to the GL shader

The next step is to implement this Gaussian blur filter on GL.Shader. The implementation idea is the same as the mean blur in the previous chapter, but this time we introduce passing indexed arrays into the shader, which simplifies the shader code.

First, the vertex shader. Note how the varying array is written; apart from that there is nothing special to say, just make sure each matrix position corresponds to the correct array index.

attribute vec4 position;
attribute vec4 inputTextureCoordinate;
uniform float widthFactor;
uniform float heightFactor;
uniform float offset; // sampling radius
const int GAUSSIAN_SAMPLES = 9;
varying vec2 textureCoordinate[GAUSSIAN_SAMPLES];
void main()
{
    gl_Position = position;
    vec2 widthStep = vec2(offset*widthFactor, 0.0);
    vec2 heightStep = vec2(0.0, offset*heightFactor);
    textureCoordinate[0] = inputTextureCoordinate.xy - heightStep - widthStep; // top-left
    textureCoordinate[1] = inputTextureCoordinate.xy - heightStep; // top
    textureCoordinate[2] = inputTextureCoordinate.xy - heightStep + widthStep; // top-right
    textureCoordinate[3] = inputTextureCoordinate.xy - widthStep; // middle-left
    textureCoordinate[4] = inputTextureCoordinate.xy; // center
    textureCoordinate[5] = inputTextureCoordinate.xy + widthStep; // middle-right
    textureCoordinate[6] = inputTextureCoordinate.xy + heightStep - widthStep; // bottom-left
    textureCoordinate[7] = inputTextureCoordinate.xy + heightStep; // bottom
    textureCoordinate[8] = inputTextureCoordinate.xy + heightStep + widthStep; // bottom-right
}

Next is the fragment shader. We declare the convolution kernel convolutionMatrix as a uniform and pass in the Gaussian kernel just computed from the C++ layer. It could also be written statically into the shader, like the yuv2rgb conversion matrix; passing it in from outside just makes debugging easier and improves readability.

precision highp float;
uniform sampler2D SamplerY;
uniform sampler2D SamplerU;
uniform sampler2D SamplerV;
mat3 colorConversionMatrix = mat3(
                   1.0, 1.0, 1.0,
                   0.0, -0.39465, 2.03211,
                   1.13983, -0.58060, 0.0);
vec3 yuv2rgb(vec2 pos)
{
   vec3 yuv;
   yuv.x = texture2D(SamplerY, pos).r;
   yuv.y = texture2D(SamplerU, pos).r - 0.5;
   yuv.z = texture2D(SamplerV, pos).r - 0.5;
   return colorConversionMatrix * yuv;
}
uniform mediump mat3 convolutionMatrix;
const int GAUSSIAN_SAMPLES = 9;
varying vec2 textureCoordinate[GAUSSIAN_SAMPLES];
void main()
{
    //mediump vec3 topLeftColor     = yuv2rgb(textureCoordinate[0]);
    //mediump vec3 topColor         = yuv2rgb(textureCoordinate[1]);
    //mediump vec3 topRightColor    = yuv2rgb(textureCoordinate[2]);
    //mediump vec3 leftColor        = yuv2rgb(textureCoordinate[3]);
    //mediump vec3 centerColor      = yuv2rgb(textureCoordinate[4]);
    //mediump vec3 rightColor       = yuv2rgb(textureCoordinate[5]);
    //mediump vec3 bottomLeftColor  = yuv2rgb(textureCoordinate[6]);
    //mediump vec3 bottomColor      = yuv2rgb(textureCoordinate[7]);
    //mediump vec3 bottomRightColor = yuv2rgb(textureCoordinate[8]);
    vec3 fragmentColor = (yuv2rgb(textureCoordinate[0]) * convolutionMatrix[0][0]);
    fragmentColor += (yuv2rgb(textureCoordinate[1]) * convolutionMatrix[0][1]);
    fragmentColor += (yuv2rgb(textureCoordinate[2]) * convolutionMatrix[0][2]);
    fragmentColor += (yuv2rgb(textureCoordinate[3]) * convolutionMatrix[1][0]);
    fragmentColor += (yuv2rgb(textureCoordinate[4]) * convolutionMatrix[1][1]);
    fragmentColor += (yuv2rgb(textureCoordinate[5]) * convolutionMatrix[1][2]);
    fragmentColor += (yuv2rgb(textureCoordinate[6]) * convolutionMatrix[2][0]);
    fragmentColor += (yuv2rgb(textureCoordinate[7]) * convolutionMatrix[2][1]);
    fragmentColor += (yuv2rgb(textureCoordinate[8]) * convolutionMatrix[2][2]);
    gl_FragColor = vec4(fragmentColor, 1.0);
}
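One detail worth spelling out (my note, not something the original calls out): with transpose set to GL_FALSE, glUniformMatrix3fv interprets the 9 floats as column-major, so in GLSL convolutionMatrix[col][row] reads element col * 3 + row of the C array. Because the Gaussian kernel is symmetric, the row/column order makes no difference here; an asymmetric kernel would need to be transposed. A small standalone check of the mapping:

#include <cstdio>

// convolutionMatrix[col][row] in GLSL == k[col * 3 + row] on the C++ side
// when uploaded with glUniformMatrix3fv(loc, 1, GL_FALSE, k).
int main() {
    const float k[9] = {
        0.0947416f, 0.118318f, 0.0947416f,
        0.118318f,  0.147761f, 0.118318f,
        0.0947416f, 0.118318f, 0.0947416f,
    };
    for (int col = 0; col < 3; ++col)
        for (int row = 0; row < 3; ++row)
            std::printf("m[%d][%d] = %f\n", col, row, k[col * 3 + row]);
    return 0;
}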

An excerpt of the C++ code:

#ifndef GPU_GAUSSIANBLUR_FILTER_HPP
#define GPU_GAUSSIANBLUR_FILTER_HPP
#include "GpuBaseFilter.hpp"

class GpuGaussianBlurFilter : public GpuBaseFilter {
public:
    virtual int getTypeId() { return FILTER_TYPE_GAUSSIANBLUR; }

    GpuGaussianBlurFilter()
    {
        GAUSSIAN_BLUR_VERTEX_SHADER ="...";
        GAUSSIAN_BLUR_FRAGMENT_SHADER ="...";
    }

    void init() {
        GpuBaseFilter::init(GAUSSIAN_BLUR_VERTEX_SHADER.c_str(), GAUSSIAN_BLUR_FRAGMENT_SHADER.c_str());
        mWidthFactorLocation = glGetUniformLocation(getProgram(), "widthFactor");
        mHeightFactorLocation = glGetUniformLocation(getProgram(), "heightFactor");
        mSampleOffsetLocation = glGetUniformLocation(getProgram(), "offset");
        mUniformConvolutionMatrix = glGetUniformLocation(getProgram(), "convolutionMatrix");
        mSampleOffset = 0.0f;
        // convolution kernel of the Gaussian filter (the normalized weights computed above)
        convolutionKernel = new GLfloat[9]{
                0.0947416f, 0.118318f, 0.0947416f,
                0.118318f,  0.147761f, 0.118318f,
                0.0947416f, 0.118318f, 0.0947416f,
        };
    }
    void onOutputSizeChanged(int width, int height) {
        GpuBaseFilter::onOutputSizeChanged(width, height);
        glUseProgram(getProgram()); // bind the program before setting its uniforms
        glUniform1f(mWidthFactorLocation, 1.0f / width);
        glUniform1f(mHeightFactorLocation, 1.0f / height);
    }
    void setAdjustEffect(float percent) {
        // dynamically adjust the sampling radius
        mSampleOffset = range(percent * 100.0f, 0.0f, 3.0f);
    }

    void onDraw(GLuint SamplerY_texId, GLuint SamplerU_texId, GLuint SamplerV_texId,
                void* positionCords, void* textureCords)
    {
        if (!mIsInitialized)
            return;
        glUseProgram(mGLProgId);
        // pass in the Gaussian convolution kernel
        glUniformMatrix3fv(mUniformConvolutionMatrix, 1, GL_FALSE, convolutionKernel);
        glUniform1f(mSampleOffsetLocation, mSampleOffset);
        glUniform1f(mWidthFactorLocation, 1.0f / mOutputWidth);
        glUniform1f(mHeightFactorLocation, 1.0f / mOutputHeight);

        glVertexAttribPointer(mGLAttribPosition, 2, GL_FLOAT, GL_FALSE, 0, positionCords);
        glEnableVertexAttribArray(mGLAttribPosition);
        glVertexAttribPointer(mGLAttribTextureCoordinate, 2, GL_FLOAT, GL_FALSE, 0, textureCords);
        glEnableVertexAttribArray(mGLAttribTextureCoordinate);

        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, SamplerY_texId);
        glUniform1i(mGLUniformSampleY, 0);
        glActiveTexture(GL_TEXTURE1);
        glBindTexture(GL_TEXTURE_2D, SamplerU_texId);
        glUniform1i(mGLUniformSampleU, 1);
        glActiveTexture(GL_TEXTURE2);
        glBindTexture(GL_TEXTURE_2D, SamplerV_texId);
        glUniform1i(mGLUniformSampleV, 2);
        // onDrawArraysPre
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
        glDisableVertexAttribArray(mGLAttribPosition);
        glDisableVertexAttribArray(mGLAttribTextureCoordinate);
        glBindTexture(GL_TEXTURE_2D, 0);
    }
};
#endif // GPU_GAUSSIANBLUR_FILTER_HPP
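For context, here is a rough sketch of how the filter might be driven from a render loop. It is based only on the methods shown in the excerpt above; the surrounding setup (GL context creation, YUV texture upload) is assumed to happen elsewhere in the project, and the caller itself is hypothetical:

#include "GpuGaussianBlurFilter.hpp"

// Hypothetical caller, illustration only.
void renderFrame(GLuint yTexId, GLuint uTexId, GLuint vTexId, int width, int height)
{
    // Full-screen quad matching glDrawArrays(GL_TRIANGLE_STRIP, 0, 4)
    static const float positionCords[] = { -1.0f, -1.0f,  1.0f, -1.0f,
                                           -1.0f,  1.0f,  1.0f,  1.0f };
    static const float textureCords[]  = {  0.0f,  0.0f,  1.0f,  0.0f,
                                            0.0f,  1.0f,  1.0f,  1.0f };
    static GpuGaussianBlurFilter filter;
    static bool initialized = false;
    if (!initialized) {
        filter.init();                             // compile shaders, look up uniform locations
        filter.onOutputSizeChanged(width, height); // upload the 1/width and 1/height factors
        initialized = true;
    }
    filter.setAdjustEffect(0.01f); // UI percentage controlling the sampling radius
    filter.onDraw(yTexId, uTexId, vTexId, (void*)positionCords, (void*)textureCords);
}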

The above is the implementation of Gaussian filtering. The recorded video effect loses too much fidelity when converted to a GIF, so I won't post one here; if you are interested, run the demo project and see the effect for yourself.

Project address: https://github.com/MrZhaozhirong/NativeCppApp   Gaussian blur filter: cpp/gpufilter/filter/GpuGaussianBlurFilter.hpp

That is all.

Interest discussion group: 703531738. Code: Zhige 13567

References

Gaussian distribution algorithm: https://www.cnblogs.com/invisible2/p/9177018.html

Original article: blog.csdn.net/a360940265a/article/details/107508059