OpenGL ES case study: Douyin filter implementations

1. Zoom filter

1. Effect:

zoom filter.gif

2. Shader code

Here the scaling is implemented in the vertex shader. It can also be implemented in the fragment shader, and in fact we recommend doing it there; we implement it in the vertex shader simply to show that it can be done that way too.

1) Vertex shader

attribute vec4 Position;
attribute vec2 TextureCoords;
varying vec2 TextureCoordsVarying;
//Timestamp (updated from a timer callback): increases monotonically from 0
uniform float Time;
const float PI = 3.1415926;

void main(){
    //Duration of one zoom cycle
    float duration = 0.6;
    //Maximum zoom amplitude
    float maxAmplitude = 0.3;
    
    //Map the incoming time into one cycle, i.e. constrain time to 0.0 ~ 0.6
    //mod(a, b) is the modulo operation, equivalent to a % b; GLSL does not support the % operator
    float time = mod(Time, duration);
    
    //amplitude is the zoom factor; PI is introduced so that the sin function keeps amplitude in 1.0 ~ 1.3, varying over time
    //abs() is not strictly needed here, because the angle stays in [0, π] and sin never goes negative
    float amplitude = 1.0 + maxAmplitude * abs(sin(time * (PI / duration)));
    
    //The key enlarging step: multiply the vertex x and y by the zoom factor (the amplitude);
    //with the texture coordinates unchanged, this stretches the image
    //x and y are scaled, z and w stay unchanged
    gl_Position = vec4(Position.x * amplitude, Position.y * amplitude, Position.zw);
    
    //Pass the texture coordinates through to TextureCoordsVarying
    TextureCoordsVarying = TextureCoords;
}

The key line is:

float amplitude = 1.0 + maxAmplitude * abs(sin(time * (PI / duration)));

sin(time * (PI / duration)) is the same as sin(time / duration * PI). Writing the angle as α = time / duration * π: first compute the fraction of the animation time that has elapsed, multiply that fraction by π to get the corresponding angle, and finally take sin(α). In general sin(α) ranges over [-1, 1], so abs() is applied to guarantee a non-negative result (here α already stays within [0, π], so the abs() is just a safeguard).
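As a sanity check, the amplitude curve can be reproduced on the CPU. Below is a minimal Python sketch of the same formula; the function name and defaults simply mirror the shader variables:

```python
import math

def amplitude(t, duration=0.6, max_amplitude=0.3):
    """Mirror of the vertex-shader formula: 1.0 + maxAmplitude * |sin(time * PI / duration)|."""
    time = math.fmod(t, duration)  # GLSL mod(Time, duration)
    return 1.0 + max_amplitude * abs(math.sin(time * (math.pi / duration)))

# The scale starts at 1.0, peaks at 1.3 mid-cycle, and returns to 1.0:
print(amplitude(0.0))  # 1.0
print(amplitude(0.3))  # ~1.3 (peak at half the duration)
print(amplitude(0.6))  # ~1.0 (the next cycle starts)
```

Sampling the function over a full period confirms the zoom factor oscillates between 1.0 and 1.3, which is exactly the "breathing" look of the filter.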

2) Fragment shader

precision highp float;

uniform sampler2D Texture;
varying vec2 TextureCoordsVarying;

void main (void) {
    vec4 mask = texture2D(Texture, TextureCoordsVarying);
    gl_FragColor = vec4(mask.rgb, 1.0);
}

2. Soul out-of-body filter

"Soul out of body" is really a blend of two copies of the same texture. Since both copies come from one texture, we do not need to pass two textures; we just compute the original texture coordinates and the enlarged texture coordinates at the same time, fetch both texels, and blend them.
The approximate process is:

During the animation the enlarged copy scales from 1.0 up to 1.8 while its transparency falls from 0.4 to 0.0, and the process repeats.

1. Effect:

Soul out of body.gif

2. Shader code

1) Vertex shader

attribute vec4 Position;
attribute vec2 TextureCoords;
varying vec2 TextureCoordsVarying;

void main (void) {
    gl_Position = Position;
    TextureCoordsVarying = TextureCoords;
}

2) Fragment shader

precision highp float;

uniform sampler2D Texture;
varying vec2 TextureCoordsVarying;
//Current time
uniform float Time;

void main (void) {
    //Total animation duration
    float duration = 0.7;
    //Maximum transparency
    float maxAlpha = 0.4;
    //Maximum zoom factor
    float maxScale = 1.8;
    //Overall animation progress
    float progress = mod(Time, duration) / duration; // 0~1
    //Transparency from the progress; it decreases from 0.4, so multiply the maximum by the remaining progress
    float alpha = maxAlpha * (1.0 - progress);
    //Zoom factor from the progress; the zoom range is [1.0, 1.8], so it is (max - 1) * progress + 1.0
    float scale = 1.0 + (maxScale - 1.0) * progress;
    //X and Y texture coordinates after enlarging
    float weakX = 0.5 + (TextureCoordsVarying.x - 0.5) / scale;
    float weakY = 0.5 + (TextureCoordsVarying.y - 0.5) / scale;
    vec2 weakTextureCoords = vec2(weakX, weakY);
    //Fetch the enlarged texel
    vec4 weakMask = texture2D(Texture, weakTextureCoords);
    //Fetch the original texel
    vec4 mask = texture2D(Texture, TextureCoordsVarying);
    //Blend the original and enlarged texels
    gl_FragColor = mask * (1.0 - alpha) + weakMask * alpha;
}

  • To obtain the progress we use the modulo method: take the timestamp modulo the duration, then divide by the duration to get a value in [0, 1], i.e. a percentage:

float progress = mod(Time, duration) / duration; // 0~1

  • The texture enlargement is done here in the fragment shader:

//X and Y texture coordinates after enlarging
    float weakX = 0.5 + (TextureCoordsVarying.x - 0.5) / scale;
    float weakY = 0.5 + (TextureCoordsVarying.y - 0.5) / scale;
    vec2 weakTextureCoords = vec2(weakX, weakY);

  • As for the last line of code:

 gl_FragColor = mask * (1.0 - alpha) + weakMask * alpha;

This is really OpenGL's color blending equation. If you are interested, you can read my earlier article about color blending.
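The whole per-fragment computation can also be traced in plain Python. This is a verification sketch only; `tex` is a hypothetical stand-in for `texture2D`:

```python
import math

def soul_out_color(tex, coord, t, duration=0.7, max_alpha=0.4, max_scale=1.8):
    """Mirror of the fragment shader: blend the original sample with an enlarged one."""
    progress = math.fmod(t, duration) / duration      # 0 ~ 1
    alpha = max_alpha * (1.0 - progress)              # 0.4 -> 0.0
    scale = 1.0 + (max_scale - 1.0) * progress        # 1.0 -> 1.8
    # Enlarging the image = pulling the sampling coordinates toward the center (0.5, 0.5)
    weak = tuple(0.5 + (c - 0.5) / scale for c in coord)
    orig_color = tex(coord)
    weak_color = tex(weak)
    return tuple(o * (1.0 - alpha) + w * alpha
                 for o, w in zip(orig_color, weak_color))

# A fake "texture" whose color encodes its coordinates, just to trace the math;
# at t=0 the enlarged coordinates equal the original ones, so the blend is a no-op:
tex = lambda c: (c[0], c[1], 0.0)
print(soul_out_color(tex, (0.8, 0.8), 0.0))
```

At t = 0 the scale is 1.0, so both samples coincide and the output equals the input; as t approaches the duration, the enlarged copy fades out, which is exactly the ghosting effect.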

3. Dither filter

The dither filter is a combination of a zoom effect and a color offset. That is, while the texture is zooming in, its color channels are shifted.

1. Effect:

Screen recording 2020-08-14 am 11.gif

2. Shader code:

The vertex shader needs no changes here; we can reuse the soul out-of-body code directly, so we only look at the fragment shader:

void main(){
    //Duration of one dither cycle
    float duration = 0.7;
    //Upper limit of the zoom
    float maxScale = 1.1;
    //Step size of the color offset
    float offset = 0.02;
    
    //Progress, 0 ~ 1
    float progress = mod(Time, duration) / duration;
    //Color offset, 0 ~ 0.02
    vec2 offsetCoords = vec2(offset, offset) * progress;
    //Zoom factor, 1.0 ~ 1.1
    float scale = 1.0 + (maxScale - 1.0) * progress;
    
    //Texture coordinates after enlarging
    //This vector add/subtract form is equivalent to computing x and y separately
    //and combining them, as done in the soul out-of-body filter
    vec2 ScaleTextureCoords = vec2(0.5, 0.5) + (TextureCoordsVarying - vec2(0.5, 0.5)) / scale;
    
    //Fetch three texels; the offsets can be chosen freely as long as they stay small
    //Original coordinates + offset
    vec4 maskR = texture2D(Texture, ScaleTextureCoords + offsetCoords);
    //Original coordinates - offset
    vec4 maskB = texture2D(Texture, ScaleTextureCoords - offsetCoords);
    //Original coordinates
    vec4 mask = texture2D(Texture, ScaleTextureCoords);

    //Take red R, green G, blue B and alpha A from the three samples and write them into the built-in gl_FragColor
    gl_FragColor = vec4(maskR.r, maskB.g, mask.b, mask.a);
}
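The channel-splitting step at the end can be illustrated outside GLSL as well. Below is a Python sketch with a hypothetical `tex` sampling function; it assembles the output channels exactly the way the shader above does:

```python
import math

def dither_color(tex, coord, t, duration=0.7, max_scale=1.1, offset=0.02):
    """Mirror of the dither fragment shader: zoom in and shift the color channels."""
    progress = math.fmod(t, duration) / duration
    off = (offset * progress, offset * progress)      # 0 ~ 0.02
    scale = 1.0 + (max_scale - 1.0) * progress        # 1.0 ~ 1.1
    scaled = tuple(0.5 + (c - 0.5) / scale for c in coord)
    plus  = tex((scaled[0] + off[0], scaled[1] + off[1]))  # sample shifted by +offset
    minus = tex((scaled[0] - off[0], scaled[1] - off[1]))  # sample shifted by -offset
    base  = tex(scaled)
    # Channels assembled as in the shader: R from +offset, G from -offset, B/A from base
    return (plus[0], minus[1], base[2], base[3])
```

With zero elapsed time all three samples coincide, so the output equals the plain sample; as the progress grows, the red and green channels drift apart, which is the color-fringing part of the effect.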

4. Flash white filter

The flash white filter adds a white layer to the image texture. The transparency of the layer first increases from 0 to 1, then decreases from 1 to 0, and the process is repeated.

1. Effect

flash white filter.gif

2. Shader code

Similarly, we only need to modify the fragment shader code.

precision highp float;

uniform sampler2D Texture;
varying vec2 TextureCoordsVarying;

uniform float Time;

const float PI = 3.1415926;

void main (void) {
    float duration = 0.6;
    
    float time = mod(Time, duration);
    //The white layer
    vec4 whiteMask = vec4(1.0, 1.0, 1.0, 1.0);
    //Compute the progress
    float amplitude = abs(sin(time * (PI / duration)));
    
    vec4 mask = texture2D(Texture, TextureCoordsVarying);
    
    gl_FragColor = mask * (1.0 - amplitude) + whiteMask * amplitude;
}

The transparency of the white layer first increases from 0 to 1 and then decreases from 1 to 0, so we use a sine curve for the progress calculation.
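The mixing step can be sketched on the CPU to see the white flash appear and disappear. A minimal Python reproduction of the shader math (function name chosen for illustration):

```python
import math

def flash_white(color, t, duration=0.6):
    """Mirror of the flash-white shader: mix the texel with pure white by a sine-shaped amount."""
    time = math.fmod(t, duration)
    amplitude = abs(math.sin(time * (math.pi / duration)))  # 0 -> 1 -> 0 over one cycle
    white = (1.0, 1.0, 1.0, 1.0)
    return tuple(c * (1.0 - amplitude) + w * amplitude for c, w in zip(color, white))

print(flash_white((0.2, 0.4, 0.6, 1.0), 0.3))  # mid-cycle: (almost exactly) pure white
```

At t = 0 the amplitude is 0 and the original color passes through unchanged; at half the duration the amplitude reaches 1 and the frame is fully white.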

5. Glitch filter

The glitch filter is really image tearing plus a color offset. Only a small portion of the image should be torn, otherwise the picture becomes hard to recognize.
We let each row of pixels be randomly offset by a distance in -1 ~ 1 (in texture-coordinate units). Because only a small portion should be torn, the logic is to set a threshold: if the row's random offset is below the threshold, it is applied in full; if it exceeds the threshold, it is multiplied by a reduction factor, so the image still displays normally.
So the final effect is: most rows shift only slightly, and a small number of rows shift strongly.
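The thresholding logic just described can be sketched in Python; `jitter` stands for the row's random offset in [-1, 1] and the names mirror the shader below:

```python
def torn_x(x, jitter, amplitude, max_jitter=0.06):
    """Mirror of the glitch shader's per-row offset decision.

    Rows whose |jitter| falls below maxJitter * amplitude are torn fully;
    all others are shifted by a strongly reduced amount.
    """
    need_offset = abs(jitter) < max_jitter * amplitude
    return x + (jitter if need_offset else jitter * amplitude * 0.006)

# A small |jitter| passes the threshold and is applied in full;
# a large one is scaled way down, so the row barely moves:
print(torn_x(0.5, 0.03, 1.0))  # ≈ 0.53 (torn in full)
print(torn_x(0.5, 0.8, 1.0))   # ≈ 0.5048 (barely moved)
```

Note the naming is slightly counter-intuitive: values *below* the threshold produce the big tears, because the threshold itself (maxJitter * amplitude) is small.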

1. Effect:

glitch filter.gif

2. Shader code:

precision highp float;
uniform sampler2D Texture;
varying vec2 TextureCoordsVarying;
//Timestamp
uniform float Time;
//PI constant
const float PI = 3.1415926;
//Pseudo-random number generator
float rand(float n){
    //fract(x) returns the fractional part of x
    //sin(n) is multiplied by a huge constant so that the fractional part
    //jumps around chaotically, which makes the output look like noise;
    //keeping only the fractional part constrains the result to [0, 1)
    return fract(sin(n) * 43758.5453123);
}

void main(){
    //Upper limit of the jitter
    float maxJitter = 0.06;
    //Duration of one glitch cycle
    float duration = 0.3;
    //Red channel offset
    float colorROffset = 0.01;
    //Blue channel offset
    float colorBOffset = -0.025;
    
    //Map the incoming time into one period; the range is 0 ~ 0.6, so one full glitch lasts 0.6
    float time = mod(Time, duration * 2.0);
    //Amplitude, varies over time in the range [0, 1]
    float amplitude = max(sin(time * (PI / duration)), 0.0);
    
    //Random per-row offset in -1 ~ 1; * 2.0 - 1.0 maps the random value from [0, 1] to [-1, 1]
    float jitter = rand(TextureCoordsVarying.y) * 2.0 - 1.0;
    //Decide whether this row gets the full offset: |jitter| below maxJitter * amplitude
    //abs(jitter) is in [0, 1]
    //maxJitter * amplitude is in [0, 0.06]
    bool needOffset = abs(jitter) < maxJitter * amplitude;
    
    //Compute the torn x coordinate depending on needOffset:
    //needOffset = true: apply the full tear
    //needOffset = false: reduce the tear by multiplying with the amplitude and a tiny factor
    float textureX = TextureCoordsVarying.x + (needOffset ? jitter : (jitter * amplitude * 0.006));
    //Texture coordinates after the tear
    vec2 textureCoords = vec2(textureX, TextureCoordsVarying.y);
    
    //Color offset: fetch three texels
    //The torn original color
    vec4 mask = texture2D(Texture, textureCoords);
    //Texel sampled with the red channel's offset applied to the torn coordinates
    vec4 maskR = texture2D(Texture, textureCoords + vec2(colorROffset * amplitude, 0.0));
    //Texel sampled with the blue channel's offset applied to the torn coordinates
    vec4 maskB = texture2D(Texture, textureCoords + vec2(colorBOffset * amplitude, 0.0));
    
    //The color tearing mainly affects the red and blue channels, so green is kept from the original
    gl_FragColor = vec4(maskR.r, mask.g, maskB.b, mask.a);
}

Here we define a helper function, float rand(float n). In a shader file we are not limited to the main function; we can define our own functions to handle some of the logic and keep the main code clean.
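The pseudo-random trick is easy to verify outside a shader. A Python reproduction (fract(x) implemented as x - floor(x), which matches GLSL's definition):

```python
import math

def rand(n):
    """GLSL-style hash: fract(sin(n) * 43758.5453123) yields a value in [0, 1)."""
    x = math.sin(n) * 43758.5453123
    return x - math.floor(x)  # fract(): keep only the fractional part

# Deterministic per input, but jumps around enough to look like noise per row:
print([round(rand(y / 100.0), 4) for y in range(5)])
```

Because the output depends only on the input, every fragment on the same row (same y) gets the same offset, which is exactly what produces the horizontal tearing.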

6. Illusion filter

The illusion filter is really an afterimage combined with a color shift.

  • Afterimage: every short interval a new layer is created, and that layer is predominantly red. Its transparency decreases over time, so within one cycle you see many layers of different transparency superimposed on each other, forming an afterimage as the picture moves along a circular path.
  • Color shift: while the picture moves, blue leads and red trails. That is, during the movement, at intervals, part of the red channel value is left behind at the previous position, and that lost red gradually recovers as time goes on.
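The afterimage path is a circle: the sampling coordinates are translated by a (sin, cos) vector of the cycle angle. A Python sketch of that coordinate math (it mirrors the getMask function in the shader below):

```python
import math

def translated_coords(coord, t, padding, duration=2.0):
    """Mirror of getMask's coordinate math: offset the texture coordinates
    along a circle of radius `padding` as time advances."""
    angle = t * (2.0 * math.pi / duration)
    translation = (math.sin(angle), math.cos(angle))  # traces a unit circle over one cycle
    return (coord[0] + padding * translation[0],
            coord[1] + padding * translation[1])

# Over a full cycle the sample point orbits the original coordinate:
c = (0.5, 0.5)
print(translated_coords(c, 0.0, 0.1))  # (0.5, 0.6): top of the circle
print(translated_coords(c, 0.5, 0.1))  # ≈ (0.6, 0.5): a quarter turn later
```

Each layer is created at a different start time, so each one sits at a different point on this circle; their superposition is the circular afterimage trail.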

1. Effect:

Illusion filter.gif

2. Code:

precision highp float;
//Texture sampler
uniform sampler2D Texture;
//Texture coordinates
varying vec2 TextureCoordsVarying;
//Timestamp
uniform float Time;
//PI constant
const float PI = 3.1415926;
//Duration of one illusion cycle
const float duration = 2.0;

//Computes the picture's position at a given moment; with it we can generate a new layer every time a fixed interval passes
vec4 getMask(float time, vec2 textureCoords, float padding)
{
    //Coordinates on a circle
    vec2 translation = vec2(sin(time * (PI * 2.0 / duration)), cos(time * (PI * 2.0 / duration)));
    //Texture coordinates = texture coordinates + padding * circle coordinates
    vec2 translationTextureCoords = textureCoords + padding * translation;
    //Fetch the new layer's texel at these coordinates
    vec4 mask = texture2D(Texture, translationTextureCoords);
    return mask;
}

//Computes, for a layer created at startTime, its transparency progress at the current moment
float maskAlphaProgress(float currentTime, float hideTime, float startTime) {
    //(duration + currentTime - startTime) mod duration
    float time = mod(duration + currentTime - startTime, duration);
    return min(time, hideTime);
}

void main (void) {
    //Map the incoming time into one period, i.e. time is in 0 ~ 2.0
    float time = mod(Time, duration);
    //Zoom factor
    float scale = 1.2;
    //Offset amount
    float padding = 0.5 * (1.0 - 1.0 / scale);
    //Texture coordinates after enlarging
    vec2 textureCoords = vec2(0.5, 0.5) + (TextureCoordsVarying - vec2(0.5, 0.5)) / scale;
    //Hide time
    float hideTime = 0.9;
    //Interval between layers
    float timeGap = 0.2;
    //Note: mainly the red channel's transparency is kept, because the illusion effect leaves a red residue
    //Max alpha of the new layer's red channel
    float maxAlphaR = 0.5;
    //Max alpha of the new layer's green channel
    float maxAlphaG = 0.05;
    //Max alpha of the new layer's blue channel
    float maxAlphaB = 0.05;
    //Fetch the current layer's texel
    vec4 mask = getMask(time, textureCoords, padding);
    float alphaR = 1.0; // R
    float alphaG = 1.0; // G
    float alphaB = 1.0; // B
    //The final accumulated color
    vec4 resultMask = vec4(0, 0, 0, 0);
    //Accumulate the layers created over the 0 ~ 2.0s cycle
    for (float f = 0.0; f < duration; f += timeGap)
    {
        float tmpTime = f;
        //Texture sample of the layer created at tmpTime, after its circular motion
        vec4 tmpMask = getMask(tmpTime, textureCoords, padding);
        //Per-channel transparency, at the current moment, of the layer created at tmpTime
        float tmpAlphaR = maxAlphaR - maxAlphaR * maskAlphaProgress(time, hideTime, tmpTime) / hideTime;
        float tmpAlphaG = maxAlphaG - maxAlphaG * maskAlphaProgress(time, hideTime, tmpTime) / hideTime;
        float tmpAlphaB = maxAlphaB - maxAlphaB * maskAlphaProgress(time, hideTime, tmpTime) / hideTime;
        //Accumulate each layer's channels multiplied by their transparency
        resultMask += vec4(tmpMask.r * tmpAlphaR, tmpMask.g * tmpAlphaG, tmpMask.b * tmpAlphaB, 1.0);
        //Decrease the remaining transparency
        alphaR -= tmpAlphaR;
        alphaG -= tmpAlphaG;
        alphaB -= tmpAlphaB;
    }
    //Final color += current layer's RGB * the remaining transparency
    resultMask += vec4(mask.r * alphaR, mask.g * alphaG, mask.b * alphaB, 1.0);
    //Write the final color to the fragment
    gl_FragColor = resultMask;
}

As with the glitch filter, we define helper functions (two of them this time) to do the calculations for us:

  • vec4 getMask(float time, vec2 textureCoords, float padding): This function can calculate the specific position of the picture at a certain moment. Through it, we can generate a new layer every time a period of time passes.
  • float maskAlphaProgress(float currentTime, float hideTime, float startTime): This function can calculate the transparency of the layer created at a certain moment at the current moment.

All the previous demo effects ran in the simulator, including the first five filters here, but the sixth one really could not. Maybe my computer is getting old! Haha — actually it is because the simulator uses the CPU to emulate the GPU. The earlier effects were simple enough that letting the CPU stand in for the GPU still worked, but the illusion filter stacks many layers and the amount of computation grows sharply; the CPU can no longer keep up, so a real device is needed, letting the GPU do the job it is made for. In other words, let whoever is best at the job do it!

Origin: blog.csdn.net/qq_21743659/article/details/121637200#comments_30339203