Implementations of several camera filter algorithms in OpenGL ES GLSL

        This article is based on the open-source project nekocode/CameraFilter (project address: https://github.com/nekocode/CameraFilter ; interested readers can download the project code themselves and run it on an Android system). It explains some of the filters already implemented in the project, together with several existing filter algorithms that I implemented on top of it. The project code itself is fairly simple and is built on OpenGL ES, so you can use it to quickly implement algorithms of your own design; these need not be limited to filters, but we will refer to them collectively as filters here.

       The core algorithms of the filters in this project are essentially all implemented in the fragment shader, so the explanations below focus on the corresponding fragment shader code. The fragment shader's input and output parameters are briefly described next.

Throughout this article, the fragment shader's input parameters are denoted texCoord and iChannel0, the texture coordinates and the texture data respectively; the output parameter is gl_FragColor, the computed color value at coordinate texCoord. Keeping this convention clear is particularly important for understanding the rest of the article.
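
For reference, here is a minimal sketch of the declarations the shaders below rely on (the exact precision qualifiers and uniform set may differ in the actual project code; iResolution and iGlobalTime only appear in some of the filters):

#ifdef GL_ES
precision mediump float;
#endif
varying vec2 texCoord;        // normalized texture coordinate, in [0,1]
uniform sampler2D iChannel0;  // input texture (the camera frame)
uniform vec2 iResolution;     // input size in pixels
uniform float iGlobalTime;    // elapsed time in seconds, for animated filters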

  • Triple (three-panel split screen)

void main() {
    // Each horizontal band samples the middle third of the input,
    // producing three stacked copies of the same image.
    if (texCoord.y <= 0.333) {
        gl_FragColor = texture2D(iChannel0, vec2(texCoord.x, texCoord.y + 0.333));
    } else if (texCoord.y <= 0.666) {
        gl_FragColor = texture2D(iChannel0, texCoord);
    } else {
        gl_FragColor = texture2D(iChannel0, vec2(texCoord.x, texCoord.y - 0.333));
    }
}
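
In other words, every output row samples from the middle third of the input:

y_src = y + 1/3  (y ≤ 1/3),  y_src = y  (1/3 < y ≤ 2/3),  y_src = y − 1/3  (y > 2/3)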
  • Relief (emboss)

const highp vec3 transMatrix = vec3(0.2125, 0.7154, 0.0721); // Rec.709 luminance weights
const vec4 bgColor = vec4(0.5, 0.5, 0.5, 1.0);               // mid-gray base color
void main() {
    vec2 currentUV = texCoord;
    // neighbor sample offset by 5 pixels along each axis
    vec2 preUV = vec2(currentUV.x - 5.0/iResolution.x, currentUV.y - 5.0/iResolution.y);
    vec4 currentMask = texture2D(iChannel0, currentUV);
    vec4 preMask = texture2D(iChannel0, preUV);
    vec4 delColor = currentMask - preMask;            // difference between the two samples
    float luminance = dot(delColor.rgb, transMatrix); // luminance of the difference
    gl_FragColor = vec4(vec3(luminance), 0.0) + bgColor;
}
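
The emboss value at each pixel is the luminance of the difference between that pixel and a neighbor d = (5/W, 5/H) away, shifted onto mid-gray so that flat regions come out gray and edges come out bright or dark:

gray = 0.2125·ΔR + 0.7154·ΔG + 0.0721·ΔB + 0.5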
  • Whirlpool (Tai Chi diagram)

const float PI = 3.14159265;
const float rotateRadian = PI/3.0;  // base swirl angle (doubled at the center)
const float radiusRatio = 0.8;      // swirl radius relative to the shorter edge
const float center = 0.5;           // swirl center in normalized coordinates
void main() {
    float radius = min(iResolution.x, iResolution.y) * radiusRatio / 2.0;
    // work in pixel coordinates so the swirl stays circular
    vec2 currentUV = texCoord * iResolution.xy;
    vec2 centerUV = iResolution.xy * center;
    vec2 deltaUV = currentUV - centerUV;
    float deltaR = length(deltaUV);
    // rotation angle falls off quadratically from the center to the swirl edge
    float beta = atan(deltaUV.y, deltaUV.x)
               + rotateRadian * 2.0 * (1.0 - (deltaR/radius)*(deltaR/radius));
    vec2 dstUV = currentUV;
    if (deltaR <= radius) {
        dstUV = centerUV + deltaR * vec2(cos(beta), sin(beta));
    }
    gl_FragColor = texture2D(iChannel0, dstUV / iResolution.xy);
}
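
Within the swirl radius R, a pixel at distance r and polar angle θ from the center is rotated by an angle that falls off quadratically and reaches zero at r = R, so the swirl blends seamlessly into the untouched exterior:

θ′ = θ + 2·(π/3)·(1 − (r/R)²),  r ≤ R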
  • Casting

void main() {
    vec4 mask = texture2D(iChannel0, texCoord);
    // boost each channel relative to the sum of the other two;
    // the 0.01 term guards against division by zero
    vec4 tempColor = vec4(mask.r * 0.5 / (mask.g + mask.b + 0.01),
                          mask.g * 0.5 / (mask.r + mask.b + 0.01),
                          mask.b * 0.5 / (mask.r + mask.g + 0.01),
                          1.0);
    gl_FragColor = tempColor;
}
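
Each output channel measures how strongly that channel dominates the other two, which gives the filter its molten-metal look:

R′ = 0.5·R/(G + B + 0.01), and analogously for G′ and B′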
  • Edge detection

#extension GL_OES_standard_derivatives : enable // needed for dFdx/dFdy on OpenGL ES 2.0
void main() {
    vec4 color = texture2D(iChannel0, texCoord);
    float gray = length(color.rgb); // brightness proxy
    // white where the brightness gradient is steep, black elsewhere
    gl_FragColor = vec4(vec3(step(0.06, length(vec2(dFdx(gray), dFdy(gray))))), 1.0);
}
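
The shader thresholds the magnitude of the screen-space brightness gradient; any pixel where the gradient exceeds 0.06 is drawn as a white edge:

edge = step(0.06, √((∂g/∂x)² + (∂g/∂y)²)),  g = ‖(R, G, B)‖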
  • Wave

void main() {
    float strength = 0.3;
    vec2 uv = texCoord.xy;
    // horizontal offset oscillates down the image and drifts with time
    float waveu = sin((uv.y + iGlobalTime) * 20.0) * 0.5 * 0.05 * strength;
    gl_FragColor = texture2D(iChannel0, uv + vec2(waveu, 0.0));
}
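
The horizontal displacement is a sine of the vertical coordinate, phase-shifted by iGlobalTime so the ripple travels as the video plays:

Δx = 0.025·strength·sin(20·(y + t))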

With the continuous development of technology, various sensors are being integrated into computers, mobile phones, robots and other devices. By combining the physical information these sensors perceive with image recognition algorithms, we can achieve more powerful real-time effects.

For example, the short-video applications that have become popular in recent years, such as Douyin (TikTok) and Kuaishou, offer hundreds of special effects. Several of Douyin's effects additionally require face recognition or depth-of-field measurement, but the principle of replacing the selected pixels is similar to what has been described in this article. Applied to a static image, such effects would be fairly easy to implement; the difficulty is that in dynamic video scenes they must still run in real time, with no perceptible delay or other artifacts and with low power consumption, which demands very good performance optimization.
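
As an illustration only (not code from the project), here is a minimal sketch of mask-based pixel replacement. It assumes a hypothetical second texture iChannel1 holding a segmentation mask produced by an external face or portrait recognizer, with values near 1.0 marking the subject:

// Hypothetical: iChannel1 holds a segmentation mask from an external
// recognizer (1.0 = subject, 0.0 = background).
uniform sampler2D iChannel1;
void main() {
    vec4 original = texture2D(iChannel0, texCoord);
    float mask = texture2D(iChannel1, texCoord).r;
    // example background effect: grayscale everything outside the subject
    vec3 bg = vec3(dot(original.rgb, vec3(0.2125, 0.7154, 0.0721)));
    gl_FragColor = vec4(mix(bg, original.rgb, mask), 1.0);
}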

In short, understanding and mastering the basic principles and methods of image processing remains very important; from these principles and methods you can create an endless variety of effects.

Notes:

(1) In the examples in this article, both the coordinate values and the color values are normalized to the range [0, 1]; for example, an 8-bit channel value of 128 corresponds to 128/255 ≈ 0.5.

(2) From a signal-processing perspective, image processing divides into spatial-domain processing and frequency-domain processing. The algorithms covered in this article are all spatial-domain algorithms.

 

Reference material

https://www.jianshu.com/p/a771639ffbbb

https://github.com/nekocode/CameraFilter
