OpenCV study notes (20): cvFilter2D() convolution and handling of convolution boundaries

20.1 cvFilter2D() convolution:

void cvFilter2D(
const CvArr* src,
CvArr* dst,
const CvMat* kernel,
CvPoint anchor=cvPoint(-1,-1)
);
src
input image.

dst
output image.

kernel
convolution kernel: a single-channel floating-point matrix. To apply different kernels to different channels, first split the image into separate color planes with cvSplit(), filter each plane separately, and then merge them back (see the sketch at the end of this subsection).

anchor
anchor of the kernel: the position, within the kernel, of the point being filtered. The anchor must lie inside the kernel. The default value (-1,-1) places the anchor at the center of the kernel.

Here we create a matrix of the appropriate size and pass its coefficients, together with the source and destination images, to cvFilter2D(). We can also optionally pass a CvPoint that marks the anchor of the kernel; the default value cvPoint(-1,-1) places the anchor at the center of the kernel. If an anchor is given explicitly, the kernel size may be even; otherwise it must be odd.
The source image src and the destination image dst should be the same size. One might expect the source image to have to be larger than the destination image to account for the extra width and height of the convolution kernel, but in OpenCV they can be the same size: by default, before convolution, OpenCV creates virtual pixels by replicating the border of the source image src, so the border pixels of the destination image dst can be filled.
The coefficients of the convolution kernel must be of floating-point type, which means the matrix has to be created with CV_32F.
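If different kernels are needed for different channels (as noted in the kernel parameter description above), the image has to be split first. Below is a minimal sketch of that workflow; it is not part of the original article, and the file name "b.jpg" and the box-filter coefficients are placeholder assumptions:

#include "cv.h"
#include "highgui.h"
int main( void )
{
    // split a 3-channel image into planes, filter one plane, merge back
    IplImage* src = cvLoadImage( "b.jpg" );
    if( !src ) return -1;
    IplImage* b  = cvCreateImage( cvGetSize(src), IPL_DEPTH_8U, 1 );
    IplImage* g  = cvCreateImage( cvGetSize(src), IPL_DEPTH_8U, 1 );
    IplImage* r  = cvCreateImage( cvGetSize(src), IPL_DEPTH_8U, 1 );
    IplImage* bf = cvCreateImage( cvGetSize(src), IPL_DEPTH_8U, 1 );
    cvSplit( src, b, g, r, NULL );
    // example: smooth only the blue plane with a 3x3 box kernel
    float box[9] = { 1.f/9, 1.f/9, 1.f/9, 1.f/9, 1.f/9, 1.f/9, 1.f/9, 1.f/9, 1.f/9 };
    CvMat kb = cvMat( 3, 3, CV_32FC1, box );
    cvFilter2D( b, bf, &kb, cvPoint(-1,-1) );
    // the g and r planes could be filtered here with their own kernels
    IplImage* dst = cvCreateImage( cvGetSize(src), IPL_DEPTH_8U, 3 );
    cvMerge( bf, g, r, NULL, dst );
    cvSaveImage( "per_channel.jpg", dst );
    cvReleaseImage( &src ); cvReleaseImage( &b );  cvReleaseImage( &g );
    cvReleaseImage( &r );   cvReleaseImage( &bf ); cvReleaseImage( &dst );
    return 0;
}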

20.2 Convolution templates and program examples:

The following article introduces the application of convolution in detail:
http://blog.sina.com.cn/s/blog_6ac784290101e47s.html
Common convolution kernels (the original figure is not reproduced here).
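Since the figure cannot be shown, the following sketch lists a few commonly used 3x3 kernels in the row-major float-array form that cvFilter2D() expects (these are the standard textbook values, not values taken from the missing figure):

float box_blur[9] = { 1.f/9,  1.f/9,  1.f/9,      // mean (box) filter: smoothing
                      1.f/9,  1.f/9,  1.f/9,
                      1.f/9,  1.f/9,  1.f/9 };
float gauss3[9]   = { 1.f/16, 2.f/16, 1.f/16,     // 3x3 Gaussian approximation: smoothing
                      2.f/16, 4.f/16, 2.f/16,
                      1.f/16, 2.f/16, 1.f/16 };
float sharpen[9]  = {  0, -1,  0,                 // Laplacian-based sharpening
                      -1,  5, -1,
                       0, -1,  0 };
float sobel_x[9]  = { -1,  0,  1,                 // horizontal gradient (Sobel): edge detection
                      -2,  0,  2,
                      -1,  0,  1 };
// each array is wrapped the same way, e.g. CvMat k = cvMat( 3, 3, CV_32FC1, box_blur );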

20.3 Program code for OpenCV image convolution (image filtering):

#include "cv.h"
#include "highgui.h"
int main(int argc,char**argv) 
{ 
    IplImage* src, *dst;   
    float low[9] ={ 1.0/16, 2.0/16, 1.0/16, 2.0/16, 4.0/16, 2.0/16, 1.0/16, 2.0/16, 1.0/16 };  // low-pass filter kernel
    float high[9]={-1,-1,-1,-1,9,-1,-1,-1,-1};
    // high-pass filter kernel -- you can design this template yourself; a commonly used one is shown here
    CvMat km = cvMat( 3, 3, CV_32FC1, low);  
    // construct a single-channel floating-point matrix that wraps the coefficient array
    // this matrix is the kernel passed to cvFilter2D
    src = cvLoadImage( "b.jpg" ); 
    dst = cvCreateImage( cvGetSize(src), IPL_DEPTH_8U, 3 );
    cvFilter2D( src, dst, &km, cvPoint( -1, -1 ) );  
    // the anchor is set to the center of the kernel
    cvNamedWindow( "src", CV_WINDOW_AUTOSIZE );
    cvNamedWindow( "filtering", CV_WINDOW_AUTOSIZE );
    cvShowImage( "src", src );  
    cvShowImage( "filtering", dst ); 
    cvWaitKey(0); 
    cvReleaseImage( &src ); 
    cvReleaseImage( &dst ); 
    return 0; 
} 

20.4 Low-pass filtering and high-pass filtering:

Low-pass filtering: smoothing (edges are blurred)
High-pass filtering: edge extraction and enhancement

Filtering is a basic operation in signal and image processing. Filtering can remove noise from an image, extract features of interest, and allow image resampling.
Frequency domain and spatial domain in images: the spatial domain describes an image by its gray values, while the frequency domain describes an image by the changes in its gray values. The concepts of low-pass and high-pass filters come from the frequency domain.
A low-pass filter removes the high-frequency components of an image, while a high-pass filter removes the low-frequency components.
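In the program from section 20.3, switching between the two behaviours is just a matter of which array the kernel matrix wraps; a minimal fragment (reusing the low/high arrays and src/dst images defined there):

    CvMat km_low  = cvMat( 3, 3, CV_32FC1, low  );    // low-pass: dst is a smoothed (blurred) image
    CvMat km_high = cvMat( 3, 3, CV_32FC1, high );    // high-pass: edges are enhanced in dst
    cvFilter2D( src, dst, &km_low,  cvPoint(-1,-1) );
    cvFilter2D( src, dst, &km_high, cvPoint(-1,-1) );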

20.5 Convolutional Boundaries

For convolutions, a natural question is how to handle the boundaries. For example, with the convolution kernel just discussed, what happens when the point being convolved lies on the image boundary? Most OpenCV built-in functions that use cvFilter2D() must solve this problem in one way or another, and when doing convolution yourself it is also necessary to know how to handle it.
The solution is the cvCopyMakeBorder() function, which copies an image into a slightly larger one and fills the added border in one of several ways.

20.5.1 cvCopyMakeBorder()
Definition:
void cvCopyMakeBorder(
const CvArr* src,
CvArr* dst,
CvPoint offset,
int bordertype,
CvScalar value=cvScalarAll(0)
);
Parameters:
src
input image.

dst
output image (its size must already include the border, so it is larger than src).

offset
The coordinates of the upper-left corner (or lower-left corner, if the image has a lower-left origin) of the rectangle in the output image into which the input image (or its ROI) is copied. The size of that rectangle matches the size of the source image (or its ROI).
——In other words, offset specifies where in the output image the source image is placed. If, for instance, cvPoint(5,5) or cvPoint(25,25) is used as the offset, the output image should be enlarged accordingly, e.g. to cvSize(img->width+10, img->height+10) or cvSize(img->width+50, img->height+50).
——The output image must be at least offset larger than the source; to get a border on all four sides (rather than only two), enlarge it by twice the offset, hence the +10 and +50 above.

bordertype The type of border generated around the copied source rectangle:
IPL_BORDER_CONSTANT - the border is filled with a fixed value, given by the last parameter of the function (black by default).
IPL_BORDER_REPLICATE - the border is filled by replicating the topmost/bottommost rows and leftmost/rightmost columns of the source image. (The other two IPL border types, IPL_BORDER_REFLECT and IPL_BORDER_WRAP, are currently not supported by this function.)

value If the border type is IPL_BORDER_CONSTANT, then this is the value of the border pixel.

As mentioned earlier, when OpenCV's built-in filtering functions perform convolution, they call cvCopyMakeBorder() internally. In most cases the border type used is IPL_BORDER_REPLICATE, but sometimes that is not what you want, and then you can call cvCopyMakeBorder() yourself: create an image with a slightly larger border than you need, perform whatever operations you want on it, and then crop back to the region of the source image that interests you. That way OpenCV's automatic border creation never affects the pixels you care about.
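A minimal sketch of that pad, operate, then crop pattern (the file name, the 20-pixel border width, and the kernel values are assumptions, not taken from the original):

    IplImage* src    = cvLoadImage( "b.jpg" );
    IplImage* padded = cvCreateImage( cvSize(src->width+40, src->height+40),
                                      src->depth, src->nChannels );
    IplImage* filt   = cvCreateImage( cvGetSize(padded), src->depth, src->nChannels );
    IplImage* out    = cvCreateImage( cvGetSize(src),    src->depth, src->nChannels );
    // pad with a 20-pixel replicated border chosen by us, not by the filter
    cvCopyMakeBorder( src, padded, cvPoint(20,20), IPL_BORDER_REPLICATE );
    float low[9] = { 1.f/16, 2.f/16, 1.f/16, 2.f/16, 4.f/16, 2.f/16, 1.f/16, 2.f/16, 1.f/16 };
    CvMat km = cvMat( 3, 3, CV_32FC1, low );
    cvFilter2D( padded, filt, &km, cvPoint(-1,-1) );
    // crop the filtered image back to the original region
    cvSetImageROI( filt, cvRect( 20, 20, src->width, src->height ) );
    cvCopy( filt, out, NULL );
    cvResetImageROI( filt );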

20.5.2 Program Examples

#include "cv.h"
#include "cxcore.h"
#include "highgui.h"
#include <iostream>
int main(int argc,char** argv)
{
    IplImage* src=cvLoadImage("b.jpg");
    IplImage* dst1=cvCreateImage(cvSize(src->width+40,src->height+40),src->depth,src->nChannels);
    // because the offset is cvPoint(20,20), width and height must each grow by 20*2 = 40
    IplImage* dst2=cvCreateImage(cvSize(src->width+40,src->height+40),src->depth,src->nChannels);
    cvZero(dst1);
    cvZero(dst2);
    cvCopyMakeBorder(src,dst1,cvPoint(20,20),IPL_BORDER_REPLICATE); // fill the border by replicating edge pixels
    cvCopyMakeBorder(src,dst2,cvPoint(20,20),IPL_BORDER_CONSTANT);  // fill the border with a constant value (black)
    cvNamedWindow("dst1");
    cvNamedWindow("dst2");
    cvShowImage("src",src);
    cvShowImage("dst1",dst1);
    cvShowImage("dst2",dst2);
    cvWaitKey(0);
    cvDestroyWindow("src");
    cvDestroyWindow("dst1");
    cvDestroyWindow("dst2");
    cvReleaseImage(&src);
    cvReleaseImage(&dst1);
    cvReleaseImage(&dst2);
    return 0;
}
