Feature-based image matching and image-similarity algorithms

What algorithms are there for calculating image similarity?

SSIM (Structural SIMilarity) is a method for evaluating image quality.

Since human vision readily extracts structural information from images, comparing the structural information of two images can serve as a measure of image quality. First, structural information should not be affected by lighting, so the brightness must be removed when computing it; that is, the image mean is subtracted. Second, structural information should not be affected by the image's contrast, so the image variance is normalized. What remains is the structural information, and we can then simply compute the correlation coefficient of the two processed images. However, image quality is also constrained by brightness and contrast, so when evaluating quality both must be considered alongside the structural information. The commonly used formulation is as follows, where C1, C2 and C3 are small constants added for numerical stability:

L(X,Y) = (2·u(x)·u(y) + C1) / (u(x)^2 + u(y)^2 + C1), where u(x), u(y) are the means of the two images

C(X,Y) = (2·d(x)·d(y) + C2) / (d(x)^2 + d(y)^2 + C2), where d(x), d(y) are the standard deviations of the two images

S(X,Y) = (d(x,y) + C3) / (d(x)·d(y) + C3), where d(x,y) is the covariance of images x and y

Image quality: Q = [L(X,Y)^a] × [C(X,Y)^b] × [S(X,Y)^c], where a, b and c control the relative importance of the three terms; for convenience they can all be set to 1. C1, C2 and C3 take relatively small values, usually C1 = (K1×L)^2, C2 = (K2×L)^2, C3 = C2/2, where L is the dynamic range of pixel values and K1, K2 are small constants (commonly K1 = 0.01, K2 = 0.03).
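Under these definitions, a minimal SSIM sketch for two equal-sized grayscale patches might look like the following (plain arrays, no library; the constants K1 = 0.01, K2 = 0.03 and dynamic range L = 255 are the usual choices, and a = b = c = 1):

```cpp
#include <cmath>
#include <cstddef>

// SSIM between two grayscale patches of n pixels each,
// with C1 = (K1*L)^2, C2 = (K2*L)^2, C3 = C2/2 and a = b = c = 1.
double ssim(const double* x, const double* y, std::size_t n) {
    const double L = 255.0, K1 = 0.01, K2 = 0.03;
    const double C1 = (K1 * L) * (K1 * L);
    const double C2 = (K2 * L) * (K2 * L);
    const double C3 = C2 / 2.0;

    double ux = 0.0, uy = 0.0;                      // means
    for (std::size_t i = 0; i < n; ++i) { ux += x[i]; uy += y[i]; }
    ux /= n; uy /= n;

    double vx = 0.0, vy = 0.0, cov = 0.0;           // variances and covariance
    for (std::size_t i = 0; i < n; ++i) {
        vx  += (x[i] - ux) * (x[i] - ux);
        vy  += (y[i] - uy) * (y[i] - uy);
        cov += (x[i] - ux) * (y[i] - uy);
    }
    vx /= n; vy /= n; cov /= n;
    const double dx = std::sqrt(vx), dy = std::sqrt(vy); // standard deviations

    const double l = (2 * ux * uy + C1) / (ux * ux + uy * uy + C1); // luminance
    const double c = (2 * dx * dy + C2) / (dx * dx + dy * dy + C2); // contrast
    const double s = (cov + C3) / (dx * dy + C3);                   // structure
    return l * c * s;
}
```

Identical patches score 1; patches with opposite structure score below 1 (the structure term goes negative). In practice SSIM is computed over local windows and averaged, but the per-patch arithmetic is exactly the three terms above.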


How to use the NCC algorithm in opencv to realize the similarity judgment of two images

The perceptual hash algorithm generates a "fingerprint" string for each image and then compares the fingerprints of different images: the closer the fingerprints, the more similar the images.

Implementation steps:

1. Reduce size: shrink the image to 8×8, 64 pixels in total. This removes image detail, keeps only basic information such as structure and light/shade, and discards differences caused by size or aspect ratio.

2. Simplify color: convert the reduced image to 64-level grayscale, i.e. every pixel takes one of only 64 values.

3. Calculate the average: compute the mean gray level of the 64 pixels.

4. Compare pixel gray levels: compare each pixel's gray level with the average; greater than or equal to the average is recorded as 1, less than the average as 0.

5. Compute the hash: combine the results of the previous step into a 64-bit integer, which is the fingerprint of this image. The order of combination does not matter, as long as all images use the same order.

6. Once you have the fingerprints, you can compare different images by counting how many of the 64 bits differ. In theory this is the Hamming distance (in information theory, the Hamming distance between two equal-length strings is the number of positions at which the corresponding characters differ).

If no more than 5 bits differ, the two images are very similar; if more than 10 differ, they are two different images.
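The bit comparison in the last step is exactly a Hamming distance on 64-bit fingerprints; a minimal sketch, including the thresholds mentioned above:

```cpp
#include <cstdint>
#include <bitset>

// Hamming distance between two 64-bit fingerprints: the number of
// bit positions where they differ.
int hammingDistance(std::uint64_t a, std::uint64_t b) {
    return static_cast<int>(std::bitset<64>(a ^ b).count());
}

// Threshold rule from the text: at most 5 differing bits means the
// images are very similar; more than 10 means different images.
bool verySimilar(std::uint64_t a, std::uint64_t b) {
    return hammingDistance(a, b) <= 5;
}
```

XOR leaves a 1 exactly where the fingerprints disagree, so counting set bits gives the distance in a couple of instructions.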

The above content is excerpted from another source. The following is test code implemented with OpenCV:

string strSrcImageName = "";
cv::Mat matSrc, matSrc1, matSrc2;
matSrc = cv::imread(strSrcImageName, CV_LOAD_IMAGE_COLOR);
CV_Assert(matSrc.channels() == 3);

cv::resize(matSrc, matSrc1, cv::Size(357, 419), 0, 0, cv::INTER_NEAREST);
//cv::flip(matSrc1, matSrc1, 1);
cv::resize(matSrc, matSrc2, cv::Size(2177, 3233), 0, 0, cv::INTER_LANCZOS4);

cv::Mat matDst1, matDst2;
cv::resize(matSrc1, matDst1, cv::Size(8, 8), 0, 0, cv::INTER_CUBIC);
cv::resize(matSrc2, matDst2, cv::Size(8, 8), 0, 0, cv::INTER_CUBIC);
cv::cvtColor(matDst1, matDst1, CV_BGR2GRAY);
cv::cvtColor(matDst2, matDst2, CV_BGR2GRAY);

int iAvg1 = 0, iAvg2 = 0;
int arr1[64], arr2[64];
for (int i = 0; i < 8; i++) {
    uchar* data1 = matDst1.ptr<uchar>(i);   // row pointers
    uchar* data2 = matDst2.ptr<uchar>(i);
    int tmp = i * 8;
    for (int j = 0; j < 8; j++) {
        int tmp1 = tmp + j;
        arr1[tmp1] = data1[j] / 4 * 4;      // quantize to 64 gray levels
        arr2[tmp1] = data2[j] / 4 * 4;
        iAvg1 += arr1[tmp1];
        iAvg2 += arr2[tmp1];
    }
}
iAvg1 /= 64;
iAvg2 /= 64;

for (int i = 0; i < 64; i++) {
    arr1[i] = (arr1[i] >= iAvg1) ? 1 : 0;
    arr2[i] = (arr2[i] >= iAvg2) ? 1 : 0;
}

int iDiffNum = 0;
for (int i = 0; i < 64; i++)
    if (arr1[i] != arr2[i])
        ++iDiffNum;
cout << iDiffNum << endl;
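Since the question actually asks about NCC rather than perceptual hashing: a minimal normalized cross-correlation sketch for two equal-sized grayscale patches might look like the code below (plain arrays, no OpenCV; within OpenCV itself, cv::matchTemplate with a normalized method such as TM_CCOEFF_NORMED covers this case):

```cpp
#include <cmath>
#include <cstddef>

// Zero-mean normalized cross-correlation (NCC) of two patches of n
// pixels each. Returns a value in [-1, 1]; 1 means the patches match
// perfectly up to brightness and contrast changes.
// Assumes neither patch is constant (nonzero variance).
double ncc(const double* x, const double* y, std::size_t n) {
    double ux = 0.0, uy = 0.0;                       // means
    for (std::size_t i = 0; i < n; ++i) { ux += x[i]; uy += y[i]; }
    ux /= n; uy /= n;

    double num = 0.0, sx = 0.0, sy = 0.0;
    for (std::size_t i = 0; i < n; ++i) {
        num += (x[i] - ux) * (y[i] - uy);            // cross term
        sx  += (x[i] - ux) * (x[i] - ux);            // energy of x
        sy  += (y[i] - uy) * (y[i] - uy);            // energy of y
    }
    return num / std::sqrt(sx * sy);
}
```

Because the means are subtracted and the result is normalized by the patch energies, a patch compared with a brightened or contrast-stretched copy of itself still scores 1, which is what makes NCC attractive for template matching.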

Urgent! Looking for MATLAB code to compute the similarity of two binary images!

Images 1 and 2 are RGB images placed in the m folder; if the inputs are already binary, im2bw is not needed. pio is the similarity ratio:

I1 = imread('1.jpg');
I2 = imread('2.jpg');
I1_bw = im2bw(I1);   % binarize
I2_bw = im2bw(I2);
[h, w] = size(I1_bw);   % image height and width
cnt = 0;   % counter (avoids shadowing the built-in sum)
for i = 1:h
    for j = 1:w
        if I1_bw(i,j) == I2_bw(i,j)   % point-by-point comparison
            cnt = cnt + 1;
        end
    end
end
pio = double(cnt) / h / w;   % fraction of matching pixels

How to judge the similarity of two pictures in C#

It is quite involved and computationally heavy; in general this falls into the realm of artificial intelligence, unless the "two similar pictures" come with many preconditions, such as the same resolution, black and white only, simple geometric figures, and so on.

In that case you can use a basic algorithm to compute "similarity": as mentioned above, read the pixels of the two photos and traverse them, comparing the grayscale difference.
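That baseline (same resolution, traverse the pixels, compare gray levels) can be sketched as follows, assuming two equal-sized grayscale images stored as flat byte buffers, with an optional per-pixel tolerance:

```cpp
#include <cstddef>
#include <cstdlib>

// Fraction of pixels whose gray levels differ by at most `tol`,
// for two equal-sized grayscale images stored as flat byte arrays.
double pixelSimilarity(const unsigned char* a, const unsigned char* b,
                       std::size_t n, int tol = 0) {
    std::size_t same = 0;
    for (std::size_t i = 0; i < n; ++i)
        if (std::abs(static_cast<int>(a[i]) - static_cast<int>(b[i])) <= tol)
            ++same;
    return static_cast<double>(same) / n;   // 1.0 means identical
}
```

This is the same idea as the MATLAB snippet earlier in the post, just generalized from exact equality to a tolerance band, which makes it slightly more robust to noise.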

There are many ready-made algorithms for this, and many websites provide such calculations (you can call their APIs directly), but they only return a numeric "similarity". If what you want is the pixel-level difference between two pictures, you can find an algorithm that computes it.

Take a look at this website for reference: it is a relatively well-known image-processing site abroad. But what if the two images are different sizes? What if the aspect ratios differ? What if there is blank space? What about color images?

So the most mature algorithms at present recognize letters and digits (for example, Google can recognize house numbers and street signs in photos), and face recognition only needs a few sampled features to roughly judge similarity (eye size, nose-bridge height, cheek width and thinness, and their proportions). Fully comparing whether two pictures are the same by the standard of the human eye is very difficult, and there is no mature technology for it yet.

How can C# find the most similar picture using a grayscale similarity algorithm?

This kind of image search can use the perceptual hash algorithm:

1. Reduce the image size to 8×8, 64 pixels in total. This removes differences in image size and aspect ratio, keeping only basic information such as structure and light/shade.
2. Convert the reduced image to a 64-level grayscale image.
3. Calculate the average gray value of all pixels in the image.
4. Compare each pixel's gray level with the average: greater than or equal to the average is recorded as 1, less than the average as 0.
5. Compute the hash: combine the results of the previous step into a 64-bit binary integer; this is the picture's fingerprint.
6. Compare fingerprints: once you have a picture's fingerprint, you can compare it with those of other pictures by counting how many of the 64 bits differ. If no more than 5 bits differ, the two pictures are very similar; if more than 10 differ, they are two different pictures.
The specific C# code is as follows:

using System;
using System.IO;
using System.Text;
using System.Drawing;

namespace SimilarPhoto
{
    class SimilarPhoto
    {
        Image SourceImg;

        public SimilarPhoto(string filePath) { SourceImg = Image.FromFile(filePath); }
        public SimilarPhoto(Stream stream) { SourceImg = Image.FromStream(stream); }

        public String GetHash()
        {
            Image image = ReduceSize();
            Byte[] grayValues = ReduceColor(image);
            Byte average = CalcAverage(grayValues);
            return ComputeBits(grayValues, average);
        }

        // Step 1 : Reduce size to 8*8
        private Image ReduceSize(int width = 8, int height = 8)
        {
            return SourceImg.GetThumbnailImage(width, height, () => false, IntPtr.Zero);
        }

        // Step 2 : Reduce color to grayscale
        private Byte[] ReduceColor(Image image)
        {
            Bitmap bitMap = new Bitmap(image);
            Byte[] grayValues = new Byte[image.Width * image.Height];
            for (int x = 0; x < image.Width; x++)
                for (int y = 0; y < image.Height; y++)
                {
                    Color color = bitMap.GetPixel(x, y);
                    grayValues[x * image.Height + y] =
                        (Byte)((color.R * 30 + color.G * 59 + color.B * 11) / 100);
                }
            return grayValues;
        }

        // Step 3 : Average gray value
        private Byte CalcAverage(Byte[] values)
        {
            int sum = 0;
            for (int i = 0; i < values.Length; i++)
                sum += values[i];
            return (Byte)(sum / values.Length);
        }

        // Steps 4-5 : Compare to the average and build the 64-bit fingerprint
        private String ComputeBits(Byte[] values, Byte average)
        {
            StringBuilder sb = new StringBuilder();
            foreach (Byte b in values)
                sb.Append(b >= average ? '1' : '0');
            return sb.ToString();
        }

        // Step 6 : Hamming distance between two fingerprints
        public static Int32 CalcSimilarDegree(string a, string b)
        {
            if (a.Length != b.Length) throw new ArgumentException();
            int count = 0;
            for (int i = 0; i < a.Length; i++)
                if (a[i] != b[i]) count++;
            return count;
        }
    }
}

How does Baidu image recognition work?

The principle of Baidu image recognition: image search like Baidu's or Google's is generally implemented algorithmically, in three steps. 1. Extract features from the target image. There are many algorithms for describing images; the most used are the SIFT descriptor, fingerprint algorithms, bundling features, hash functions, etc.

Different algorithms can also be designed for different kinds of images, such as extracting image features from local N-th order moments. 2. Encode the image feature information, and encode a large corpus of images into a lookup table.

For target images with a larger resolution, the image can be down-sampled first, reducing the amount of computation before feature extraction and encoding.

3. Similarity matching: use the target image's code to perform a global or local similarity calculation against the image database in the search engine; set a threshold according to the required robustness and keep the images with high similarity; finally, there is usually a step that filters for the best-matching pictures, which again relies on a feature detection algorithm.

Each of these steps is the subject of extensive algorithm research, drawing on mathematics, statistics, image coding, signal processing and other theory.

What do SSD and SAD mean in computer image processing?

SSD, the Sum of Squared Differences, is the sum of the squared differences between an estimate and the object being estimated; dividing it by the number of samples gives the Mean Squared Error (MSE).

SAD, the Sum of Absolute Differences, is the sum of the absolute values of the differences. This algorithm is often used in image block matching: the absolute differences between corresponding pixel values are summed to evaluate the similarity of two image blocks.

As you can see, this algorithm is very fast but not very accurate, and it is usually used for the preliminary screening stage of multi-stage processing.
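The two measures can be sketched side by side for two equal-sized blocks; lower values mean more similar blocks, and SSD penalizes large per-pixel differences more heavily than SAD:

```cpp
#include <cstddef>
#include <cstdlib>

// Sum of Absolute Differences between two blocks of n pixels each.
long sad(const unsigned char* a, const unsigned char* b, std::size_t n) {
    long s = 0;
    for (std::size_t i = 0; i < n; ++i)
        s += std::labs(static_cast<long>(a[i]) - static_cast<long>(b[i]));
    return s;
}

// Sum of Squared Differences; dividing by n would give the MSE.
long ssd(const unsigned char* a, const unsigned char* b, std::size_t n) {
    long s = 0;
    for (std::size_t i = 0; i < n; ++i) {
        long d = static_cast<long>(a[i]) - static_cast<long>(b[i]);
        s += d * d;
    }
    return s;
}
```

In block matching (e.g. stereo or motion estimation), one of these is evaluated at every candidate offset and the offset with the minimum score wins; SAD is preferred when speed matters because it needs no multiplications.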

 


Origin blog.csdn.net/aifamao6/article/details/127443753