Digital Image Processing - Face Detection Based on MATLAB

One. Project Objectives

Feature extraction is a concept in computer vision and image processing. It refers to using a computer to extract image information and to determine, for each point of an image, whether it belongs to an image feature. Face recognition is a major application of feature extraction.

This project designs and implements a face recognition algorithm in MATLAB: a set of face images is used to train a color-based standard of facial features, which is then used to identify faces in new pictures. As an extension, the effect is also tested under different conditions, as well as on recognizing a particular face or some other objects.

Two. Project Principles

1. Feature extraction

The result of feature extraction is that the points of the image are divided into different subsets, which are often isolated points, continuous curves, or continuous regions. Feature extraction is a primary image processing operation, that is, it is the first arithmetic process applied to the image: it examines each pixel to determine whether that pixel represents a feature.

2. Color histogram method

Since many computer vision algorithms use feature extraction as a primary calculation step, a large number of feature extraction algorithms have been developed. This project adopts an algorithm based on color features: the color histogram.

The color histogram is a color feature widely used in many image retrieval systems. It describes the proportions of different colors in the entire image, but does not care about the spatial position of each color, i.e., it cannot describe the objects in the image. The color histogram is particularly suitable for describing images that are difficult to segment automatically.

3. Color quantization

To compute a color histogram, the color space must be divided into several small color intervals, each of which becomes a histogram bin. This process is called color quantization (Color Quantization). The color histogram is then obtained by counting the number of pixels falling into each interval. There are many color quantization methods, for example vector quantization, clustering, or neural network methods. The most common practice is to divide each component (dimension) of the color space evenly. By contrast, clustering algorithms take the distribution of the image's colors in feature space into account, which avoids bins with very sparse pixel counts and makes the quantization more efficient.
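As a small illustration of the uniform division described above, the following hypothetical MATLAB sketch (consistent in spirit with the indexing used later in feature_extract(), but standalone example code) maps a single RGB pixel to its histogram bin when each channel keeps only its top L bits:

```matlab
L = 3;                 % number of color bits kept per channel (an example value)
basic = 2^(8-L);       % width of each uniform interval, 256/2^L
pixel = [200 120 64];  % an example RGB pixel (uint8 values)

% quantize each channel to 0..2^L-1, then combine into one bin index
q = floor(pixel/basic);
index = sum(q .* (2.^(2*L:-L:0))) + 1;   % 1-based index in 1..2^(3L)
```

Counting how many pixels fall into each index and normalizing by the pixel count yields the color histogram.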

The quantization method described above can cause a problem. Imagine two color histograms that are almost identical but offset from each other by one bin; if we compute their similarity with the Euclidean or L1 distance, the similarity value will be small. To overcome this drawback, the similarity between similar but not identical colors needs to be considered. One method is to use a quadratic-form distance. Another method is to smooth the color histogram in advance, i.e., to let the pixels in each bin also contribute to several adjacent bins. In this way, similar but not identical colors also contribute to the similarity of the histograms.
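The smoothing idea can be sketched with a simple moving-average filter over the bins (a hypothetical one-dimensional illustration; a real 3D color histogram would need neighbors in color space rather than in the flattened index, and the project itself does not implement this step):

```matlab
h = [0; 0.5; 0.5; 0; 0; 0; 0; 0];  % a toy histogram of 8 bins
k = [0.25; 0.5; 0.25];             % each bin also contributes to its neighbors
hs = conv(h, k, 'same');           % smoothed histogram, same length as h
hs = hs / sum(hs);                 % renormalize to a probability distribution
```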

Selecting the appropriate number of color intervals (i.e., histogram bins) and the quantization method depends on the performance and efficiency requirements of the particular application. Generally, the larger the number of bins, the stronger the histogram's ability to distinguish colors. However, a histogram with a large number of bins not only increases the computational burden, but is also not conducive to building indexes for large image libraries. Moreover, for some applications, a very fine division of the color space does not necessarily improve retrieval results, particularly for applications that must tolerate mismatches between related images.

4. Project Approach

For this project, the 28 face photos under the Faces gallery directory are used as training samples for the standard face feature v. Then, for all the pixels of a color image, the frequency of occurrence of each color is calculated to obtain the feature u.

Standard training face V: after quantizing the image, feature vectors v of different lengths are established, the length being determined by the number of color bits L; L is taken as 3, 4, 5, and 6, giving four different v. When L = 3/4/5/6, only the first 3/4/5/6 bits of each uint8 color component are kept as the feature. Therefore, the larger L is, the more information each element of v contains; the smaller L is, the more colors each element of v corresponds to. For example, each element of v for L = 3 (i.e., the probability density of those colors) equals the sum of the probability densities of those elements of v for L = 4 whose first three bits of each RGB component are the same; the same holds for L = 5. The corresponding color histogram (i.e., the color probability distribution) is computed for each sample; after training 33 times and averaging, the standard V is finally obtained.
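The relationship between the features for different L can be checked directly: folding an L = 4 histogram down by dropping the fourth bit of each channel reproduces the L = 3 histogram. A hypothetical sketch (v4 here is random stand-in data, not the trained feature):

```matlab
v4 = rand(2^12,1); v4 = v4/sum(v4);   % stand-in for an L = 4 feature (4096 bins)
v3 = zeros(2^9,1);                    % the corresponding L = 3 feature (512 bins)
for idx = 0:2^12-1
    r = floor(idx/2^8);               % 4-bit R component
    g = floor(mod(idx,2^8)/2^4);      % 4-bit G component
    b = mod(idx,2^4);                 % 4-bit B component
    idx3 = floor(r/2)*2^6 + floor(g/2)*2^3 + floor(b/2);  % drop the 4th bit
    v3(idx3+1) = v3(idx3+1) + v4(idx+1);
end
```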

Obtaining the color feature U: the input image is processed and divided into blocks (similar to JPEG coding). For each image block R, its feature u(R) is calculated; the metric coefficient between u(R) and v is computed, and a threshold decides whether the block is a human face. Adjacent face blocks are then merged and identified together (surrounded by one box). Finally, the faces in the image are framed with small squares, achieving face recognition.

Three. Design Ideas

The algorithm steps are as follows:
1. Train the standard feature set v;
2. Process the input image and divide it into blocks (similar to JPEG coding);
3. For each image block R, calculate its feature u(R);
4. Calculate the metric coefficient between u(R) and v, and use a threshold to decide whether the block is a human face;
5. Merge adjacent face blocks and identify them together (surrounded by one box).

Four. Implementation

The algorithm was implemented and the following tests were carried out:

1. Training the feature standard

Observing the training samples in Faces, each picture is basically all face with only slight interference, so the training images can be regarded as consisting only of a human face. We therefore define the function feature_extract() to extract the feature of a single image, write the script train.m to carry out the training, and save the resulting standard feature v to the file face_standard.mat.

The key code of feature_extract() is as follows:

pic = double(reshape(pic,size(pic,1)*size(pic,2),1,3));
% the new array pic has dimensions (height*width) x 1 x 3 (one column per RGB channel)

v = zeros(2^(3*L),1); % feature vector array of dimension 2^(3L)
basic = 2^(8-L);
% initialization

for i = 1:size(pic,1)
    a = [pic(i,1,1),pic(i,1,2),pic(i,1,3)]; % take out one pixel
    index = sum(floor(a/basic).*(2.^(2*L:-L:0)))+1;
    % compute the bin index corresponding to this color
    v(index) = v(index) + 1;             % count + 1
end
v = v/size(pic,1);
% count the occurrences of each color and convert counts to frequencies

Flowchart of feature_extract(): [figure omitted]
The key code of train.m is as follows:

for L = 3:6
    v{L-2} = zeros(2^(3*L),1);   % initialize the feature vector for this L
    for i = 1:num
        v{L-2} = v{L-2} + feature_extract('Faces/',strcat(num2str(i),'.bmp'),L);
    end
    v{L-2} = v{L-2}/num; % train on the num images and take the average
end
% the v array is obtained

Flowchart of train.m: [figure omitted]

The resulting v (L = 3): [figure omitted]

2. Processing and blocking the input image

The image is divided into square blocks of side length block_l (16 <= block_l <= 50, adjusted for different images), and the blocks are taken in row-by-row order.

Note that if the last block of a row or column is smaller than block_l x block_l, it is not padded to full size; the remaining pixels are simply taken as they are. The blocking code is simple and similar to the blocking in JPEG encoding, so it is not repeated here; see the face_detect() function.
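Since the blocking code is omitted, here is a hypothetical sketch of the row-by-row blocking without padding (variable names are illustrative, not taken from face_detect()):

```matlab
img = imread('test.jpg');   % an example input
block_l = 32;               % block side length (16 <= block_l <= 50)
[h, w, ~] = size(img);
for i = 1:ceil(h/block_l)
    for j = 1:ceil(w/block_l)
        r2 = min(i*block_l, h);   % the last block of a row/column
        c2 = min(j*block_l, w);   % may be smaller than block_l
        block = img((i-1)*block_l+1 : r2, (j-1)*block_l+1 : c2, :);
        % ... compute u(block) and compare it with v here ...
    end
end
```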

3. Calculating the feature u of each image block

The feature of each block is computed in the same way as the feature of a single image during training, so the details are not repeated here.

4. Calculating the metric coefficient and detecting

The code is as follows:

u = feature_extract(block,L); % get the feature u of this image block
cor = sqrt(u)'*sqrt(v);       % compute the coefficient
if cor >= threshold           % if above the threshold
    identify(i,j) = 1;        % the block is a face
    rectangle('Position',...); % frame the block with a small red box
end
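The quantity sqrt(u)'*sqrt(v) is the Bhattacharyya coefficient of the two distributions: it equals 1 when u and v are identical and 0 when they do not overlap at all, which is why a single threshold works. A quick check with toy histograms:

```matlab
u = [0.5; 0.5; 0; 0];
v = [0.5; 0.5; 0; 0];
cor_same = sqrt(u)'*sqrt(v)   % 1 for identical distributions
w = [0; 0; 0.5; 0.5];
cor_diff = sqrt(u)'*sqrt(w)   % 0 for non-overlapping distributions
```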

Flowchart: [figure omitted]

However, the threshold must be set properly in advance to obtain good recognition results. For this purpose, a script was written to compare the performance of different threshold values (i.e., display the detection result for each), and the optimal threshold was then picked manually. The effect: [figure omitted]

5. Merging and identifying adjacent blocks

Adjacent face boxes are merged into one, while isolated small boxes are ignored. The code in face_detect.m was modified accordingly; the resulting effect is roughly as follows:
[figure omitted]
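One way to implement the merging (a hypothetical sketch, not the actual face_detect.m code; it assumes the Image Processing Toolbox functions bwlabel and regionprops):

```matlab
% identify: 0/1 matrix marking which blocks were classified as face
labels = bwlabel(identify, 8);                 % groups of adjacent face blocks
stats = regionprops(labels, 'BoundingBox', 'Area');
for k = 1:numel(stats)
    if stats(k).Area > 1                       % ignore isolated single blocks
        bb = stats(k).BoundingBox;             % [x y w h] in block units
        % approximate conversion from block units to pixel coordinates
        rectangle('Position', [(bb(1:2)-0.5)*block_l, bb(3:4)*block_l], ...
                  'EdgeColor', 'r');
    end
end
```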

6. Test

We then select a photo for testing: [figure omitted]
As can be seen, the human faces are detected, but the boxes are too large, and other exposed skin is also recognized as face. This is because the color histogram is based purely on color features, and the color of exposed skin on the body is similar to that of the face, so it may also be identified.

There are two possible solutions: one is to redefine the goal as recognizing the exposed skin of the human body; the other is to consider other methods, such as clustering in color space, and so on. In general, L = 6 seems to give somewhat better results.

Five. Extension Study

1. Effect of image transformations

The following transformations are applied to the image:
(a) rotate 90 degrees clockwise;
(b) keep the height constant and double the width (imresize);
(c) change the colors appropriately (imadjust).
The algorithm is then run on the transformed images.

The code is as follows:

img = imread('test.jpg');
[y,num,identify] = face_detect(img,L,thresholds(L-2));
% original image

img2 = rot90(img);
% rotate the original 90 degrees
[y,num,identify] = face_detect(img2,L,thresholds(L-2));

img3 = imresize(img,[size(img,1) 2*size(img,2)],'nearest');
% double the width of the original
[y,num,identify] = face_detect(img3,L,thresholds(L-2));

img4 = imadjust(img,[.2 .3 0; .6 .7 1],[]);
% change the color contrast and color range, i.e. change the colors
[y,num,identify] = face_detect(img4,L,thresholds(L-2));

The results are as follows:
[figure omitted]
As can be seen, rotating the image or changing its size has no effect on detection, but after the colors are changed the faces can no longer be detected. The reason is that the algorithm is trained on color; if the colors change, it cannot identify effectively. Since the color histogram is unrelated to rotation and size changes, the results under those operations are consistent with the original.
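The rotation invariance can be verified directly, since rotation only permutes pixel positions and the histogram ignores positions entirely (assuming the two-argument form of feature_extract() used in the detection code above):

```matlab
img = imread('test.jpg');
u1 = feature_extract(img, L);
u2 = feature_extract(rot90(img), L);
max(abs(u1 - u2))   % 0: the two color histograms are identical
```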

2. Extension

By enlarging the training set, the algorithm can be extended to recognize a certain race or a particular person's face. One can also consider identifying an animal such as a cat, or some other objects; the method is not limited to face recognition.

Six. References

1. Color-based feature extraction and recognition: https://blog.csdn.net/u012507022/article/details/51614851
2. Face recognition with MATLAB: https://ww2.mathworks.cn/discovery/face-recognition.html
3. MATLAB computer vision recognition: https://blog.csdn.net/baidu_34971492/article/details/78713367
4. Image feature extraction algorithms: https://blog.csdn.net/sinat_39372048/article/details/81461636
5. Digital Image Processing, Gonzalez, Electronic Industry Press


Origin blog.csdn.net/weixin_42784535/article/details/104672675