Using OpenCV to implement face detection and face recognition, covering both traditional vision and deep learning methods (with complete code)

OpenCV implements face detection

To implement face recognition, face detection must be performed first: the position of the face in the image has to be located before any further step can be taken.

Reference links:
1. OpenCV face detection
2. [OpenCV-Python] 32. OpenCV face detection and recognition - face detection
3. [youcans image processing learning course] 23. Face detection: Haar cascade detection
4. OpenCV in action 5: LBP cascade classifier for face detection
5. Computer vision OpenCv learning series: Part 10, real-time face detection

OpenCV face detection method

In OpenCV, two kinds of features (that is, two methods) are mainly used for face detection: Haar features and LBP features. Haar-feature face detection is the most widely used. In addition, OpenCV also integrates deep learning methods for face detection.
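
Both feature types are used through the same cv2.CascadeClassifier interface; only the model file differs. A minimal sketch (the model paths assume the data folder layout of the OpenCV source tree, and the test image path is a placeholder):

import cv2 as cv

# Haar and LBP cascades share the same API; only the XML model file changes.
haar_detector = cv.CascadeClassifier("data/haarcascades/haarcascade_frontalface_default.xml")
lbp_detector = cv.CascadeClassifier("data/lbpcascades/lbpcascade_frontalface.xml")

img = cv.imread("face.jpg")  # placeholder image path
gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY)
print("Haar detections:", len(haar_detector.detectMultiScale(gray)))
print("LBP detections: ", len(lbp_detector.detectMultiScale(gray)))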

Face Detection Based on Haar Feature

Haar cascade detector pre-training model download

OpenCV ships trained classifiers in XML format for face detection. They can be found in the data folder under the sources folder of the OpenCV installation directory, or in the data folder of the OpenCV source code on GitHub:
https://github.com/opencv/opencv/tree/4.x/data

haarcascade_eye.xml, eyes
haarcascade_eye_tree_eyeglasses.xml, eyes with glasses
haarcascade_frontalcatface.xml, frontal cat face
haarcascade_frontalcatface_extended.xml, frontal cat face
haarcascade_frontalface_alt.xml, frontal face
haarcascade_frontalface_alt2.xml, frontal face
haarcascade_frontalface_alt_tree.xml, frontal face
haarcascade_frontalface_default.xml, frontal face
haarcascade_fullbody.xml, full body
haarcascade_lefteye_2splits.xml, left eye
haarcascade_license_plate_rus_16stages.xml, Russian license plate
haarcascade_lowerbody.xml, lower body
haarcascade_profileface.xml, profile face
haarcascade_righteye_2splits.xml, right eye
haarcascade_russian_plate_number.xml, Russian plate number
haarcascade_smile.xml, smile
haarcascade_upperbody.xml, upper body

Haar cascade classifier

The cascade classifier based on Haar features is an object detection method proposed by Paul Viola and Michael Jones in the paper "Rapid Object Detection using a Boosted Cascade of Simple Features".

The Haar cascade classifier uses the AdaBoost algorithm to learn, at each node of the cascade, a multi-layer classifier with a high detection rate and a low rejection rate. Its characteristics are:

  • Haar-like input features: threshold the sums or differences of rectangular image regions.
  • Integral image: compute rectangular pixel sums (including 45°-rotated regions) in constant time, which speeds up the evaluation of the Haar-like features (see the sketch after this list).
  • Statistical boosting: build binary (face/non-face) classifier nodes with a high pass rate and a low rejection rate.
  • Organize the boosted classifier nodes into a screening (rejection) cascade.
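
As a toy illustration of the integral-image idea (not OpenCV's internal code): once the integral image is built, the pixel sum of any upright rectangle costs only four array lookups, which is what makes evaluating thousands of Haar-like features per window affordable.

import numpy as np
import cv2 as cv

img = np.arange(1, 26, dtype=np.uint8).reshape(5, 5)
ii = cv.integral(img)  # shape (6, 6); ii[y, x] = sum of img[:y, :x]

def box_sum(ii, x, y, w, h):
    # Sum of the w-by-h rectangle whose top-left corner is (x, y): 4 lookups.
    return int(ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x])

print(box_sum(ii, 1, 1, 3, 2))   # 63, via the integral image
print(img[1:3, 1:4].sum())       # 63, computed directly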

The boosting classifier at each level passes almost all face windows while rejecting a portion of the non-face windows, and hands the surviving windows on to the next classifier. Proceeding this way, the final classifier has rejected almost all non-face windows, leaving only face windows. A detection window is therefore judged to contain a face only if it passes the boosting classifiers at every level.

In practical applications the input image is large, so detection must be performed over multiple regions and multiple scales. Multi-region means traversing different positions of the picture; multi-scale means detecting faces of different sizes in the picture (see the sketch below).
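
As a small illustration (not OpenCV's internal code), the sequence of detection window sizes implied by scaleFactor=1.1 between minSize and maxSize looks like this:

def window_sizes(min_size=30, max_size=300, scale_factor=1.1):
    # Each scale step enlarges the search window by scale_factor.
    size = float(min_size)
    while size <= max_size:
        yield int(size)
        size *= scale_factor

print(list(window_sizes()))  # 30, 33, 36, 39, ... up to 300
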
The Haar cascade face detector mainly exploits structural features of the face:
1) The eye regions are darker than the cheeks.
2) The bridge of the nose is brighter than the eyes.
3) The positions of the eyes, mouth, and nose are relatively fixed.
The light-dark relationships of these rectangular regions form discriminative features for each part of the face. For example, in the image below, the first feature measures the difference in intensity between the eye region and the upper cheeks, and the second feature relies on the eyes being darker than the bridge of the nose.
Haar face detection achieves a high detection rate on frontal faces, but its performance on profile (side) faces is poor.

OpenCV-Python implementation

Steps to detect faces in pictures using Haar cascade detector:

(1) Create a CascadeClassifier cascade classifier object and load the cascade classifier model from the .xml file.
(2) Read the picture to be detected.
(3) Use the detectMultiScale() method to detect the picture and return the bounding rectangle of the detected face or eye.
(4) Draw the detected bounding rectangle on the detection picture.
OpenCV defines the cascade classifier class cv::CascadeClassifier. In Python, use the interface function cv2.CascadeClassifier() to create a classifier from a model file, and the member function cv2.CascadeClassifier.detectMultiScale() to perform object detection on an image.

import cv2
classifier = cv2.CascadeClassifier(filename)
objects = classifier.detectMultiScale(image[, scaleFactor=1.1, minNeighbors=3, flags=0, minSize=(w, h), maxSize=(w, h)])

Parameter Description:

  • filename: the path and name of the classifier model file loaded by cv2.CascadeClassifier(), a string. The cascade classifier model file has the extension .xml.
  • image: The input image to be detected, in CV_8U format.
  • scaleFactor: The scaling factor of the search window, the default value is 1.1.
  • minNeighbors: Indicates the minimum number of adjacent rectangles that constitute the detection target, and the default value is 3.
  • flags: version compatibility flag, the default value is 0.
  • minSize: the minimum size of the detection target, tuple (w, h); smaller candidates are ignored.
  • maxSize: the maximum size of the detection target, tuple (w, h); larger candidates are ignored.

return value

  • objects: the return value, the rectangular bounding boxes of the detected targets, a NumPy array of shape (N, 4). Each row has 4 elements (x, y, width, height), giving the upper-left corner coordinates (x, y) and the width and height of the box.

Use a Haar cascade detector to detect faces in an image:

import numpy as np
import cv2 as cv

if __name__ == '__main__':
    # (6) Detect faces with a pre-trained Haar cascade classifier
    # Read the image to be detected
    img = cv.imread("../data/single.jpg")
    print(img.shape)

    # Load the pre-trained Haar cascade model
    model_path = "../data/haarcascade_frontalface_alt2.xml"
    face_detector = cv.CascadeClassifier(model_path)  # <class 'cv2.CascadeClassifier'>
    # Detect faces with the cascade classifier
    faces = face_detector.detectMultiScale(img, scaleFactor=1.1, minNeighbors=1,
                                           minSize=(30, 30), maxSize=(300, 300))
    print(faces.shape)  # e.g. (17, 4) if 17 candidates are found
    print(faces[0])  # (x, y, width, height)

    # Draw the face bounding boxes
    for x, y, width, height in faces:
        cv.rectangle(img, (x, y), (x + width, y + height), (0, 0, 255), 2, cv.LINE_8, 0)
    # Show the image
    cv.imshow("faces", img)
    cv.waitKey(0)
    cv.destroyAllWindows()

single face detection
Using a Haar cascade detector to detect human eyes in an image:
Eye detection works the same way as face detection, except that a different pre-trained model is used, such as haarcascade_eye.xml.
Since eyes are smaller than faces, the minSize parameter of detectMultiScale() is reduced (the code below uses minSize=(10, 10)).
The scaleFactor and minNeighbors parameters also affect the number of eyes detected.

import cv2 as cv

if __name__ == '__main__':
    # (7) Detect eyes with a pre-trained Haar cascade classifier
    # Read the image to be detected
    img = cv.imread("./data/single.jpg")
    print(img.shape)

    # Load the pre-trained Haar cascade model
    model_path = "./data/haarcascade_eye.xml"
    eye_detector = cv.CascadeClassifier(model_path)  # <class 'cv2.CascadeClassifier'>
    # Detect eyes with the cascade classifier
    eyes = eye_detector.detectMultiScale(img, scaleFactor=1.1, minNeighbors=10,
                                           minSize=(10, 10), maxSize=(80, 80))
    # Draw the eye bounding boxes
    for x, y, width, height in eyes:
        cv.rectangle(img, (x, y), (x + width, y + height), (0, 0, 255), 2, cv.LINE_8, 0)
    # Show the image
    cv.imshow("Haar_Cascade", img)
    # cv.imwrite("../images/imgSave3.png", img)
    cv.waitKey(0)
    cv.destroyAllWindows()

Figure: single face and eye detection (scaleFactor=1.1, minNeighbors=10)
Figure: single face and eye detection (scaleFactor=1.1, minNeighbors=5)
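
Since these two parameters dominate the result, a quick hedged helper for tuning is to sweep minNeighbors and watch the detection count (reusing the image and model paths from the example above):

import cv2 as cv

img = cv.imread("./data/single.jpg")
eye_detector = cv.CascadeClassifier("./data/haarcascade_eye.xml")

# Sweep minNeighbors and print how many eye candidates survive each setting.
for mn in (1, 3, 5, 10, 20):
    eyes = eye_detector.detectMultiScale(img, scaleFactor=1.1, minNeighbors=mn,
                                         minSize=(10, 10), maxSize=(80, 80))
    print("minNeighbors=%2d -> %d detections" % (mn, len(eyes)))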

Simultaneous detection of faces and eyes using a Haar cascade detector
To improve efficiency, the face can be detected first and the eyes then searched for only within the face window; this improves both detection speed and detection accuracy.

import cv2 as cv

if __name__ == '__main__':
    # (8) Detect faces and eyes with pre-trained Haar cascade classifiers
    # Read the image to be detected
    img = cv.imread("./data/multiface1.jpeg")
    print(img.shape)

    # Load the pre-trained Haar cascade models
    face_path = "./data/haarcascade_frontalface_alt2.xml"  # face detector
    face_detector = cv.CascadeClassifier(face_path)  # <class 'cv2.CascadeClassifier'>
    eye_path = "./data/haarcascade_eye.xml"  # eye detector
    eye_detector = cv.CascadeClassifier(eye_path)  # <class 'cv2.CascadeClassifier'>
    # Detect faces with the cascade classifier
    faces = face_detector.detectMultiScale(img, scaleFactor=1.1, minNeighbors=5,
                                           minSize=(30, 30), maxSize=(300, 300))
    print(faces.shape)  # e.g. (15, 4)

    # Draw the face bounding boxes
    for x, y, width, height in faces:
        cv.rectangle(img, (x, y), (x + width, y + height), (0, 0, 255), 2, cv.LINE_8, 0)
        # Detect eyes within the face region
        roi = img[y:y + height, x:x + width]  # extract the face
        # Detect eyes
        eyes = eye_detector.detectMultiScale(roi, scaleFactor=1.1, minNeighbors=1,
                                             minSize=(2, 2), maxSize=(80, 80))
        # Draw the eye boxes (eye coordinates are relative to the ROI)
        for ex, ey, ew, eh in eyes:
            cv.rectangle(img, (x+ex, y+ey), (x+ex+ew, y+ey+eh), (255, 0, 0), 2)

    # Show the image
    cv.imshow("Haar_Cascade", img)
    # cv.imwrite("../images/imgSave4.png", img)
    cv.waitKey(0)
    cv.destroyAllWindows()


The detection results for relatively normal frontal faces are quite good (the parameters of the detectMultiScale() method need to be tuned by hand), but the results for profile faces, tilted faces, and small faces are not ideal. Also, eye detection can only draw bounding boxes; it does not give key points. I am considering converting these eye detections into key points for face alignment, as sketched below, and will evaluate the effect later.
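
A rough sketch of that idea, under the assumption that the two largest eye boxes belong to the two eyes: take the box centers as key points, compute the angle of the line between them, and rotate the face so the eyes become horizontal (align_by_eyes is a hypothetical helper, not part of OpenCV):

import numpy as np
import cv2 as cv

def align_by_eyes(face_img, eye_boxes):
    # eye_boxes: at least two (x, y, w, h) tuples from the Haar eye detector.
    # Keep the two largest boxes and use their centers as eye key points.
    boxes = sorted(eye_boxes, key=lambda b: -b[2] * b[3])[:2]
    centers = sorted([(x + w / 2, y + h / 2) for x, y, w, h in boxes])
    (lx, ly), (rx, ry) = centers  # left and right eye, ordered by x
    # Angle of the eye line; rotating by it makes the eyes horizontal.
    angle = np.degrees(np.arctan2(ry - ly, rx - lx))
    h, w = face_img.shape[:2]
    M = cv.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    return cv.warpAffine(face_img, M, (w, h))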

Face Detection Based on Deep Learning

OpenCV's Deep Neural Network (DNN) module provides a face detector based on deep learning. The DNN module can load models from popular deep learning frameworks, including Caffe, TensorFlow, Torch, and Darknet.
OpenCV provides two pre-trained face detection models: Caffe and TensorFlow models.
The Caffe model needs to load the following two files:

  • deploy.prototxt: a configuration file that defines the model structure
  • res10_300x300_ssd_iter_140000_fp16.caffemodel: the trained model file containing the actual layer weights

The TensorFlow model needs to load the following two files:

  • opencv_face_detector_uint8.pb: the trained model file containing the actual layer weights
  • opencv_face_detector.pbtxt: a configuration file that defines the model structure

The model configuration files are provided in the "\samples\dnn\face_detector" folder of the OpenCV source code, but the trained model files are not. You can run the download_models.py script in that folder to download the two trained model files, or download them directly from the official link https://github.com/spmallick/learnopencv/find/master (this link can be difficult to download from), or from the Gitee mirror: OpenCV School / OpenCV course materials, https://gitee.com/opencv_ai/opencv_tutorial_data (Gitee comes through at critical moments; type "Age" into the repository search box to locate the files, otherwise you won't find them).

Performing face detection using a pre-trained model mainly involves the following steps:
(1) Call the cv2.dnn.readNetFromCaffe() or cv2.dnn.readNetFromTensorflow() function to load the model and create a detector.
(2) Call the cv2.dnn.blobFromImage() function to convert the image to be detected into image block data.
(3) Call the setInput() method of the detector to set the image block data as the input data of the model.
(4) Call the forward() method of the detector to perform calculations and obtain prediction results.
(5) Keep the predictions whose confidence is higher than a specified threshold as detection results, mark the faces in the original image, and output the confidence for reference.

Single image detection

# Deep-learning-based face detection (single image)
import cv2
import numpy as np


# dnnnet = cv2.dnn.readNetFromCaffe("deploy.prototxt", "res10_300x300_ssd_iter_140000_fp16.caffemodel")
dnnnet = cv2.dnn.readNetFromTensorflow("./data/opencv_face_detector_uint8.pb", "./data/opencv_face_detector.pbtxt")

img = cv2.imread("./data/multiface1.jpeg")
h, w = img.shape[:2]
blobs = cv2.dnn.blobFromImage(img, 1.0, (300, 300), [104., 117., 123.], False, False)
dnnnet.setInput(blobs)
detections = dnnnet.forward()
faces = 0
for i in range(0, detections.shape[2]):
    confidence = detections[0, 0, i, 2]
    if confidence > 0.6:
        faces += 1
        box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
        x1, y1, x2, y2 = box.astype("int")
        y = y1 - 10 if y1 - 10 > 10 else y1 + 10  # keep the label inside the image
        text = "%.3f" % (confidence * 100) + '%'
        cv2.rectangle(img, (x1, y1), (x2, y2), (255, 0, 0), 2)
        cv2.putText(img, text, (x1, y), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 1)
cv2.imshow('faces',img)
cv2.waitKey(0)
cv2.destroyAllWindows()

By comparison, the deep learning model performs noticeably better.

video detection

import cv2
import numpy as np


# On Windows 7, pass cv2.CAP_DSHOW to every cv2.VideoCapture call, otherwise an error is raised
capture = cv2.VideoCapture(0, cv2.CAP_DSHOW)
frame_width = capture.get(cv2.CAP_PROP_FRAME_WIDTH)
frame_height = capture.get(cv2.CAP_PROP_FRAME_HEIGHT)
fps = capture.get(cv2.CAP_PROP_FPS)

dnnnet = cv2.dnn.readNetFromTensorflow("./data/opencv_face_detector_uint8.pb", "./data/opencv_face_detector.pbtxt")

if not capture.isOpened():
    print('CAMERA ERROR !')
    exit(0)

while capture.isOpened():

    ret, frame = capture.read()

    if ret:

        # cv2.imshow('FRAME', frame)  # show the captured frame

        h, w = frame.shape[:2]
        blobs = cv2.dnn.blobFromImage(frame, 1.0, (300, 300), [104., 117., 123.], False, False)
        dnnnet.setInput(blobs)
        detections = dnnnet.forward()
        faces = 0
        for i in range(0, detections.shape[2]):
            confidence = detections[0, 0, i, 2]
            if confidence > 0.6:
                faces += 1
                box = detections[0, 0, i, 3:7] * np.array([w, h, w, h])
                x1, y1, x2, y2 = box.astype("int")
                y = y1 - 10 if y1 - 10 > 10 else y1 + 10
                text = "%.3f" % (confidence * 100) + '%'
                cv2.rectangle(frame, (x1, y1), (x2, y2), (255, 0, 0), 2)
                cv2.putText(frame, text, (x1, y), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 1)

        cv2.imshow('faces', frame)
        k = cv2.waitKey(1)
        if k == ord('q'):
            break
    else:
        break
capture.release()
cv2.destroyAllWindows()


Holding a phone displaying face photos in front of the camera, you can see that some faces are still missed; adding liveness detection could be considered later.

Comparison of traditional vision methods and deep learning methods

For the same video file, compare the detection speed and the total number of detections:
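
For reference, a hedged sketch of how such a comparison can be measured: run both detectors over every frame of the same video, accumulating wall time and the total number of detections ("test.mp4" is a placeholder path):

import time
import cv2

haar = cv2.CascadeClassifier("./data/haarcascade_frontalface_alt2.xml")
dnn = cv2.dnn.readNetFromTensorflow("./data/opencv_face_detector_uint8.pb",
                                    "./data/opencv_face_detector.pbtxt")

def run_haar(frame):
    return len(haar.detectMultiScale(frame, 1.1, 5, minSize=(30, 30)))

def run_dnn(frame):
    blob = cv2.dnn.blobFromImage(frame, 1.0, (300, 300), [104., 117., 123.], False, False)
    dnn.setInput(blob)
    det = dnn.forward()
    return int((det[0, 0, :, 2] > 0.6).sum())  # count confident detections

for name, detect in (("Haar", run_haar), ("DNN", run_dnn)):
    cap = cv2.VideoCapture("test.mp4")
    total, t0 = 0, time.time()
    while True:
        ret, frame = cap.read()
        if not ret:
            break
        total += detect(frame)
    cap.release()
    print("%s: %d detections in %.1f s" % (name, total, time.time() - t0))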

OpenCV implements face recognition

Reference links:
1. OpenCV face recognition (main code reference)
2. [OpenCV-Python] 33. OpenCV face detection and recognition - face recognition
I made some modifications to the referenced code to fit my own project requirements.

OpenCV provides three face recognition algorithms:

EigenFaces face recognition is implemented with PCA (Principal Component Analysis). It extracts the principal components of the face dataset and computes a divergence score (roughly 0 to 20,000) for the image region to be recognized relative to the dataset; the smaller the value, the smaller the difference, and 0 means an exact match. Values below about 4,000 to 5,000 can be considered fairly reliable matches.
The basic steps of EigenFaces face recognition are as follows:
(1) Call the cv2.face.EigenFaceRecognizer_create() method to create an EigenFace recognizer.
(2) Call the recognizer's train() method to train the model using known images.
(3) Call the predict() method of the recognizer to use the unknown image for recognition and confirm its identity.
The basic format of the cv2.face.EigenFaceRecognizer_create() function is as follows:

recognizer = cv2.face.EigenFaceRecognizer_create([num_components[, threshold]])

# recognizer is the returned EigenFaces recognizer object
# num_components is the number of components kept in the analysis; the default 0 means it is determined by the actual input
# threshold is the threshold used during face recognition

The basic format of the train() method of the EigenFaces recognizer is as follows:

recognizer.train(src, label)

# src is the array of known images used for training; all images must be grayscale and the same size
# label is the array of labels, corresponding one-to-one with the faces in the image array; faces of the same person must be given the same label value

The basic format of the predict() method of the EigenFaces recognizer is as follows:

label, confidence = recognizer.predict(test_img)

# label is the returned label value
# confidence is the returned confidence score, i.e. the distance between the unknown face and the known faces in the model; 0 means a perfect match, and values below about 5000 can be considered reliable matches
# test_img is the unknown face image; it must be grayscale and the same size as the training images

FisherFaces face recognition is a development of PCA that uses Linear Discriminant Analysis (LDA); the computation is more complex, and it tends to give more accurate results. Values below about 4,000 to 5,000 can be considered fairly reliable matches.
The basic steps of FisherFaces face recognition are as follows:
(1) Call the cv2.face.FisherFaceRecognizer_create() method to create a FisherFaces recognizer.
(2) Call the recognizer's train() method to train the model using known images.
(3) Call the predict() method of the recognizer to use the unknown image for recognition and confirm its identity.
In OpenCV, the cv2.face.EigenFaceRecognizer and cv2.face.FisherFaceRecognizer classes are both subclasses of the cv2.face.BasicFaceRecognizer, cv2.face.FaceRecognizer, and cv2.Algorithm classes; the corresponding xxx_create(), train(), and predict() methods have the same format and usage.
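
For completeness, a minimal FisherFaces sketch in the same style (requires opencv-contrib-python; random arrays stand in for real grayscale face images, and LDA needs at least two distinct classes to train):

import cv2
import numpy as np

X = [np.random.randint(0, 255, (200, 200), dtype=np.uint8) for _ in range(4)]
y = np.array([0, 0, 1, 1], dtype=np.int32)  # two classes, two samples each

model = cv2.face.FisherFaceRecognizer_create()
model.train(X, y)
label, confidence = model.predict(X[0])
print(label, confidence)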

Local Binary Patterns Histograms (LBPH) face recognition divides a face into small cells and compares each cell with the corresponding cell in the model, producing a histogram of matching values for each region. This allows the detected face region to differ in shape and size from the images in the dataset, which is more convenient and flexible. A confidence value below 50 is considered a good match, while a value above 80 is considered poor.
The basic principle of the LBP operator is as follows (see the sketch after these steps):
(1) Take the 8 pixels surrounding a pixel x and compare each with x: if the neighbor's value is greater than or equal to that of x, take 1, otherwise take 0. Concatenating the 0s and 1s of the 8 neighbors gives an 8-bit binary number, which is converted to decimal and used as the LBP value of pixel x.
(2) Process every pixel of the image the same way to obtain the LBP image; the histogram of this image (computed per grid cell) is the LBPH of the image.
The basic steps of LBPH face recognition are as follows:
(1) Call the cv2.face.LBPHFaceRecognizer_create() method to create an LBPH recognizer.
(2) Call the recognizer's train() method to train the model with known images.
(3) Call the recognizer's predict() method to recognize an unknown image and confirm its identity.
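
A NumPy sketch of the basic 3×3 LBP operator described in step (1), for illustration only (the LBPH recognizer computes this internally, with a configurable radius and neighbor count):

import numpy as np

def lbp_image(gray):
    # Each pixel is replaced by an 8-bit code built from comparisons with
    # its 8 neighbors (neighbor >= center -> bit 1).
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # 8 neighbor offsets, traversed clockwise from the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    center = gray[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = gray[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        out |= ((neighbor >= center).astype(np.uint8) << bit)
    return out

gray = np.random.randint(0, 255, (6, 6), dtype=np.uint8)
print(lbp_image(gray))
# An LBPH model is then the concatenation of the per-grid-cell histograms of these codes.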





The basic format of the cv2.face.LBPHFaceRecognizer_create() function is as follows:

recognizer = cv2.face.LBPHFaceRecognizer_create([radius[, neighbors[, grid_x[, grid_y[, threshold]]]]])

# recognizer is the returned LBPH recognizer object
# radius is the radius of the neighborhood
# neighbors is the number of sample points in the neighborhood, default 8
# grid_x is the number of cells in the horizontal direction when the LBP image is divided into cells, default 8
# grid_y is the number of cells in the vertical direction when the LBP image is divided into cells, default 8
# threshold is the threshold used during face recognition

The basic format of the train() method of the LBPH recognizer is as follows:

recognizer.train(src, label)

# src is the array of known images used for training; all images must be grayscale and the same size
# label is the array of labels, corresponding one-to-one with the faces in the image array; faces of the same person must be given the same label value

The basic format of the predict() method of the LBPH recognizer is as follows:

label, confidence = recognizer.predict(test_img)

# label is the returned label value
# confidence is the returned confidence score, i.e. the distance between the unknown face and the known faces in the model; 0 means a perfect match, and values below about 50 can be considered very reliable matches
# test_img is the unknown face image; it must be grayscale and the same size as the training images

Make a dataset

Whichever algorithm is used, a training set is required, and it is efficient to build one from videos or animated GIFs: you can download them from the Internet or write a camera-capture program to collect them. In this experiment, I downloaded a few GIF animations of celebrities, split them frame by frame, detected the face region with OpenCV's Haar cascade, resized every face region to a 200×200 grayscale image, and saved it into the corresponding folder to build the training set.

from PIL import Image
import os
import cv2
import numpy as np

# Split a GIF animation into frames
def gifSplit2Array(gif_path):
    img = Image.open(os.path.join(path, gif_path))  # note: uses the global variable `path`
    for i in range(img.n_frames):
        img.seek(i)
        new = Image.new("RGBA", img.size)
        new.paste(img)
        arr = np.array(new).astype(np.uint8)
        yield arr[:, :, 2::-1]  # reverse channels (RGB to BGR) and drop alpha, for use with OpenCV


# Face detection
def face_generate(img):
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    front_face_cascade = cv2.CascadeClassifier('./data/haarcascade_frontalface_alt2.xml')  # frontal face detector
    faces0 = front_face_cascade.detectMultiScale(gray, 1.02, 5)
    eye_cascade = cv2.CascadeClassifier('./data/haarcascade_eye_tree_eyeglasses.xml')  # eye detector
    for (x, y, w, h) in faces0:
        face_area = gray[y: y + h, x: x + w]  # candidate face region
        quasi_eyes = eye_cascade.detectMultiScale(face_area, 1.03, 5, 0)  # detect eyes inside the face region
        if len(quasi_eyes) == 0: continue
        quasi_eyes = tuple(
            filter(lambda e: e[2] / w > 0.18 and e[1] < 0.5 * h, quasi_eyes))  # size filter: ew/w > 0.18, and the eyes must be in the upper half of the face
        if len(quasi_eyes) <= 1: continue
        yield cv2.resize(face_area, (200, 200))


# Build the dataset
def get_dataset(path, gif_list):
    i = 0
    all_items = os.listdir(path)
    print(all_items)
    for item in all_items:
        name = item.split('-')[0]
        name_path = os.path.join(path, name)
        if not os.path.exists(name_path):
            os.mkdir(name_path)

    for gif in gif_list:
        print(gif)
        for img in gifSplit2Array(gif):
            for face in face_generate(img):
                cv2.imwrite("./dataset/%s/%s.pgm" % (gif.split('-')[0], i), face)
                # print(i)
                i += 1



if __name__ == '__main__':
    path = './dataset'
    gif_list = ["Yangmi-1.gif", "Yangmi-2.gif", "Yangmi-3.gif", "Liushishi-1.gif", "Liushishi-2.gif", "Liushishi-3.gif"]
    get_dataset(path, gif_list)

Processing result

Load the dataset

Put all the data into a single ndarray:

def load_dataset(datasetPath):
    names = []  # subfolder names, one per person
    X = []      # face images
    y = []      # integer label for each image
    ID = 0
    for name in os.listdir(datasetPath):
        subpath = os.path.join(datasetPath, name)
        if os.path.isdir(subpath):
            names.append(name)
            for file in os.listdir(subpath):
                # read each face image as grayscale
                im = cv2.imread(os.path.join(subpath, file), cv2.IMREAD_GRAYSCALE)
                X.append(np.asarray(im, dtype=np.uint8))
                y.append(ID)
            ID += 1  # the next person gets the next label
    X = np.asarray(X)
    y = np.asarray(y, dtype=np.int32)
    return X, y, names

Train the model on the dataset

X, y, names = load_dataset(path)
# If the face module cannot be found, only the main OpenCV package is installed:
# pip uninstall opencv-python, then pip install opencv-contrib-python
# Create the face recognition model (three recognition algorithms)
# model = cv2.face.EigenFaceRecognizer_create()  # createEigenFaceRecognizer() is deprecated
# model = cv2.face.FisherFaceRecognizer_create()
model = cv2.face.LBPHFaceRecognizer_create()
model.train(X, y)
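
To avoid retraining on every run, the trained recognizer can be saved to disk and reloaded later with the FaceRecognizer write()/read() methods (available in opencv-contrib-python):

# Save the trained model to a YAML file.
model.write("lbph_model.yml")

# Later, or in another script: recreate the recognizer and load the weights.
model2 = cv2.face.LBPHFaceRecognizer_create()
model2.read("lbph_model.yml")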

Single image test

Note: Change the last line of the face_generate() function to

yield cv2.resize(face_area, (200, 200)), x, y, w, h

Start testing:

    path = './dataset'
    infer_path = './data/Yangmi.jpeg'
    # gif_list = ["Yangmi-1.gif", "Yangmi-2.gif", "Yangmi-3.gif", "Liushishi-1.gif", "Liushishi-2.gif", "Liushishi-3.gif"]
    # get_dataset(path, gif_list)
    X, y, names = load_dataset(path)
    # If the face module cannot be found, only the main OpenCV package is installed:
    # pip uninstall opencv-python, then pip install opencv-contrib-python
    # Create the face recognition model (three recognition algorithms)
    # model = cv2.face.EigenFaceRecognizer_create()  # createEigenFaceRecognizer() is deprecated
    # model = cv2.face.FisherFaceRecognizer_create()
    model = cv2.face.LBPHFaceRecognizer_create()
    model.train(X, y)
    img = cv2.imread(infer_path)
    for roi, x, y, w, h in face_generate(img):
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 0, 255), 2)  # draw a red box around the face
        ID_predict, confidence = model.predict(roi)  # predict!
        name = names[ID_predict]
        print("name:%s, confidence:%.2f" % (name, confidence))
        text = name if confidence < 70 else "unknown"  # threshold: ~10000 for EigenFaces, ~70 for LBPH
        cv2.putText(img, text, (x, y - 20), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)  # draw the name in green

    cv2.imshow('', img)
    cv2.waitKey(0)
    cv2.destroyAllWindows()

Test results:
Yang Mi face detection and recognition effect

Supplement: this implementation relies entirely on OpenCV's built-in methods. So far it only recognizes a single image, the dataset is fairly small, and the recognition performance in a real environment remains to be verified; no self-written face alignment step has been added yet. I hope to combine alignment later to improve the recognition results.

Source: blog.csdn.net/weixin_42149550/article/details/131474284