Face detection and 68-point positioning with OpenCV

Face comparison is a commonly used function now. For example, the face of a taxi driver is compared with the driver's license photo, and the face of an entrant in the access control system is compared with the face in the face library. To achieve face comparison, the first thing to be achieved is face detection. In a picture taken by the camera, the position of the face is correctly detected and the face is extracted.

Table of Contents

1 Principles

        1.1 68-point calibration and OpenCV drawing points

        1.2 Coding design ideas

        1.3 Introduction to OpenCV drawing functions

2 Environment

3 Experiment content

4 Detailed steps

        4.1 Face detection with OpenCV

        4.2 68-point positioning of the face


1 Principles

1.1 68-point calibration and OpenCV drawing points

Since it is free and open source, OpenCV is a good fit for this task.
Here we use the pre-trained face classifier that ships with OpenCV, the XML file haarcascade_frontalface_alt_tree.xml.
At the same time, we use Dlib's official facial landmark predictor "shape_predictor_68_face_landmarks.dat" for 68-point calibration, then use OpenCV for image processing to draw the 68 points on the face and label each one with its number.

Note: The OpenCV face classifier XML and the Dlib facial landmark predictor can be downloaded from
https://pan.baidu.com/s/1gZfYupoW9Zo_2lVV524cWA 
extraction code: w536 

68-point positioning of the face involves two parts: 68-point calibration and OpenCV drawing.

  • 68-point calibration: dlib provides a trained model that can locate 68 feature points on a face
  • OpenCV point drawing: the circle function cv2.circle() and the text function cv2.putText()

1.2 Coding design ideas

  • Call the dlib library for face detection and load the predictor "shape_predictor_68_face_landmarks.dat"
  • Perform 68-point calibration and save the coordinates of the 68 points
  • Use cv2.circle() to draw the 68 points
  • Use cv2.putText() to draw the numbers 1-68 (a minimal sketch tying these steps together follows this list)
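
The sketch below ties these four steps together; the image and model paths are placeholders, and the full step-by-step version is worked out in section 4.2.

import cv2
import dlib

img = cv2.imread("face.jpg")                  # placeholder image path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

detector = dlib.get_frontal_face_detector()   # dlib face detector
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

for rect in detector(gray, 0):                # each detected face
    shape = predictor(img, rect)              # 68-point calibration
    for idx, p in enumerate(shape.parts()):   # the 68 point coordinates
        cv2.circle(img, (p.x, p.y), 3, (0, 255, 0))  # draw each point
        cv2.putText(img, str(idx + 1), (p.x, p.y), cv2.FONT_HERSHEY_SIMPLEX,
                    0.3, (0, 0, 255), 1, cv2.LINE_AA)  # label it 1-68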

1.3 Introduction to OpenCV drawing functions

  1. Draw a circle: cv2.circle(img, (p1, p2), r, (255, 255, 255))
    img image object
    (p1, p2) center coordinates
    r radius
    (255, 255, 255) color tuple
  2. Output text: cv2.putText(img, "test", (p1, p2), font, 4, (255, 255, 255), 2, cv2.LINE_AA)
    img image object
    "test" the text to draw (a number can be converted to a string with str())
    (p1, p2) coordinates of the text origin (textOrg)
    font the font face (fontFace); here font = cv2.FONT_HERSHEY_SIMPLEX
    4 the font scale (fontScale)
    (255, 255, 255) color tuple
    2 line width (thickness)
    cv2.LINE_AA line type (lineType)

    About the color tuple (255, 255, 255): OpenCV uses (blue, green, red) order, with each value between 0 and 255. For example: blue is (255, 0, 0), purple is (255, 0, 255).
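
To see these two functions on their own, here is a minimal standalone sketch that draws one point and its label on a blank canvas; the canvas size, coordinates, and colors are arbitrary choices for illustration.

import cv2
import numpy as np

# A blank 300x300 black canvas (3-channel BGR image) to draw on
canvas = np.zeros((300, 300, 3), dtype=np.uint8)

# Draw a small white circle of radius 5 centered at (150, 150)
cv2.circle(canvas, (150, 150), 5, (255, 255, 255))

# Write the green label "1" just above and to the left of the circle
font = cv2.FONT_HERSHEY_SIMPLEX
cv2.putText(canvas, str(1), (140, 140), font, 0.5, (0, 255, 0), 1, cv2.LINE_AA)

cv2.imshow("drawing demo", canvas)
cv2.waitKey(0)
cv2.destroyAllWindows()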

2 Environment

  • Linux Ubuntu 16.04
  • Python 3.6
  • PyCharm Community2018
  • Opencv-python 3.4.0.12

3 Experiment content

  1. Use the haarcascade_frontalface_alt_tree.xml face classification model to detect faces.
  2. Use Dlib's official facial landmark predictor "shape_predictor_68_face_landmarks.dat" for 68-point calibration: use OpenCV for image processing to draw the 68 points on the face and label each one with its number.

4 Detailed steps

4.1 Face detection with OpenCV

First, convert the image to grayscale using OpenCV's cvtColor().

import cv2  
   
filepath = "/data/opencv12/mv.jpg"  
img = cv2.imread(filepath)  
# Convert to grayscale to remove the interference of color during face detection
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Display the original and the grayscale image
cv2.imshow("original", img)
cv2.imshow("Image", gray)
cv2.waitKey(0)  
cv2.destroyAllWindows() 

Then use the trained classifier to find faces. OpenCV's face detection requires a trained face model in XML format; this experiment uses the pre-trained face classifier provided by OpenCV, haarcascade_frontalface_alt_tree.xml.
A Haar feature classifier is an XML file that describes the Haar feature values of various parts of the human body, including the face, eyes, and lips.
Face detection in OpenCV uses the detectMultiScale function. It detects all the faces in the picture and returns the coordinates and size of each face as a rectangle.

# Load the OpenCV face detection classifier
classifier = cv2.CascadeClassifier("/data/opencv12/haarcascade_frontalface_alt_tree.xml")
# Detect faces: scaleFactor is the image scale step, minNeighbors the number of
# neighboring detections a candidate needs, minSize the smallest face size to accept
faceRects = classifier.detectMultiScale(gray, scaleFactor=1.2, minNeighbors=3, minSize=(32, 32))
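
Each element of faceRects is an (x, y, w, h) rectangle. A quick way to inspect what came back, purely as a check and not part of the final script:

# Print the top-left corner and size of every detected face
for (x, y, w, h) in faceRects:
    print("face at x=%d, y=%d, width=%d, height=%d" % (x, y, w, h))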

Finally, draw on the picture: use OpenCV's rectangle() to box each face (the code below also marks the approximate eye and mouth regions).

color = (0, 255, 0)
if len(faceRects):  # greater than 0 means at least one face was detected
    for faceRect in faceRects:  # box each face separately
        x, y, w, h = faceRect  # x, y: top-left corner; w, h: width and height of the rectangle
        # Box the face (the opposite corner is (x + w, y + h))
        cv2.rectangle(img, (x, y), (x + w, y + h), color, 2)
        # Left eye
        cv2.circle(img, (x + w // 4, y + h // 4 + 30), min(w // 8, h // 8), color)
        # Right eye
        cv2.circle(img, (x + 3 * w // 4, y + h // 4 + 30), min(w // 8, h // 8), color)
        # Mouth
        cv2.rectangle(img, (x + 3 * w // 8, y + 3 * h // 4), (x + 5 * w // 8, y + 7 * h // 8), color)

The complete code for face detection with OpenCV is as follows:

import cv2  
      
filepath = "/data/opencv12/mv.jpg"  
img = cv2.imread(filepath)    
cv2.imshow("original", img)    
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)    
       
classifier = cv2.CascadeClassifier("/data/opencv12/haarcascade_frontalface_alt_tree.xml")  
faceRects = classifier.detectMultiScale(gray, scaleFactor=1.2, minNeighbors=3, minSize=(32, 32))  
      
color = (0, 255, 0)    
if len(faceRects):    
    for faceRect in faceRects:    
        x, y, w, h = faceRect   
        cv2.rectangle(img, (x, y), (x + w, y + h), color, 2)  
        cv2.circle(img, (x + w // 4, y + h // 4 + 30), min(w // 8, h // 8),color)  
        cv2.circle(img, (x + 3 * w // 4, y + h // 4 + 30), min(w // 8, h // 8),color)  
        cv2.rectangle(img, (x + 3 * w // 8, y + 3 * h // 4),(x + 5 * w // 8, y + 7 * h // 8), color)  
      
cv2.imshow("image", img)    
cv2.waitKey(0)  
cv2.destroyAllWindows()  
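
If you are running in an environment without a display (for example over SSH), cv2.imshow cannot open a window; in that case one option is simply to write the annotated image to disk with cv2.imwrite instead (the output path below is just an example).

cv2.imwrite("/data/opencv12/mv_detected.jpg", img)  # save the result instead of displaying it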

The result of running the OpenCV face detection code is shown below.

4.2 68-point positioning of the face

In addition to face detection with OpenCV, you can also use the Dlib library, whose face detection is more accurate than OpenCV's, to achieve 68-point positioning of the face.

First import the library that needs to be called.

import dlib                     # dlib, the face recognition library
from PIL import Image           # PIL, the image processing library
import numpy as np              # numpy, the data processing library
import cv2                      # OpenCV, the image processing library

Then read the image and convert it to grayscale.

path = "/data/opencv12/mv.jpg"  
img = cv2.imread(path)  
cv2.imshow("original", img)  
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  

Next, load the trained model so that the 68 facial feature points can be detected.

# dlib frontal face detector
detector = dlib.get_frontal_face_detector()
# 68-point facial landmark predictor
predictor = dlib.shape_predictor("/data/opencv12/shape_predictor_68_face_landmarks.dat")

Finally, traverse all 68 detected points on the face, circle each one, and label it with a number from 1 to 68.

rects = detector(gray, 0)
for i in range(len(rects)):
     landmarks = np.matrix([[p.x, p.y] for p in predictor(img, rects[i]).parts()])  # the 68 calibration points of the face
     # Traverse all the points, circle each one, and label it with a number from 1 to 68
     for idx, point in enumerate(landmarks):
         pos = (point[0, 0], point[0, 1])
         # Use cv2.circle to draw a circle at each of the 68 feature points
         cv2.circle(img, pos, 3, color=(0, 255, 0))
         # Use cv2.putText to write the numbers 1-68
         font = cv2.FONT_HERSHEY_SIMPLEX
         cv2.putText(img, str(idx+1), pos, font, 0.3, (0, 0, 255), 1, cv2.LINE_AA)

The complete code for 68-point face positioning is as follows:

import cv2  
import dlib  
import numpy as np  
path = "/data/opencv12/mv.jpg"  
img = cv2.imread(path)  
cv2.imshow("original", img)  
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  
  
  
detector = dlib.get_frontal_face_detector()  
predictor = dlib.shape_predictor("/data/opencv12/shape_predictor_68_face_landmarks.dat")  
rects = detector(gray, 0)  
for i in range(len(rects)):  
     landmarks = np.matrix([[p.x, p.y] for p in predictor(img, rects[i]).parts()])  
     for idx, point in enumerate(landmarks):  
         pos = (point[0, 0], point[0, 1])  
         cv2.circle(img, pos, 3, color=(0, 255, 0))  
         font = cv2.FONT_HERSHEY_SIMPLEX  
         cv2.putText(img, str(idx+1), pos, font, 0.3, (0, 0, 255), 1, cv2.LINE_AA)  
cv2.imshow("imgdlib", img)  
cv2.waitKey(0)  
cv2.destroyAllWindows() 

The result of running the 68-point face positioning code is shown below.
You can see that dlib detects the eyes, nose, and mouth of the face, and the image marked with the 68 points shows that the face is located and detected accurately.
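
For reference, the 68 points produced by dlib's pretrained predictor follow a fixed layout, so individual facial features can be sliced out of the landmarks matrix computed in the loop above. A small sketch, assuming the standard 68-point annotation scheme (indices here are zero-based, while the numbers drawn on the image run from 1 to 68):

# Slice the landmarks matrix (68 rows of x, y) into facial features
# by their standard index ranges (zero-based)
jaw       = landmarks[0:17]    # jaw line
eyebrows  = landmarks[17:27]   # right and left eyebrows
nose      = landmarks[27:36]   # nose bridge and nostrils
right_eye = landmarks[36:42]
left_eye  = landmarks[42:48]
mouth     = landmarks[48:68]   # outer and inner lips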


You are welcome to leave a message so we can learn and communicate together~

Thanks for reading

END

Origin blog.csdn.net/IT_charge/article/details/112329944