Detecting key facial regions with Python (with source code)

Facial feature extraction

This article mainly uses the facial landmark detection functionality of the dlib library.

dlib marks a face with 68 landmark points and recovers each facial feature from the corresponding range of point indices. The figure below shows the 68 points. For example, to extract the eyes we take points 37 to 46.


We add the same mapping in the code so that each facial region can be looked up directly by name.

from collections import OrderedDict

# 68-point model: each region maps to a [start, end) range of landmark indices
FACIAL_LANDMARKS_68_IDXS = OrderedDict([
    ("mouth", (48, 68)),
    ("right_eyebrow", (17, 22)),
    ("left_eyebrow", (22, 27)),
    ("right_eye", (36, 42)),
    ("left_eye", (42, 48)),
    ("nose", (27, 36)),
    ("jaw", (0, 17))
])

FACIAL_LANDMARKS_5_IDXS = OrderedDict([
    ("right_eye", (2, 3)),
    ("left_eye", (0, 1)),
    ("nose", (4, 5))  # nose tip, index 4; upper bound exclusive
])
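As a quick check of how these index ranges are used, the sketch below slices a dummy 68x2 landmark array; the array contents are placeholders, not real detector output:

```python
from collections import OrderedDict

import numpy as np

FACIAL_LANDMARKS_68_IDXS = OrderedDict([
    ("mouth", (48, 68)),
    ("right_eyebrow", (17, 22)),
    ("left_eyebrow", (22, 27)),
    ("right_eye", (36, 42)),
    ("left_eye", (42, 48)),
    ("nose", (27, 36)),
    ("jaw", (0, 17))
])

# dummy (68, 2) array standing in for real landmark output
shape = np.arange(68 * 2).reshape(68, 2)

# slice out one region by name
i, j = FACIAL_LANDMARKS_68_IDXS["right_eye"]
eye_pts = shape[i:j]   # the 6 right-eye points, indices 36..41
print(eye_pts.shape)   # (6, 2)
```

The same `shape[i:j]` slice is what the drawing loop later in the article relies on.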

Data preprocessing and model loading

We resize the input image as required and convert it to grayscale, then load the get_frontal_face_detector model and the shape predictor for detection.
import cv2
import dlib
import numpy as np

# load the face detector and the landmark predictor
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor(args["shape_predictor"])

# read the input image and preprocess it
image = cv2.imread(args["image"])
(h, w) = image.shape[:2]
width = 500
r = width / float(w)
dim = (width, int(h * r))
image = cv2.resize(image, dim, interpolation=cv2.INTER_AREA)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
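The aspect-ratio arithmetic above (and the same computation repeated later for the ROI) can be factored into a small helper; `scaled_dims` is a name of my own, not from the original script:

```python
def scaled_dims(h, w, target_width):
    # compute (new_width, new_height) preserving the h/w aspect ratio,
    # matching the r = width / w, int(h * r) steps used in the article
    r = target_width / float(w)
    return (target_width, int(h * r))

# e.g. an 800x600 (w x h) image scaled down to width 500
print(scaled_dims(600, 800, 500))  # (500, 375)
```

The result can be passed directly as the `dsize` argument of cv2.resize.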

# detect faces in the grayscale image
rects = detector(gray, 1)

Iterate over each detected face

For each detected face, predict the landmark positions, locate the key facial regions, and convert the result to a NumPy array.

# predict the landmarks for the face rectangle rect,
# then convert them to a NumPy array
shape = predictor(gray, rect)
shape = shape_to_np(shape)
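shape_to_np is not defined in the snippet above; it converts dlib's landmark object into an (N, 2) coordinate array. A minimal sketch, matching the behavior of imutils.face_utils.shape_to_np and assuming the input exposes num_parts and part(i) as dlib's full_object_detection does:

```python
import numpy as np

def shape_to_np(shape, dtype="int"):
    # copy each dlib landmark point (x, y) into a (num_parts, 2) array
    coords = np.zeros((shape.num_parts, 2), dtype=dtype)
    for k in range(shape.num_parts):
        coords[k] = (shape.part(k).x, shape.part(k).y)
    return coords

# tiny stand-in objects mimicking dlib's interface, for demonstration only
class _Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

class _FakeDetection:
    num_parts = 3
    def part(self, k):
        return _Point(k * 10, k * 10 + 5)

pts = shape_to_np(_FakeDetection())
print(pts.tolist())  # [[0, 5], [10, 15], [20, 25]]
```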

Traverse each region, make a copy of the image to draw on, and label the copy with the name of the region currently being processed.

# iterate over each facial region
for (name, (i, j)) in FACIAL_LANDMARKS_68_IDXS.items():
    clone = image.copy()
    cv2.putText(clone, name, (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)

Draw the feature points on the image at the detected locations.

for (x, y) in shape[i:j]:
    cv2.circle(clone, (x, y), 3, (0, 0, 255), -1)

Extract the region of interest (ROI) for the current feature and scale it up for display.

(x, y, w, h) = cv2.boundingRect(np.array([shape[i:j]]))
roi = image[y:y + h, x:x + w]
(h, w) = roi.shape[:2]
width = 250
r = width / float(w)
dim = (width, int(h * r))
roi = cv2.resize(roi, dim, interpolation=cv2.INTER_AREA)
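For intuition, what cv2.boundingRect computes for an integer point set is roughly the following NumPy sketch (an illustration, not a replacement for the OpenCV call):

```python
import numpy as np

def bounding_rect(points):
    # smallest upright rectangle containing all (x, y) points,
    # returned as (x, y, w, h) like cv2.boundingRect for integer input
    x_min, y_min = points.min(axis=0)
    x_max, y_max = points.max(axis=0)
    return (int(x_min), int(y_min),
            int(x_max - x_min + 1), int(y_max - y_min + 1))

pts = np.array([[12, 7], [20, 9], [15, 14]])
print(bounding_rect(pts))  # (12, 7, 9, 8)
```

The returned (x, y, w, h) tuple is then used to slice the ROI out of the image, exactly as in the snippet above.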

Finally, display the ROI and the annotated image.

cv2.imshow("ROI", roi)
cv2.imshow("Image", clone)
cv2.waitKey(0)

Final results

Original image

Face detection


All facial features


Detection of key parts


Origin blog.csdn.net/xff123456_/article/details/124214086