OpenCV Project in Practice 10: Facial Feature Extraction and Adding Filters

1. Effect display

This is a demo with the camera turned on, using a picture from my phone. I'm a bit embarrassed to appear on screen myself, but you can still see the effect.

Now let's look at the still-image version:

That striking "death Barbie pink": the effect is quite good, and a still image actually shows it better than the live camera feed.


2. Project introduction

In this project, I will use dlib together with the shape_predictor_68_face_landmarks.dat file to add a mask to the image and change the color of the lips. If you want to modify other parts of the face, the method is the same. In addition, I will also show the 68 facial landmarks on the image, so stay tuned!


3. Project construction

Setting up this project only requires one file downloaded from the official website.

Click here: Index of /files (dlib.net)

You will see this page:

Next, scroll to the bottom of the page and download the shape_predictor_68_face_landmarks.dat.bz2 file.

Please note that the downloaded file is a compressed archive and can only be used after decompression. That covers where the required file comes from.

Since GitHub uploads cannot exceed 25 MB, you will need to download this file yourself.
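Since the model can't live in the repo, here is a small stdlib-only helper (my own sketch, not part of the original project) that downloads the archive from dlib.net and decompresses it; the file name and URL match dlib's official file index.

```python
import bz2
import os
import shutil
import urllib.request

MODEL_URL = "http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2"

def decompress_bz2(archive_path, dest_path):
    """Stream-decompress a .bz2 archive to dest_path."""
    with bz2.open(archive_path, "rb") as src, open(dest_path, "wb") as dst:
        shutil.copyfileobj(src, dst)

def fetch_predictor(dest="shape_predictor_68_face_landmarks.dat"):
    """Download the compressed landmark model once and decompress it next to the script."""
    if not os.path.exists(dest):
        archive = dest + ".bz2"
        urllib.request.urlretrieve(MODEL_URL, archive)
        decompress_bz2(archive, dest)
        os.remove(archive)
    return dest
```

Run fetch_predictor() once before creating the dlib.shape_predictor, and the .dat file will be placed next to the script.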

 

1.png is the image used for still-image detection, and 2.jpg is used for the video test.


4. Code display and explanation

import cv2
import numpy as np
import dlib

webcam = False
cap = cv2.VideoCapture(0)
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def empty(a):
    pass

cv2.namedWindow("BGR")
cv2.resizeWindow("BGR", 640, 240)
cv2.createTrackbar("Blue", "BGR", 153, 255, empty)
cv2.createTrackbar("Green", "BGR", 0, 255, empty)
cv2.createTrackbar("Red", "BGR", 137, 255, empty)

def createBox(img, points, scale=5, masked=False, cropped=True):
    mask = np.zeros_like(img)  # defined up front so the cropped=False branch always has a mask to return
    if masked:
        mask = cv2.fillPoly(mask, [points], (255, 255, 255))
        img = cv2.bitwise_and(img, mask)
        # cv2.imshow('Mask', mask)

    if cropped:
        x, y, w, h = cv2.boundingRect(points)
        imgCrop = img[y:y + h, x:x + w]
        imgCrop = cv2.resize(imgCrop, (0, 0), None, scale, scale)
        cv2.imwrite("Mask.jpg", imgCrop)
        return imgCrop
    else:
        return mask


while True:

    if webcam:
        success, img = cap.read()
    else:
        img = cv2.imread('1.png')
    img = cv2.resize(img, (0, 0), None, 0.80, 0.80)
    imgOriginal = img.copy()
    imgGray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    faces = detector(imgOriginal)
    for face in faces:
        x1, y1 = face.left(), face.top()
        x2, y2 = face.right(), face.bottom()
        # imgOriginal=cv2.rectangle(imgOriginal, (x1, y1), (x2, y2), (0, 255, 0), 2)
        landmarks = predictor(imgGray, face)
        myPoints = []
        for n in range(68):
            x = landmarks.part(n).x
            y = landmarks.part(n).y
            myPoints.append([x, y])
            # cv2.circle(imgOriginal, (x, y), 5, (50,50,255),cv2.FILLED)
            # cv2.putText(imgOriginal,str(n),(x,y-10),cv2.FONT_HERSHEY_COMPLEX_SMALL,0.8,(0,0,255),1)
        # print(myPoints)
        if len(myPoints) != 0:
            try:
                myPoints = np.array(myPoints)
                imgEyeBrowLeft = createBox(img, myPoints[17:22])
                imgEyeBrowRight = createBox(img, myPoints[22:27])
                imgNose = createBox(img, myPoints[27:36])
                imgLeftEye = createBox(img, myPoints[36:42])
                imgRightEye = createBox(img, myPoints[42:48])
                imgLips = createBox(img, myPoints[48:61])
                cv2.imshow('Left Eyebrow', imgEyeBrowLeft)
                cv2.imshow('Right Eyebrow', imgEyeBrowRight)
                cv2.imshow('Nose', imgNose)
                cv2.imshow('Left Eye', imgLeftEye)
                cv2.imshow('Right Eye', imgRightEye)
                cv2.imshow('Lips', imgLips)

                maskLips = createBox(img, myPoints[48:61], masked=True, cropped=False)
                imgColorLips = np.zeros_like(maskLips)
                b = cv2.getTrackbarPos("Blue", "BGR")
                g = cv2.getTrackbarPos("Green", "BGR")
                r = cv2.getTrackbarPos("Red", "BGR")

                imgColorLips[:] = b, g, r
                imgColorLips = cv2.bitwise_and(maskLips, imgColorLips)
                imgColorLips = cv2.GaussianBlur(imgColorLips, (7, 7), 10)

                imgOriginalGray = cv2.cvtColor(imgOriginal, cv2.COLOR_BGR2GRAY)
                imgOriginalGray = cv2.cvtColor(imgOriginalGray, cv2.COLOR_GRAY2BGR)
                imgColorLips = cv2.addWeighted(imgOriginalGray, 1, imgColorLips, 0.4, 0)
                cv2.imshow('BGR', imgColorLips)

            except Exception:
                # a face near the frame edge can make the crop fail; just skip this frame
                pass

    cv2.imshow("Original", imgOriginal)
    if cv2.waitKey(1) == 27:
        break

Today's explanation works a little differently: the code above is the finished result. To understand how it was built, you need the parts that are commented out. I went through it this afternoon, so let me walk you through the ideas.

As the title suggests, this walkthrough is divided into parts. Please note: if you have not seen my earlier project, "OpenCV project 07: Face recognition and attendance system" (on my CSDN blog, Summer is the ice black tea), please read that article first. You can also jump straight to my CSDN post on the easiest way to install dlib and face_recognition under Python 3.7.

Part 1: Facial feature extraction

import cv2
import numpy as np
import dlib

img = cv2.imread('1.png')
img = cv2.resize(img, (0, 0), None, 0.80, 0.80)
imgOriginal = img.copy()

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

imgGray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = detector(imgOriginal)

for face in faces:
    x1, y1 = face.left(), face.top()
    x2, y2 = face.right(), face.bottom()
    imgOriginal=cv2.rectangle(imgOriginal, (x1, y1), (x2, y2), (0, 255, 0), 2)
    landmarks = predictor(imgGray, face)
    myPoints = []
    for n in range(68):
        x = landmarks.part(n).x
        y = landmarks.part(n).y
        myPoints.append([x, y])
        cv2.circle(imgOriginal, (x, y), 5, (50,50,255),cv2.FILLED)
        cv2.putText(imgOriginal,str(n),(x,y-10),cv2.FONT_HERSHEY_COMPLEX_SMALL,0.8,(0,0,255),1)
    print(myPoints)

cv2.imshow("Original", imgOriginal)
cv2.waitKey(0)

Before the explanation, let's take a look at the result:

Pretty cool! As you can see, we successfully detected the face and obtained all 68 facial landmarks.

  • First, importing the packages, reading the image, resizing it, and copying the original are all basic operations. The detector variable holds dlib's default frontal face detector, which finds the faces in the image;
  • Next, we create a predictor with dlib.shape_predictor, loading the downloaded file that locates the 68 facial landmarks. We pass the image to the detector, loop over the bounding boxes it returns, and run the predictor on each face; inside the inner loop, x and y are the coordinates of one dlib landmark point, which we draw and label with its index;
  • Finally, we show the resulting image.
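As a side note, the loop that builds myPoints can be factored into a small helper; shape_to_np is my own name for it (following a common convention), and it works with any object that exposes dlib's part(n) interface:

```python
import numpy as np

def shape_to_np(shape, n_points=68):
    """Collect the predictor's part(n) points into an (n_points, 2) integer array."""
    return np.array([[shape.part(n).x, shape.part(n).y] for n in range(n_points)],
                    dtype=int)
```

With it, the body of the face loop reduces to myPoints = shape_to_np(landmarks).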

Part 2: Extracting face regions

This time, let's look at the result first. The goal is to extract the corresponding regions of the face.

import cv2
import numpy as np
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def createBox(img, points, scale=3):
    bbox = cv2.boundingRect(points)
    x, y, w, h = bbox
    imgCrop = img[y:y + h, x:x + w]
    imgCrop = cv2.resize(imgCrop, (0, 0), None, scale, scale)
    return imgCrop

img = cv2.imread('1.png')
img = cv2.resize(img, (0, 0), None, 0.80, 0.80)
imgOriginal = img.copy()
imgGray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = detector(imgOriginal)

for face in faces:
    x1, y1 = face.left(), face.top()
    x2, y2 = face.right(), face.bottom()
    imgOriginal=cv2.rectangle(imgOriginal, (x1, y1), (x2, y2), (0, 255, 0), 2)
    landmarks = predictor(imgGray, face)
    myPoints = []
    for n in range(68):
        x = landmarks.part(n).x
        y = landmarks.part(n).y
        myPoints.append([x, y])
        # cv2.circle(imgOriginal, (x, y), 5, (50,50,255),cv2.FILLED)
        # cv2.putText(imgOriginal,str(n),(x,y-10),cv2.FONT_HERSHEY_COMPLEX_SMALL,0.8,(0,0,255),1)
    myPoints = np.array(myPoints)
    imgEyeBrowLeft = createBox(img, myPoints[17:22])
    imgEyeBrowRight = createBox(img, myPoints[22:27])
    imgNose = createBox(img, myPoints[27:36])
    imgLeftEye = createBox(img, myPoints[36:42])
    imgRightEye = createBox(img, myPoints[42:48])
    imgLips = createBox(img, myPoints[48:61])
    cv2.imshow('Left Eyebrow', imgEyeBrowLeft)
    cv2.imshow('Right Eyebrow', imgEyeBrowRight)
    cv2.imshow('Nose', imgNose)
    cv2.imshow('Left Eye', imgLeftEye)
    cv2.imshow('Right Eye', imgRightEye)
    cv2.imshow('Lips', imgLips)
    
cv2.imshow("Original", imgOriginal)
cv2.waitKey(0)

The screenshot shows only the left eye, but the code extracts the left and right eyebrows, the left and right eyes, the nose, and the lips. The createBox function written here will keep being extended, so I won't dwell on it yet.
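For reference, the slice boundaries above follow the standard 68-point layout. Here is the full map as a sketch (the names are my own labels, with "left"/"right" matching the post's naming; ranges are end-exclusive like Python slices — note the post slices the lips as 48:61, the outer lips plus one inner point, while the canonical split is outer 48-59 and inner 60-67):

```python
# 68-point landmark index ranges (end-exclusive, matching the slices above).
LANDMARK_REGIONS = {
    "jaw":           (0, 17),
    "left_eyebrow":  (17, 22),
    "right_eyebrow": (22, 27),
    "nose":          (27, 36),
    "left_eye":      (36, 42),
    "right_eye":     (42, 48),
    "outer_lips":    (48, 60),
    "inner_lips":    (60, 68),
}

def region_points(points, name):
    """Slice a 68-element landmark sequence down to one region."""
    start, end = LANDMARK_REGIONS[name]
    return points[start:end]
```

So createBox(img, region_points(myPoints, "nose")) would crop the nose, and so on.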

Part 3: Creating a mask for the face

When coloring we need an accurate region: not a rectangle but a polygon, which requires knowing the exact lip points. For simplicity, let's look at the lip mask.

import cv2
import numpy as np
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def createBox(img, points, scale=3):
    mask = np.zeros_like(img)
    mask = cv2.fillPoly(mask, [points], (255, 255, 255))
    img = cv2.bitwise_and(img, mask)      
    cv2.imshow('Mask',mask)

    bbox = cv2.boundingRect(points)
    x, y, w, h = bbox
    imgCrop = img[y:y + h, x:x + w]
    imgCrop = cv2.resize(imgCrop, (0, 0), None, scale, scale)
    return imgCrop

img = cv2.imread('1.png')
img = cv2.resize(img, (0, 0), None, 0.80, 0.80)
imgOriginal = img.copy()
imgGray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = detector(imgOriginal)

for face in faces:
    x1, y1 = face.left(), face.top()
    x2, y2 = face.right(), face.bottom()
    imgOriginal=cv2.rectangle(imgOriginal, (x1, y1), (x2, y2), (0, 255, 0), 2)
    landmarks = predictor(imgGray, face)
    myPoints = []
    for n in range(68):
        x = landmarks.part(n).x
        y = landmarks.part(n).y
        myPoints.append([x, y])
    myPoints = np.array(myPoints)
    imgLips = createBox(img, myPoints[48:61])
    cv2.imshow('Lips', imgLips)

cv2.imshow("Original", imgOriginal)
cv2.waitKey(0)

The result looks like this:

Now change the display line inside createBox to show the masked image instead of the mask itself:

cv2.imshow('Mask', img)

We then get the picture below: exactly the effect we want. My main point here is the relationship between masks and bitwise operations. Having used Photoshop and Premiere, I'm fairly comfortable with masks, but the concept was completely new to me as a beginner, so it's worth mentioning.
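The relationship between the mask and the bitwise operation can be shown with a toy example, no OpenCV needed: for 8-bit pixels, p & 255 leaves a pixel unchanged and p & 0 zeroes it out, which is exactly what cv2.bitwise_and does per channel.

```python
# Toy single-channel "image" and a polygon-style mask: 255 keeps a pixel, 0 discards it.
image = [[10, 20, 30],
         [40, 50, 60],
         [70, 80, 90]]
mask  = [[  0, 255,   0],
         [255, 255, 255],
         [  0, 255,   0]]

def bitwise_and_2d(img, msk):
    """Per-pixel AND, the scalar analogue of cv2.bitwise_and on uint8 data."""
    return [[p & m for p, m in zip(img_row, msk_row)]
            for img_row, msk_row in zip(img, msk)]

masked = bitwise_and_2d(image, mask)
# masked == [[0, 20, 0], [40, 50, 60], [0, 80, 0]]
```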

Part 4: Colorizing the original image

import cv2
import numpy as np
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def createBox(img, points, scale=3, masked=False, cropped=True):
    mask = np.zeros_like(img)  # defined up front so the cropped=False branch always has a mask to return
    if masked:
        mask = cv2.fillPoly(mask, [points], (255, 255, 255))
        img = cv2.bitwise_and(img, mask)
        # cv2.imshow('Mask', mask)

    if cropped:
        x, y, w, h = cv2.boundingRect(points)
        imgCrop = img[y:y + h, x:x + w]
        imgCrop = cv2.resize(imgCrop, (0, 0), None, scale, scale)
        cv2.imwrite("Mask.jpg", imgCrop)
        return imgCrop
    else:
        return mask

img = cv2.imread('1.png')
img = cv2.resize(img, (0, 0), None, 0.80, 0.80)
imgOriginal = img.copy()
imgGray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = detector(imgOriginal)

for face in faces:
    x1, y1 = face.left(), face.top()
    x2, y2 = face.right(), face.bottom()
    landmarks = predictor(imgGray, face)
    myPoints = []
    for n in range(68):
        x = landmarks.part(n).x
        y = landmarks.part(n).y
        myPoints.append([x, y])

    myPoints = np.array(myPoints)
    maskLips = createBox(img, myPoints[48:61], masked=True, cropped=False)

    imgColorLips = np.zeros_like(maskLips)
    imgColorLips[:] = 153, 0, 158
    imgColorLips = cv2.bitwise_and(maskLips, imgColorLips)     
    imgColorLips = cv2.GaussianBlur(imgColorLips, (7, 7), 10)  
    imgColorLips = cv2.addWeighted(imgOriginal, 1, imgColorLips, 0.4, 0)   
    cv2.imshow('Color', imgColorLips)
    cv2.imshow('Lips', maskLips)

cv2.imshow("Original", imgOriginal)
cv2.waitKey(0)

Let me highlight these lines:

imgColorLips = cv2.bitwise_and(maskLips, imgColorLips)     # use a bitwise AND to combine the mask with the solid-color plate
imgColorLips = cv2.GaussianBlur(imgColorLips, (7, 7), 10)  # add a Gaussian blur so the color doesn't look harsh
imgColorLips = cv2.addWeighted(imgOriginal, 1, imgColorLips, 0.4, 0)  # weight the blend so the color merges into the lips

See the comments for details; this produces the result below.
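For reference, cv2.addWeighted computes dst = src1*alpha + src2*beta + gamma for every pixel and saturates the result to the uint8 range. A scalar sketch (my own helper, for illustration only):

```python
def add_weighted(p1, alpha, p2, beta, gamma=0.0):
    """Scalar version of cv2.addWeighted: blend two pixels and clip to [0, 255]."""
    return max(0, min(255, round(p1 * alpha + p2 * beta + gamma)))

# Blending the original at full weight with the color layer at 0.4, as in the code above:
blended = add_weighted(100, 1.0, 50, 0.4)   # 100 + 20 -> 120
clipped = add_weighted(220, 1.0, 150, 0.4)  # 220 + 60 = 280 -> saturates to 255
```

The saturation is why a lip color added onto already bright pixels cannot overflow past white.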

Now modify the blending step by adding the following code:

imgOriginalGray = cv2.cvtColor(imgOriginal, cv2.COLOR_BGR2GRAY)
imgOriginalGray = cv2.cvtColor(imgOriginalGray, cv2.COLOR_GRAY2BGR)
imgColorLips = cv2.addWeighted(imgOriginalGray, 1, imgColorLips, 0.4, 0)

To observe the lip coloring more clearly, we convert the original image to grayscale; but since we still want to add color to it, we need three channels, so the grayscale image is converted back into a three-channel BGR image.
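The GRAY2BGR conversion adds no new information; it simply replicates each intensity into all three channels so the colored overlay can be blended channel-wise. A minimal sketch of that behavior:

```python
def gray_to_bgr(gray_row):
    """Replicate each intensity into a (B, G, R) triple, as cv2.COLOR_GRAY2BGR does."""
    return [(v, v, v) for v in gray_row]

row = gray_to_bgr([0, 128, 255])
# row == [(0, 0, 0), (128, 128, 128), (255, 255, 255)]
```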

Let's take a look at its effect:

Cool! The effect is now very clear, and we can see the result of the project much better.

Part 5: Adding a trackbar to modify the color in real time

Completing this step gives the code shown at the beginning. Note that cv2.createTrackbar requires a callback function; an empty one that does nothing is enough to pass.

After the explanations above, everything here should already be familiar, so I won't go into more detail.


5. Project resources

GitHub:Opencv-project-training/Opencv project training/10 Facial Landmarks and Face Filter at main · Auorui/Opencv-project-training · GitHub


6. Project summary

It feels like a long time since my last update. I've recently been catching up on data analysis, and since I write a blog post after everything I learn, progress is a little slow. Just yesterday I ordered the watermelon book, the pumpkin book, and Li Hang's Statistical Learning Methods. To be honest, although I have electronic versions of these, I can't bear to read them on a screen; my eyes get uncomfortable, and I still prefer the feel of paper. I hope to get started with machine learning soon. I've almost forgotten my advanced mathematics; I can only say I still have some impression of it. Either way, I'll have to work hard to chew through these topics.

I hope you have fun with this project. See you in the next one!


Origin blog.csdn.net/m0_62919535/article/details/126994472