Python | Face Recognition System — Liveness Detection

Blog Summary: Python | Face Recognition System — Blog Index

GitHub address: Su-Face-Recognition

Note: Before reading this blog, please refer to:

Tool Installation, Environment Configuration: Python | Face Recognition System—Introduction

UI Interface Design: Python | Face Recognition System — UI Interface Design

UI Event Processing: Python | Face Recognition System — UI Event Processing

Camera Screen Display: Python | Face Recognition System — Camera Screen Display

1. Basic idea

The code combines silent liveness detection and interactive liveness detection to reach a verdict.

Silent liveness detection calls the Baidu API; the confidence returned by the interface determines whether the check passes.

Interactive liveness detection requires the user to complete specified actions; completing them determines whether the check passes.

2. Initialization

Initialize the isFaceDetection_flag flag, which tracks whether liveness detection is currently running.

The liveness detection button is bound to the detect_face_judge method, the liveness detection dispatcher.

The remaining attributes are discussed later.

    def __init__(self, parent=None):
        super(UserMainWindow, self).__init__(parent)
        self.setupUi(self)

        self.isFaceDetection_flag = False  # whether liveness detection is currently running
        self.biopsy_testing_button.clicked.connect(self.detect_face_judge)  # liveness detection button

        self.detector = None  # face detector
        self.predictor = None  # facial landmark predictor
        # blink / mouth-open thresholds
        self.EAR_THRESH = None
        self.MOUTH_THRESH = None
        # total action counters
        self.eye_flash_counter = None
        self.mouth_open_counter = None
        self.turn_left_counter = None
        self.turn_right_counter = None
        # consecutive-frame thresholds
        self.EAR_CONSTANT_FRAMES = None
        self.MOUTH_CONSTANT_FRAMES = None
        self.LEFT_CONSTANT_FRAMES = None
        self.RIGHT_CONSTANT_FRAMES = None
        # consecutive-frame counters
        self.eye_flash_continuous_frame = 0
        self.mouth_open_continuous_frame = 0
        self.turn_left_continuous_frame = 0
        self.turn_right_continuous_frame = 0
        # text color
        self.text_color = (255, 0, 0)
        # Baidu API tool class
        self.api = BaiduApiUtil

3. Judgment

    # liveness detection dispatcher
    def detect_face_judge(self):
        if not self.cap.isOpened():
            QMessageBox.information(self, "提示", self.tr("请先打开摄像头"))
        else:
            if not self.isFaceDetection_flag:
                # start detection; detect_face() blocks until it passes, fails, or times out
                self.isFaceDetection_flag = True
                self.biopsy_testing_button.setText("关闭活体检测")
                self.detect_face()
                self.biopsy_testing_button.setText("活体检测")
                self.isFaceDetection_flag = False
            else:
                # detection is running: stop it and restore the normal camera view
                self.isFaceDetection_flag = False
                self.remind_label.setText("")
                self.biopsy_testing_button.setText("活体检测")
                self.show_camera()

4. Detector

First, determine whether the current environment has an Internet connection (the network check lives in the BaiduApiUtil tool class, shown below). When online, perform silent liveness detection followed by interactive liveness detection; otherwise, perform interactive liveness detection alone (local detection).

    # Baidu API tool class
    self.api = BaiduApiUtil

    ... ...

    # overall liveness detection
    def detect_face(self):
        # online: silent detection first, then interactive detection;
        # offline: interactive (local) detection only
        if self.api.network_connect_judge():
            if not self.detect_face_network():
                return False
        if not self.detect_face_local():
            return False
        return True


    # online (silent) liveness detection
    def detect_face_network(self):
        ... ...


    # local (interactive) liveness detection
    def detect_face_local(self):
        ... ...

1. Silent liveness detection

Silent liveness detection uses the Baidu Smart Cloud interface. We create a tool class, BaiduApiUtil, and put the code for the network check, the request, result parsing, etc. in it, then call it from the UI logic code.

For interface details, please refer to Baidu Smart Cloud - Interface Details

For code examples, please refer to Baidu Smart Cloud - Code Examples

Note: Before using it, you need to register a Baidu Smart Cloud account, apply for the interface (it is free), and obtain your own API_KEY and SECRET_KEY.

(1) Tool class BaiduApiUtil

a. Network judgment

# imports used throughout the BaiduApiUtil tool class
import base64
import json
import os
from configparser import ConfigParser

import requests


def network_connect_judge():
    """
    Check for an Internet connection
    :return: True if online
    """
    # ping once; -n is the Windows count flag (use -c on Linux/macOS)
    ret = os.system("ping baidu.com -n 1")
    return ret == 0
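The ping-based check above is Windows-specific (the packet-count flag is -n; Linux and macOS use -c) and shells out to the OS. A minimal portable alternative, assuming only the standard library (this sketch is not part of the original project), is to attempt a TCP handshake:

import socket

def network_connect_judge():
    """
    Portable connectivity check: try a TCP handshake instead of shelling out to ping.
    The host, port, and timeout are illustrative choices.
    :return: True if online
    """
    try:
        socket.create_connection(("www.baidu.com", 443), timeout=2).close()
        return True
    except OSError:
        return False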

b. Obtain an access token

Save parameters such as the API_KEY for the Baidu interface in a .conf configuration file (placed in the conf directory of the project), then read them with ConfigParser:

[baidu_config]
app_id = XXXXXXXXXXXXXXXXXXXXXXXX
secret_key = XXXXXXXXXXXXXXXXXXXXXXXX

def get_access_token():
    """
    Obtain an access token
    :return: access token
    """
    conf = ConfigParser()
    path = os.path.join(os.path.dirname(__file__))
    conf.read(path[:path.rindex('util')] + "conf\\setting.conf", encoding='gbk')

    API_KEY = conf.get('baidu_config', 'app_id')  # the API Key from the Baidu console
    SECRET_KEY = conf.get('baidu_config', 'secret_key')  # the Secret Key from the Baidu console

    url = "https://aip.baidubce.com/oauth/2.0/token"
    params = {"grant_type": "client_credentials", "client_id": API_KEY, "client_secret": SECRET_KEY}
    return str(requests.post(url, params=params).json().get("access_token"))
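get_access_token makes a network round trip on every call. Baidu access tokens remain valid for 30 days, so a small cache can reuse one token across calls. The wrapper below is only a sketch; get_access_token_cached and the module-level cache are assumptions, not code from the project:

from time import time

_token_cache = {"value": None, "expires_at": 0.0}

def get_access_token_cached():
    """Reuse the cached token until shortly before expiry instead of requesting a new one each call."""
    if _token_cache["value"] and time() < _token_cache["expires_at"]:
        return _token_cache["value"]
    _token_cache["value"] = get_access_token()
    # tokens last 30 days; refresh a day early to be safe
    _token_cache["expires_at"] = time() + 29 * 24 * 3600
    return _token_cache["value"]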

c. Interface call

Note: The API expects the uploaded image as a base64 string, while the image we capture is a .jpg file, so a conversion via the base64.b64encode() method is required.

def face_api_invoke(path):
    """
    Invoke the face-verify API
    :param path: path of the image to check
    :return: whether silent liveness detection passed
    """
    with open(path, 'rb') as f:
        img_data = f.read()
        base64_data = base64.b64encode(img_data)
        base64_str = base64_data.decode('utf-8')
    url = "https://aip.baidubce.com/rest/2.0/face/v3/faceverify?access_token=" + get_access_token()
    headers = {'Content-Type': 'application/json'}
    payload = json.dumps([{
        "image": base64_str,
        "image_type": "BASE64"
    }])
    response = requests.request("POST", url, headers=headers, data=payload)
    result = json.loads(response.text)
    if result["error_msg"] == "SUCCESS":
        # the returned thresholds grow with the false-rejection rate:
        # frr_1e-4 <= frr_1e-3 <= frr_1e-2
        frr_1e_3 = result["result"]["thresholds"]["frr_1e-3"]
        face_liveness = result["result"]["face_liveness"]
        # pass when the liveness score reaches the 1e-3 FRR threshold
        # (the stricter frr_1e-2 or looser frr_1e-4 could be used instead)
        return face_liveness >= frr_1e_3
    return False

(2) Calling it from the main UI logic

    # current file directory
    curPath = os.path.abspath(os.path.dirname(__file__))
    # project root path
    rootPath = curPath[:curPath.rindex('logic')]  # 'logic' is the folder holding the UI logic code
    # configuration folder path
    CONF_FOLDER_PATH = rootPath + 'conf\\'
    # photo folder path
    PHOTO_FOLDER_PATH = rootPath + 'photo\\'
    # data folder path
    DATA_FOLDER_PATH = rootPath + 'data\\'

    ... ...

    # online (silent) liveness detection
    def detect_face_network(self):
        while self.cap.isOpened():
            ret, frame = self.cap.read()
            if not ret:
                continue  # skip frames that failed to read
            frame_location = face_recognition.face_locations(frame)
            if len(frame_location) == 0:
                QApplication.processEvents()
                self.remind_label.setText("未检测到人脸")
            else:
                # save the current frame as a temporary snapshot for the API call
                global PHOTO_FOLDER_PATH
                shot_path = PHOTO_FOLDER_PATH + datetime.now().strftime("%Y%m%d%H%M%S") + ".jpg"
                self.show_image.save(shot_path)
                QApplication.processEvents()
                self.remind_label.setText("正在初始化\n请稍后")
                # silent liveness detection through the Baidu API
                QApplication.processEvents()
                if not self.api.face_api_invoke(shot_path):
                    os.remove(shot_path)
                    QMessageBox.about(self, '警告', '未通过活体检测')
                    self.remind_label.setText("")
                    return False
                else:
                    os.remove(shot_path)
                    return True

            # display the current frame in the UI
            show_video = cv2.cvtColor(cv2.resize(frame, (self.WIN_WIDTH, self.WIN_HEIGHT)), cv2.COLOR_BGR2RGB)
            self.show_image = QImage(show_video.data, show_video.shape[1], show_video.shape[0], QImage.Format_RGB888)
            self.camera_label.setPixmap(QPixmap.fromImage(self.show_image))

2. Interactive liveness detection

(1) Basic principles

The shape_predictor_68_face_landmarks model of the open-source framework dlib is used to detect and locate the 68 facial feature points. The liveness detection in this system checks several actions (shaking the head left and right, blinking, opening the mouth, nodding, etc.), so it uses the feature-point sets of several regions, such as the nose [32, 36], the left eye [37, 42], the right eye [43, 48], and the inner edge of the lips [66, 68].
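For reference, the face_utils module of imutils (used by the code below) stores these regions as 0-based, end-exclusive slices into the 68-point array. Note that imutils names the eyes from the subject's point of view, so its "right_eye" slice (36, 42) corresponds to the 1-based range 37~42 quoted above. A quick sanity check:

from imutils import face_utils

# 0-based, end-exclusive index ranges into the 68-point landmark array
for name in ("right_eye", "left_eye", "mouth"):
    print(name, face_utils.FACIAL_LANDMARKS_IDXS[name])
# right_eye (36, 42)
# left_eye (42, 48)
# mouth (48, 68)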

The basic principle of blink detection is to compute the eye aspect ratio (EAR). While the eye is open, the EAR fluctuates around a roughly constant value; when the eye closes, the EAR drops rapidly, approaching zero in theory. In practice the open/closed boundary falls at about 0.25, so this system sets its threshold to 0.25.

The EAR is computed as:

EAR = (||p2 - p6|| + ||p3 - p5||) / (2 * ||p1 - p4||)

where p1~p6 are the six landmarks of one eye: p1 and p4 are the horizontal corners, while (p2, p6) and (p3, p5) are the upper/lower point pairs. This matches count_EAR below, where A = ||p2 - p6||, B = ||p3 - p5||, and C = ||p1 - p4||.

(2) Implementation principle

Each frame returned by the camera is read in turn and its EAR value computed. While the EAR stays below the threshold, a consecutive-frame counter is incremented; when the EAR rises back above the threshold after at least 2 consecutive low frames, one blink is counted.

Opening the mouth and shaking the head left or right are handled the same way: obtain the landmarks of the relevant region through dlib, compute its aspect (or turn) ratio, and compare it with the system's preset threshold. While the ratio is past the threshold, the consecutive-frame counter increments; once it exceeds the specified value, the action is judged valid and recorded (see the check_* methods below).

Because the user must complete a series of actions, paper or electronic photos essentially cannot pass this liveness test.

With video, however, an attacker could try to trick the system with a pre-recorded clip of the required action sequence. The team's countermeasures are as follows:

The system requires the user to shake the head left, shake the head right, blink, and open the mouth, where the blink and mouth-open actions must each be repeated a specified number of times. The order of the actions is randomly shuffled, and the required blink and mouth-open counts are also random numbers.

As a result, every run of the interactive liveness detection demands a different action plan with different repetition counts. If the user fails to finish within the system's time limit, the liveness detection is judged to have failed. After more than 3 failed attempts, the system flags the current user as risky and locks the account; locked users must be unlocked by an administrator through the admin system.

These measures also let the system resist and block video spoofing attacks.
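The account-locking code itself is not shown in this post. The following is only a minimal sketch of the rule just described, assuming a hypothetical UserAccount record (MAX_FAILURES, fail_count, and is_locked are illustrative names):

from dataclasses import dataclass

@dataclass
class UserAccount:
    fail_count: int = 0
    is_locked: bool = False

MAX_FAILURES = 3

def record_liveness_failure(user: UserAccount) -> None:
    """Count one failed liveness attempt; lock the account once the limit is exceeded."""
    user.fail_count += 1
    if user.fail_count > MAX_FAILURES:
        user.is_locked = True  # cleared only by an administrator

def record_liveness_success(user: UserAccount) -> None:
    user.fail_count = 0  # reset the failure counter on success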

(3) Detailed code explanation

a. Initialization

The parameters to initialize include: the landmark predictor self.predictor and face detector self.detector, the blink/mouth thresholds, the total action counters, the consecutive-frame thresholds, the consecutive-frame counters, the current total frame count, the random detection values, and the facial landmark indices.

Feature point detector: the 68 facial feature points are detected and located with dlib's shape_predictor_68_face_landmarks model, which must be loaded first. Because loading the model is slow, a logical check is added: after the first liveness detection run, the already-loaded attributes are reused, cutting the initialization time.

Facial feature point indices: the index ranges used to slice out the regions of the current user's face.

Blink/mouth thresholds, consecutive-frame counters: the EAR and MAR thresholds for blinking and opening the mouth; the matching consecutive-frame counter increments each frame the user's action is past its threshold.

Consecutive-frame threshold: when the consecutive count exceeds this number of frames and the ratio then returns past the threshold, the action counts as one blink or one mouth-open.

Total action counters: the number of times the user has completed each required action.

Current total frame count: the number of frames elapsed since liveness detection began; if it exceeds the limit specified by the system, the liveness detection is judged to have failed.

Detection random values: the random blink count, the random mouth-open count, and the randomly shuffled action sequence, e.g. (turn head right → blink → open mouth → turn head left), (turn head right → blink → turn head left → open mouth), (blink → open mouth → turn head right → turn head left), etc.

Project structure: the shape_predictor_68_face_landmarks.dat file sits in the data directory of the project.
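A sketch of the layout implied by the paths in the code (the folder names come from the code in this post; setting.conf and the .dat model are named explicitly, the remaining file names are omitted):

project/
├── conf/      setting.conf lives here
├── data/      shape_predictor_68_face_landmarks.dat lives here
├── photo/     temporary snapshots sent to the Baidu API
├── logic/     UI logic code
└── util/      the BaiduApiUtil tool class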

    # local (interactive) liveness detection
    def detect_face_local(self):
        self.detect_start_time = time()

        QApplication.processEvents()
        self.remind_label.setText("正在初始化\n请稍后")
        # the landmark predictor is slow to load the first time;
        # the None checks let later runs reuse the already-loaded objects
        if self.detector is None:
            self.detector = dlib.get_frontal_face_detector()
        if self.predictor is None:
            global DATA_FOLDER_PATH
            self.predictor = dlib.shape_predictor(DATA_FOLDER_PATH + 'shape_predictor_68_face_landmarks.dat')

        # blink / mouth-open thresholds
        self.EAR_THRESH = 0.25
        self.MOUTH_THRESH = 0.7

        # total action counters
        self.eye_flash_counter = 0
        self.mouth_open_counter = 0
        self.turn_left_counter = 0
        self.turn_right_counter = 0

        # consecutive-frame thresholds
        self.EAR_CONSTANT_FRAMES = 2
        self.MOUTH_CONSTANT_FRAMES = 2
        self.LEFT_CONSTANT_FRAMES = 4
        self.RIGHT_CONSTANT_FRAMES = 4

        # consecutive-frame counters
        self.eye_flash_continuous_frame = 0
        self.mouth_open_continuous_frame = 0
        self.turn_left_continuous_frame = 0
        self.turn_right_continuous_frame = 0

        print("活体检测 初始化时间:", time() - self.detect_start_time)

        # current total frame count
        total_frame_counter = 0

        # set the random values
        now_flag = 0
        random_type = [0, 1, 2, 3]
        random.shuffle(random_type)

        random_eye_flash_number = random.randint(4, 6)
        random_mouth_open_number = random.randint(2, 4)
        QMessageBox.about(self, '提示', '请按照指示执行相关动作')
        self.remind_label.setText("")

        # index ranges of the facial landmarks to extract
        (lStart, lEnd) = face_utils.FACIAL_LANDMARKS_IDXS["left_eye"]
        (rStart, rEnd) = face_utils.FACIAL_LANDMARKS_IDXS["right_eye"]
        (mStart, mEnd) = face_utils.FACIAL_LANDMARKS_IDXS["mouth"]

b. Computing the EAR, MAR, and FR values

Taking the eyes as an example: extract the eye landmarks and apply the EAR formula above to obtain the EAR value. The MAR (mouth aspect ratio) and FR (left/right turn ratio) are computed analogously.

    # compute the eye aspect ratio (EAR)
    @staticmethod
    def count_EAR(eye):
        A = dist.euclidean(eye[1], eye[5])
        B = dist.euclidean(eye[2], eye[4])
        C = dist.euclidean(eye[0], eye[3])
        EAR = (A + B) / (2.0 * C)
        return EAR

    # compute the mouth aspect ratio (MAR)
    @staticmethod
    def count_MAR(mouth):
        A = dist.euclidean(mouth[1], mouth[11])
        B = dist.euclidean(mouth[2], mouth[10])
        C = dist.euclidean(mouth[3], mouth[9])
        D = dist.euclidean(mouth[4], mouth[8])
        E = dist.euclidean(mouth[5], mouth[7])
        F = dist.euclidean(mouth[0], mouth[6])  # horizontal Euclidean distance
        ratio = (A + B + C + D + E) / (5.0 * F)
        return ratio

    # compute the left/right face turn ratio (FR)
    @staticmethod
    def count_FR(face):
        rightA = dist.euclidean(face[0], face[27])
        rightB = dist.euclidean(face[2], face[30])
        rightC = dist.euclidean(face[4], face[48])
        leftA = dist.euclidean(face[16], face[27])
        leftB = dist.euclidean(face[14], face[30])
        leftC = dist.euclidean(face[12], face[54])
        ratioA = rightA / leftA
        ratioB = rightB / leftB
        ratioC = rightC / leftC
        ratio = (ratioA + ratioB + ratioC) / 3
        return ratio
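To make the 0.25 threshold concrete, here is a small standalone check with made-up eye coordinates (the numbers are purely illustrative): an open eye gives an EAR well above the threshold, while nearly closed lids push it toward zero.

from scipy.spatial import distance as dist

def count_EAR(eye):
    A = dist.euclidean(eye[1], eye[5])
    B = dist.euclidean(eye[2], eye[4])
    C = dist.euclidean(eye[0], eye[3])
    return (A + B) / (2.0 * C)

# six points per eye: [corner, upper, upper, corner, lower, lower]
open_eye = [(0, 3), (2, 4), (4, 4), (6, 3), (4, 2), (2, 2)]
closed_eye = [(0, 3), (2, 3.2), (4, 3.2), (6, 3), (4, 2.9), (2, 2.9)]

print(count_EAR(open_eye))    # (2 + 2) / (2 * 6) ≈ 0.33: above 0.25, eye open
print(count_EAR(closed_eye))  # (0.3 + 0.3) / (2 * 6) = 0.05: below 0.25, eye closed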

c. User action judgment

    # counts one blink once the EAR has stayed below the threshold for >= EAR_CONSTANT_FRAMES frames
    def check_eye_flash(self, average_EAR):
        if average_EAR < self.EAR_THRESH:
            self.eye_flash_continuous_frame += 1
        else:
            if self.eye_flash_continuous_frame >= self.EAR_CONSTANT_FRAMES:
                self.eye_flash_counter += 1
            self.eye_flash_continuous_frame = 0

    # counts one mouth-open once the MAR has stayed above the threshold for >= MOUTH_CONSTANT_FRAMES frames
    def check_mouth_open(self, mouth_MAR):
        if mouth_MAR > self.MOUTH_THRESH:
            self.mouth_open_continuous_frame += 1
        else:
            if self.mouth_open_continuous_frame >= self.MOUTH_CONSTANT_FRAMES:
                self.mouth_open_counter += 1
            self.mouth_open_continuous_frame = 0

    # counts one right head turn once FR has stayed <= 0.5 for >= RIGHT_CONSTANT_FRAMES frames
    def check_right_turn(self, leftRight_FR):
        if leftRight_FR <= 0.5:
            self.turn_right_continuous_frame += 1
        else:
            if self.turn_right_continuous_frame >= self.RIGHT_CONSTANT_FRAMES:
                self.turn_right_counter += 1
            self.turn_right_continuous_frame = 0

    # counts one left head turn once FR has stayed >= 2.0 for >= LEFT_CONSTANT_FRAMES frames
    def check_left_turn(self, leftRight_FR):
        if leftRight_FR >= 2.0:
            self.turn_left_continuous_frame += 1
        else:
            if self.turn_left_continuous_frame >= self.LEFT_CONSTANT_FRAMES:
                self.turn_left_counter += 1
            self.turn_left_continuous_frame = 0

d. Liveness detection and judgment

With the camera on, liveness detection runs frame by frame. The loop exits only when the user passes the detection or the attempt times out.

        while self.cap.isOpened():
            ret, frame = self.cap.read()
            if not ret:
                continue  # skip frames that failed to read
            total_frame_counter += 1
            frame = imutils.resize(frame)
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            rects = self.detector(gray, 0)

            if len(rects) == 1:
                QApplication.processEvents()
                shape = self.predictor(gray, rects[0])
                shape = face_utils.shape_to_np(shape)

                # extract the region coordinates
                left_eye = shape[lStart:lEnd]
                right_eye = shape[rStart:rEnd]
                mouth = shape[mStart:mEnd]

                # compute the aspect/turn ratios
                left_EAR = self.count_EAR(left_eye)
                right_EAR = self.count_EAR(right_eye)
                mouth_MAR = self.count_MAR(mouth)
                leftRight_FR = self.count_FR(shape)
                average_EAR = (left_EAR + right_EAR) / 2.0

                # convex hulls of the left eye, right eye, and mouth
                left_eye_hull = cv2.convexHull(left_eye)
                right_eye_hull = cv2.convexHull(right_eye)
                mouth_hull = cv2.convexHull(mouth)

                # visualization
                cv2.drawContours(frame, [left_eye_hull], -1, (0, 255, 0), 1)
                cv2.drawContours(frame, [right_eye_hull], -1, (0, 255, 0), 1)
                cv2.drawContours(frame, [mouth_hull], -1, (0, 255, 0), 1)

                # all four random actions completed: pass
                if now_flag >= 4:
                    self.remind_label.setText("")
                    QMessageBox.about(self, '提示', '已通过活体检测')
                    self.turn_right_counter = 0
                    self.mouth_open_counter = 0
                    self.eye_flash_counter = 0
                    return True

                # turn head left
                if random_type[now_flag] == 0:
                    if self.turn_left_counter > 0:
                        now_flag += 1
                    else:
                        self.remind_label.setText("请向左摇头")
                        self.check_left_turn(leftRight_FR)
                        # reset the counters of the other actions
                        self.turn_right_counter = 0
                        self.mouth_open_counter = 0
                        self.eye_flash_counter = 0

                # turn head right
                elif random_type[now_flag] == 1:
                    if self.turn_right_counter > 0:
                        now_flag += 1
                    else:
                        self.remind_label.setText("请向右摇头")
                        self.check_right_turn(leftRight_FR)
                        self.turn_left_counter = 0
                        self.mouth_open_counter = 0
                        self.eye_flash_counter = 0

                # open the mouth the random number of times
                elif random_type[now_flag] == 2:
                    if self.mouth_open_counter >= random_mouth_open_number:
                        now_flag += 1
                    else:
                        self.remind_label.setText("已张嘴{}次\n还需张嘴{}次".format(self.mouth_open_counter, (
                                random_mouth_open_number - self.mouth_open_counter)))
                        self.check_mouth_open(mouth_MAR)
                        self.turn_right_counter = 0
                        self.turn_left_counter = 0
                        self.eye_flash_counter = 0

                # blink the random number of times
                elif random_type[now_flag] == 3:
                    if self.eye_flash_counter >= random_eye_flash_number:
                        now_flag += 1
                    else:
                        self.remind_label.setText("已眨眼{}次\n还需眨眼{}次".format(self.eye_flash_counter, (
                                random_eye_flash_number - self.eye_flash_counter)))
                        self.check_eye_flash(average_EAR)
                        self.turn_right_counter = 0
                        self.turn_left_counter = 0
                        self.mouth_open_counter = 0

            elif len(rects) == 0:
                QApplication.processEvents()
                self.remind_label.setText("没有检测到人脸!")

            elif len(rects) > 1:
                QApplication.processEvents()
                self.remind_label.setText("检测到超过一张人脸!")

            # display the current frame in the UI
            show_video = cv2.cvtColor(cv2.resize(frame, (self.WIN_WIDTH, self.WIN_HEIGHT)), cv2.COLOR_BGR2RGB)
            self.show_image = QImage(show_video.data, show_video.shape[1], show_video.shape[0], QImage.Format_RGB888)
            self.camera_label.setPixmap(QPixmap.fromImage(self.show_image))

            # timeout: fail once too many frames have elapsed
            if total_frame_counter >= 1000:
                QMessageBox.about(self, '警告', '已超时,未通过活体检测')
                self.remind_label.setText("")
                return False

(4) Complete code

    # compute the eye aspect ratio (EAR)
    @staticmethod
    def count_EAR(eye):
        A = dist.euclidean(eye[1], eye[5])
        B = dist.euclidean(eye[2], eye[4])
        C = dist.euclidean(eye[0], eye[3])
        EAR = (A + B) / (2.0 * C)
        return EAR

    # compute the mouth aspect ratio (MAR)
    @staticmethod
    def count_MAR(mouth):
        A = dist.euclidean(mouth[1], mouth[11])
        B = dist.euclidean(mouth[2], mouth[10])
        C = dist.euclidean(mouth[3], mouth[9])
        D = dist.euclidean(mouth[4], mouth[8])
        E = dist.euclidean(mouth[5], mouth[7])
        F = dist.euclidean(mouth[0], mouth[6])  # horizontal Euclidean distance
        ratio = (A + B + C + D + E) / (5.0 * F)
        return ratio

    # compute the left/right face turn ratio (FR)
    @staticmethod
    def count_FR(face):
        rightA = dist.euclidean(face[0], face[27])
        rightB = dist.euclidean(face[2], face[30])
        rightC = dist.euclidean(face[4], face[48])
        leftA = dist.euclidean(face[16], face[27])
        leftB = dist.euclidean(face[14], face[30])
        leftC = dist.euclidean(face[12], face[54])
        ratioA = rightA / leftA
        ratioB = rightB / leftB
        ratioC = rightC / leftC
        ratio = (ratioA + ratioB + ratioC) / 3
        return ratio

    # local (interactive) liveness detection
    def detect_face_local(self):
        self.detect_start_time = time()

        QApplication.processEvents()
        self.remind_label.setText("正在初始化\n请稍后")
        # the landmark predictor is slow to load the first time;
        # the None checks let later runs reuse the already-loaded objects
        if self.detector is None:
            self.detector = dlib.get_frontal_face_detector()
        if self.predictor is None:
            global DATA_FOLDER_PATH
            self.predictor = dlib.shape_predictor(DATA_FOLDER_PATH + 'shape_predictor_68_face_landmarks.dat')

        # blink / mouth-open thresholds
        self.EAR_THRESH = 0.25
        self.MOUTH_THRESH = 0.7

        # total action counters
        self.eye_flash_counter = 0
        self.mouth_open_counter = 0
        self.turn_left_counter = 0
        self.turn_right_counter = 0

        # consecutive-frame thresholds
        self.EAR_CONSTANT_FRAMES = 2
        self.MOUTH_CONSTANT_FRAMES = 2
        self.LEFT_CONSTANT_FRAMES = 4
        self.RIGHT_CONSTANT_FRAMES = 4

        # consecutive-frame counters
        self.eye_flash_continuous_frame = 0
        self.mouth_open_continuous_frame = 0
        self.turn_left_continuous_frame = 0
        self.turn_right_continuous_frame = 0

        print("活体检测 初始化时间:", time() - self.detect_start_time)

        # current total frame count
        total_frame_counter = 0

        # set the random values
        now_flag = 0
        random_type = [0, 1, 2, 3]
        random.shuffle(random_type)

        random_eye_flash_number = random.randint(4, 6)
        random_mouth_open_number = random.randint(2, 4)
        QMessageBox.about(self, '提示', '请按照指示执行相关动作')
        self.remind_label.setText("")

        # index ranges of the facial landmarks to extract
        (lStart, lEnd) = face_utils.FACIAL_LANDMARKS_IDXS["left_eye"]
        (rStart, rEnd) = face_utils.FACIAL_LANDMARKS_IDXS["right_eye"]
        (mStart, mEnd) = face_utils.FACIAL_LANDMARKS_IDXS["mouth"]

        while self.cap.isOpened():
            ret, frame = self.cap.read()
            if not ret:
                continue  # skip frames that failed to read
            total_frame_counter += 1
            frame = imutils.resize(frame)
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            rects = self.detector(gray, 0)

            if len(rects) == 1:
                QApplication.processEvents()
                shape = self.predictor(gray, rects[0])
                shape = face_utils.shape_to_np(shape)

                # extract the region coordinates
                left_eye = shape[lStart:lEnd]
                right_eye = shape[rStart:rEnd]
                mouth = shape[mStart:mEnd]

                # compute the aspect/turn ratios
                left_EAR = self.count_EAR(left_eye)
                right_EAR = self.count_EAR(right_eye)
                mouth_MAR = self.count_MAR(mouth)
                leftRight_FR = self.count_FR(shape)
                average_EAR = (left_EAR + right_EAR) / 2.0

                # convex hulls of the left eye, right eye, and mouth
                left_eye_hull = cv2.convexHull(left_eye)
                right_eye_hull = cv2.convexHull(right_eye)
                mouth_hull = cv2.convexHull(mouth)

                # visualization
                cv2.drawContours(frame, [left_eye_hull], -1, (0, 255, 0), 1)
                cv2.drawContours(frame, [right_eye_hull], -1, (0, 255, 0), 1)
                cv2.drawContours(frame, [mouth_hull], -1, (0, 255, 0), 1)

                # all four random actions completed: pass
                if now_flag >= 4:
                    self.remind_label.setText("")
                    QMessageBox.about(self, '提示', '已通过活体检测')
                    self.turn_right_counter = 0
                    self.mouth_open_counter = 0
                    self.eye_flash_counter = 0
                    return True

                # turn head left
                if random_type[now_flag] == 0:
                    if self.turn_left_counter > 0:
                        now_flag += 1
                    else:
                        self.remind_label.setText("请向左摇头")
                        self.check_left_turn(leftRight_FR)
                        # reset the counters of the other actions
                        self.turn_right_counter = 0
                        self.mouth_open_counter = 0
                        self.eye_flash_counter = 0

                # turn head right
                elif random_type[now_flag] == 1:
                    if self.turn_right_counter > 0:
                        now_flag += 1
                    else:
                        self.remind_label.setText("请向右摇头")
                        self.check_right_turn(leftRight_FR)
                        self.turn_left_counter = 0
                        self.mouth_open_counter = 0
                        self.eye_flash_counter = 0

                # open the mouth the random number of times
                elif random_type[now_flag] == 2:
                    if self.mouth_open_counter >= random_mouth_open_number:
                        now_flag += 1
                    else:
                        self.remind_label.setText("已张嘴{}次\n还需张嘴{}次".format(self.mouth_open_counter, (
                                random_mouth_open_number - self.mouth_open_counter)))
                        self.check_mouth_open(mouth_MAR)
                        self.turn_right_counter = 0
                        self.turn_left_counter = 0
                        self.eye_flash_counter = 0

                # blink the random number of times
                elif random_type[now_flag] == 3:
                    if self.eye_flash_counter >= random_eye_flash_number:
                        now_flag += 1
                    else:
                        self.remind_label.setText("已眨眼{}次\n还需眨眼{}次".format(self.eye_flash_counter, (
                                random_eye_flash_number - self.eye_flash_counter)))
                        self.check_eye_flash(average_EAR)
                        self.turn_right_counter = 0
                        self.turn_left_counter = 0
                        self.mouth_open_counter = 0

            elif len(rects) == 0:
                QApplication.processEvents()
                self.remind_label.setText("没有检测到人脸!")

            elif len(rects) > 1:
                QApplication.processEvents()
                self.remind_label.setText("检测到超过一张人脸!")

            # display the current frame in the UI
            show_video = cv2.cvtColor(cv2.resize(frame, (self.WIN_WIDTH, self.WIN_HEIGHT)), cv2.COLOR_BGR2RGB)
            self.show_image = QImage(show_video.data, show_video.shape[1], show_video.shape[0], QImage.Format_RGB888)
            self.camera_label.setPixmap(QPixmap.fromImage(self.show_image))

            # timeout: fail once too many frames have elapsed
            if total_frame_counter >= 1000:
                QMessageBox.about(self, '警告', '已超时,未通过活体检测')
                self.remind_label.setText("")
                return False

    # counts one blink once the EAR has stayed below the threshold for >= EAR_CONSTANT_FRAMES frames
    def check_eye_flash(self, average_EAR):
        if average_EAR < self.EAR_THRESH:
            self.eye_flash_continuous_frame += 1
        else:
            if self.eye_flash_continuous_frame >= self.EAR_CONSTANT_FRAMES:
                self.eye_flash_counter += 1
            self.eye_flash_continuous_frame = 0

    # counts one mouth-open once the MAR has stayed above the threshold for >= MOUTH_CONSTANT_FRAMES frames
    def check_mouth_open(self, mouth_MAR):
        if mouth_MAR > self.MOUTH_THRESH:
            self.mouth_open_continuous_frame += 1
        else:
            if self.mouth_open_continuous_frame >= self.MOUTH_CONSTANT_FRAMES:
                self.mouth_open_counter += 1
            self.mouth_open_continuous_frame = 0

    # counts one right head turn once FR has stayed <= 0.5 for >= RIGHT_CONSTANT_FRAMES frames
    def check_right_turn(self, leftRight_FR):
        if leftRight_FR <= 0.5:
            self.turn_right_continuous_frame += 1
        else:
            if self.turn_right_continuous_frame >= self.RIGHT_CONSTANT_FRAMES:
                self.turn_right_counter += 1
            self.turn_right_continuous_frame = 0

    # counts one left head turn once FR has stayed >= 2.0 for >= LEFT_CONSTANT_FRAMES frames
    def check_left_turn(self, leftRight_FR):
        if leftRight_FR >= 2.0:
            self.turn_left_continuous_frame += 1
        else:
            if self.turn_left_continuous_frame >= self.LEFT_CONSTANT_FRAMES:
                self.turn_left_counter += 1
            self.turn_left_continuous_frame = 0

Continue reading:

Client side logic:

Admin side logic:

Note: The above code is for reference only. To run it, please refer to the complete source code of the project on GitHub: Python | Face Recognition System — Administrator Operation

Origin: blog.csdn.net/sun80760/article/details/130492797