Python | Face recognition system - background blur

Blog Summary: Python | Face Recognition System — Blog Index

GitHub address: Su-Face-Recognition

Note: Please read the following posts before reading this blog:

Tool Installation, Environment Configuration: Python | Face Recognition System—Introduction

UI Interface Design: Python | Face Recognition System — UI Interface Design

UI event processing: Python | Face recognition system - UI event processing

Camera screen display: Python | Face recognition system - camera screen display

1. Judgment Logic

        First check whether the camera has been turned on, then check whether background blur is already enabled via self.isFineSegmentation_flag. If it is not enabled, call the fine_segmentation method to blur the background; if it is, restore the normal camera display with show_camera.

        Initialization parameters:

  • Camera object: self.cap = cv2.VideoCapture()
  • Camera index: self.source = CAPTURE_SOURCE (see the sketch after this list)
  • Width of the camera display area: self.WIN_WIDTH = 800
  • Height of the camera display area: self.WIN_HEIGHT = 500
  • Background blur flag: self.isFineSegmentation_flag = False
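
        The constant CAPTURE_SOURCE is defined in the earlier camera screen display post. As a minimal sketch, assuming the default local webcam is used (the actual value is in the GitHub source), it could simply be:

CAPTURE_SOURCE = 0  # assumption: 0 selects the default local webcam; a video file path or stream URL also works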

        Related functions:

  • Background blur discriminator: fine_segmentation_judge()
  • Show the camera image: self.show_camera()
  • Background blur main function: fine_segmentation()
import cv2
import mediapipe as mp
# The Qt classes (QMainWindow, QMessageBox, ...) and the generated UI class UserMainUi
# are imported as shown in the earlier UI design and camera display posts.

# Main window
class UserMainWindow(QMainWindow, UserMainUi):

    def __init__(self, parent=None):
        super(UserMainWindow, self).__init__(parent)
        self.setupUi(self)

        self.show_image = None

        self.cap = cv2.VideoCapture()  # camera object
        self.source = CAPTURE_SOURCE  # camera index
        self.WIN_WIDTH = 800  # width of the camera display area
        self.WIN_HEIGHT = 500  # height of the camera display area
        self.isFineSegmentation_flag = False  # flag: is background blur enabled

    ... ...

    # Background blur discriminator
    def fine_segmentation_judge(self):
        if not self.cap.isOpened():
            QMessageBox.information(self, "Notice", self.tr("Please turn on the camera first"))
        else:
            if not self.isFineSegmentation_flag:
                self.isFineSegmentation_flag = True
                self.fine_segmentation_button.setText("Turn off background blur")  # button text while blur is running
                self.fine_segmentation()  # start background blur
                self.fine_segmentation_button.setText("Background blur")  # restore the button text when blur stops
                self.isFineSegmentation_flag = False

            elif self.isFineSegmentation_flag:
                self.isFineSegmentation_flag = False
                self.fine_segmentation_button.setText("Background blur")
                self.show_camera()  # show the normal camera image
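
        The fine_segmentation_judge method is the handler behind the background blur button. A minimal sketch of how it might be connected in __init__, assuming the button created in the UI design post and PyQt's standard signal/slot wiring (the exact wiring is in the GitHub source):

        # assumed to sit in UserMainWindow.__init__, after self.setupUi(self)
        self.fine_segmentation_button.clicked.connect(self.fine_segmentation_judge)

        Clicking the button again while blur is running re-enters fine_segmentation_judge through the QApplication.processEvents() call inside the blur loop below, which is what makes the toggle work without a separate thread.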

2. Background Blur

import cv2
import numpy as np
import mediapipe as mp

    ... ...


    # Background blur
    def fine_segmentation(self):
        mp_selfie_segmentation = mp.solutions.selfie_segmentation
        BG_COLOR = (192, 192, 192)  # gray fallback background
        with mp_selfie_segmentation.SelfieSegmentation(model_selection=1) as selfie_segmentation:
            # Loop while the camera is open and background blur is still enabled
            while self.cap.isOpened() and self.isFineSegmentation_flag:
                ret, frame = self.cap.read()
                QApplication.processEvents()
                if not ret:  # skip invalid frames
                    continue
                # Convert the BGR image to RGB
                in_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
                # To improve performance, mark the image as not writeable so it can be passed by reference
                in_frame.flags.writeable = False
                results = selfie_segmentation.process(in_frame)

                in_frame.flags.writeable = True
                in_frame = cv2.cvtColor(in_frame, cv2.COLOR_RGB2BGR)
                # Build a 3-channel foreground mask from the segmentation result
                condition = np.stack((results.segmentation_mask,) * 3, axis=-1) > 0.1
                # Blur the whole frame with a large Gaussian kernel (the kernel size must be odd)
                gauss_image = cv2.GaussianBlur(in_frame, (85, 85), 0)
                if gauss_image is None:
                    gauss_image = np.zeros(in_frame.shape, dtype=np.uint8)
                    gauss_image[:] = BG_COLOR
                # Keep original pixels in the foreground, blurred pixels in the background
                out_frame = np.where(condition, in_frame, gauss_image)

                show_video = cv2.cvtColor(cv2.resize(out_frame, (self.WIN_WIDTH, self.WIN_HEIGHT)), cv2.COLOR_BGR2RGB)
                self.show_image = QImage(show_video.data, show_video.shape[1], show_video.shape[0],
                                         QImage.Format_RGB888)
                self.camera_label.setPixmap(QPixmap.fromImage(self.show_image))
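
        To check the segmentation and blur pipeline on its own, outside the Qt window, the following standalone sketch reads the default webcam and shows the result with cv2.imshow (the camera index 0, the window title, and the Esc-to-quit key are my own choices, not part of the original project):

import cv2
import numpy as np
import mediapipe as mp


def blur_background_demo(source=0):
    # Standalone test of the same MediaPipe segmentation + Gaussian blur pipeline
    cap = cv2.VideoCapture(source)
    mp_selfie_segmentation = mp.solutions.selfie_segmentation
    with mp_selfie_segmentation.SelfieSegmentation(model_selection=1) as seg:
        while cap.isOpened():
            ret, frame = cap.read()
            if not ret:
                break
            results = seg.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            # Foreground where the mask confidence exceeds 0.1, blurred frame elsewhere
            condition = np.stack((results.segmentation_mask,) * 3, axis=-1) > 0.1
            blurred = cv2.GaussianBlur(frame, (85, 85), 0)
            out = np.where(condition, frame, blurred)
            cv2.imshow("background blur test", out)
            if cv2.waitKey(1) & 0xFF == 27:  # press Esc to quit
                break
    cap.release()
    cv2.destroyAllWindows()


if __name__ == "__main__":
    blur_background_demo()

        model_selection=1 selects MediaPipe's landscape segmentation model, which uses a smaller input resolution and is faster than the general model (model_selection=0); the kernel size passed to cv2.GaussianBlur must be odd, which is why (85, 85) is used.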

After reading this blog you can continue reading:

Client side logic:

Admin side logic:

Note: The above code is for reference only. To run it, refer to the GitHub source code: Su-Face-Recognition

Origin: blog.csdn.net/sun80760/article/details/130495172